Math Is Fun Forum

  Discussion about math, puzzles, games and fun.


#1126 2021-08-29 00:41:24

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,418

Re: Miscellany

1103) Fossil fuel

Fossil fuel, any of a class of hydrocarbon-containing materials of biological origin occurring within Earth’s crust that can be used as a source of energy.

Fossil fuels include coal, petroleum, natural gas, oil shales, bitumens, tar sands, and heavy oils. All contain carbon and were formed as a result of geologic processes acting on the remains of organic matter produced by photosynthesis, a process that began in the Archean Eon (4.0 billion to 2.5 billion years ago). Most carbonaceous material occurring before the Devonian Period (419.2 million to 358.9 million years ago) was derived from algae and bacteria, whereas most carbonaceous material occurring during and after that interval was derived from plants.

All fossil fuels can be burned in air or with oxygen derived from air to provide heat. This heat may be employed directly, as in the case of home furnaces, or used to produce steam to drive generators that can supply electricity. In still other cases—for example, gas turbines used in jet aircraft—the heat yielded by burning a fossil fuel serves to increase both the pressure and the temperature of the combustion products to furnish motive power.

Since the beginning of the Industrial Revolution in Great Britain in the second half of the 18th century, fossil fuels have been consumed at an ever-increasing rate. Today they supply more than 80 percent of all the energy consumed by the industrially developed countries of the world. Although new deposits continue to be discovered, the reserves of the principal fossil fuels remaining on Earth are limited. The amounts of fossil fuels that can be recovered economically are difficult to estimate, largely because of changing rates of consumption and future value as well as technological developments. Advances in technology—such as hydraulic fracturing (fracking), rotary drilling, and directional drilling—have made it possible to extract smaller and difficult-to-obtain deposits of fossil fuels at a reasonable cost, thereby increasing the amount of recoverable material. In addition, as recoverable supplies of conventional (light-to-medium) oil became depleted, some petroleum-producing companies shifted to extracting heavy oil, as well as liquid petroleum pulled from tar sands and oil shales.

One of the main by-products of fossil fuel combustion is carbon dioxide (CO2). The ever-increasing use of fossil fuels in industry, transportation, and construction has added large amounts of CO2 to Earth’s atmosphere. Atmospheric CO2 concentrations fluctuated between 275 and 290 parts per million by volume (ppmv) of dry air between 1000 CE and the late 18th century but increased to 316 ppmv by 1959 and rose to 412 ppmv in 2018. CO2 behaves as a greenhouse gas—that is, it absorbs infrared radiation (net heat energy) emitted from Earth’s surface and reradiates it back to the surface. Thus, the substantial CO2 increase in the atmosphere is a major contributing factor to human-induced global warming. Methane (CH4), another potent greenhouse gas, is the chief constituent of natural gas, and CH4 concentrations in Earth’s atmosphere rose from 722 parts per billion (ppb) before 1750 to 1,859 ppb by 2018. To counter worries over rising greenhouse gas concentrations and to diversify their energy mix, many countries have sought to reduce their dependence on fossil fuels by developing sources of renewable energy (such as wind, solar, hydroelectric, tidal, geothermal, and biofuels) while at the same time increasing the mechanical efficiency of engines and other technologies that rely on fossil fuels.

[Image: fossil fuels]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1127 2021-08-30 00:46:21

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,418

Re: Miscellany

1104) Mind

Mind, in the Western tradition, the complex of faculties involved in perceiving, remembering, considering, evaluating, and deciding. Mind is in some sense reflected in such occurrences as sensations, perceptions, emotions, memory, desires, various types of reasoning, motives, choices, traits of personality, and the unconscious.

A brief treatment of mind follows; the subject is treated at greater length in related articles.

To the extent that mind is manifested in observable phenomena, it has frequently been regarded as a peculiarly human possession. Some theories, however, posit the existence of mind in other animals besides human beings. One theory regards mind as a universal property of matter. According to another view, there may be superhuman minds or intelligences, or a single absolute mind, a transcendent intelligence.

Common Assumptions Among Theories Of Mind

Several assumptions are indispensable to any discussion of the concept of mind. First is the assumption of thought or thinking. If there were no evidence of thought in the world, mind would have little or no meaning. The recognition of this fact throughout history accounts for the development of diverse theories of mind. It may be supposed that such words as “thought” or “thinking” cannot, because of their own ambiguity, help to define the sphere of mind. But whatever the relation of thinking to sensing, thinking seems to involve more—for almost all observers—than a mere reception of impressions from without. This seems to be the opinion of those who make thinking a consequence of sensing, as well as of those who regard thought as independent of sense. For both, thinking goes beyond sensing, either as an elaboration of the materials of sense or as an apprehension of objects that are totally beyond the reach of the senses.

The second assumption that seems to be a root common to all conceptions of mind is that of knowledge or knowing. This may be questioned on the ground that, if there were sensation without any form of thought, judgment, or reasoning, there would be at least a rudimentary form of knowledge—some degree of consciousness or awareness by one thing of another. If one grants the point of this objection, it nevertheless seems true that the distinction between truth and falsity and the difference between knowledge, error, and ignorance or between knowledge, belief, and opinion do not apply to sensations in the total absence of thought. Any understanding of knowledge that involves these distinctions seems to imply mind for the same reason that it implies thought. There is a further implication of mind in the fact of self-knowledge. Sensing may be awareness of an object, and to this extent it may be a kind of knowing, but it has never been observed that the senses can sense or be aware of themselves.

Thought seems to be not only reflective but reflexive, that is, able to consider itself, to define the nature of thinking, and to develop theories of mind. This fact about thought—its reflexivity—also seems to be a common element in all the meanings of “mind.” It is sometimes referred to as “the reflexivity of the intellect,” as “the reflexive power of the understanding,” as “the ability of the understanding to reflect upon its own acts,” or as “self-consciousness.” Whatever the phrasing, a world without self-consciousness or self-knowledge would be a world in which the traditional conception of mind would probably not have arisen.

The third assumption is that of purpose or intention, of planning a course of action with foreknowledge of its goal or of working in any other way toward a desired and foreseen objective. As in the case of sensitivity, the phenomena of desire do not, without further qualification, indicate the realm of mind. According to the theory of natural desire, for example, the natural tendencies of even inanimate and insensitive things are expressions of desire. But it is not in that sense of desire that the assumption of purpose or intention is here taken as evidence of mind.

It is rather on the level of the behaviour of living things that purpose seems to require a factor over and above the senses, limited as they are to present appearances. It cannot be found in the passions, which have the same limitation as the senses, for unless they are checked they tend toward immediate emotional discharge. That factor, called for by the direction of conduct to future ends, is either an element common to all meanings of “mind” or is at least an element associated with mind. It is sometimes called the faculty of will—rational desire or the intellectual appetite. Sometimes it is treated as the act of willing, which, along with thinking, is one of the two major activities of mind or understanding; and sometimes purposiveness is regarded as the very essence of mentality.

Disputed Questions

These assumptions—thought, knowledge or self-knowledge, and purpose—seem to be common to all theories of mind. More than that, they seem to be assumptions that require the development of the conception. The conflict of theories concerning what the human mind is, what structure it has, what parts belong to it, and what whole it belongs to does not comprise the entire range of controversy on the subject. Yet enough is common to all theories of mind to permit certain other questions to be formulated: How does the mind operate? How does it do whatever is its work, and with what intrinsic excellences or defects? What is the relation of mind to matter, to bodily organs, to material conditions, or of one mind to another? Is mind a common possession of men and animals, or is whatever might be called mind in animals distinctly different from the human mind? Are there minds or a mind in existence apart from man and the whole world of corporeal life? What are the limits of so-called artificial intelligence, the capacity of machines to perform functions generally associated with mind?

The intelligibility of the positions taken in disputes over these issues depends to some degree on the divergent conceptions of the human mind from which they stem. The conclusions achieved in such fields as theory of knowledge, metaphysics, logic, ethics, and the philosophy of religion are all relevant to the philosophy of mind; and its conclusions, in turn, have important implications for those fields. Moreover, this reciprocity applies as well to its relations to such empirical disciplines as neurology, psychology, sociology, and history.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1128 2021-08-31 00:28:47

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,418

Re: Miscellany

1105) Bose-Einstein condensate

Bose-Einstein condensate (BEC), a state of matter in which separate atoms or subatomic particles, cooled to near absolute zero (0 K, −273.15 °C, or −459.67 °F; K = kelvin), coalesce into a single quantum mechanical entity—that is, one that can be described by a wave function—on a near-macroscopic scale. This form of matter was predicted in 1924 by Albert Einstein on the basis of the quantum formulations of the Indian physicist Satyendra Nath Bose.

Although it had been predicted for decades, the first atomic BEC was made only in 1995, when Eric Cornell and Carl Wieman of JILA, a research institution jointly operated by the National Institute of Standards and Technology (NIST) and the University of Colorado at Boulder, cooled a gas of rubidium atoms to 1.7 × 10⁻⁷ K above absolute zero. Along with Wolfgang Ketterle of the Massachusetts Institute of Technology (MIT), who created a BEC with sodium atoms, these researchers received the 2001 Nobel Prize for Physics. Research on BECs has expanded the understanding of quantum physics and has led to the discovery of new physical effects.

BEC theory traces back to 1924, when Bose considered how groups of photons behave. Photons belong to one of the two great classes of elementary or submicroscopic particles defined by whether their quantum spin is a nonnegative integer (0, 1, 2, …) or an odd half integer (1/2, 3/2, …). The former type, called bosons, includes photons, whose spin is 1. The latter type, called fermions, includes electrons, whose spin is 1/2.
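
To make the classification above concrete, here is a minimal Python sketch (not from the original article) that sorts a few particles into the two classes by testing whether their spin is an integer; the spin values are standard reference figures.

# Classify particles as bosons (integer spin) or fermions (half-odd-integer spin).
particles = {
    "photon": 1.0,
    "electron": 0.5,
    "proton": 0.5,
    "neutron": 0.5,
    "alpha particle (helium-4 nucleus)": 0.0,
}

for name, spin in particles.items():
    kind = "boson" if float(spin).is_integer() else "fermion"
    print(f"{name}: spin {spin} -> {kind}")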

As Bose noted, the two classes behave differently (see Bose-Einstein statistics and Fermi-Dirac statistics). According to the Pauli exclusion principle, fermions tend to avoid each other, for which reason each electron in a group occupies a separate quantum state (indicated by different quantum numbers, such as the electron’s energy). In contrast, an unlimited number of bosons can have the same energy state and share a single quantum state.

Einstein soon extended Bose’s work to show that at extremely low temperatures “bosonic atoms” with even spins would coalesce into a shared quantum state at the lowest available energy. The requisite methods to produce temperatures low enough to test Einstein’s prediction did not become attainable, however, until the 1990s. One of the breakthroughs depended on the novel technique of laser cooling and trapping, in which the radiation pressure of a laser beam cools and localizes atoms by slowing them down. (For this work, French physicist Claude Cohen-Tannoudji and American physicists Steven Chu and William D. Phillips shared the 1997 Nobel Prize for Physics.) The second breakthrough depended on improvements in magnetic confinement in order to hold the atoms in place without a material container. Using these techniques, Cornell and Wieman succeeded in merging about 2,000 individual atoms into a “superatom,” a condensate large enough to observe with a microscope, that displayed distinct quantum properties. As Wieman described the achievement, “We brought it to an almost human scale. We can poke it and prod it and look at this stuff in a way no one has been able to before.”
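
To connect the temperatures quoted above with Einstein's prediction, the sketch below evaluates the textbook ideal-gas condensation temperature T_c = (2πħ²/mk_B)·(n/ζ(3/2))^(2/3) for rubidium-87. The atomic density n is a hypothetical value typical of such traps, not a figure from the JILA experiment, so the result is only an order-of-magnitude check against the 1.7 × 10⁻⁷ K mentioned above.

import math

hbar = 1.054571817e-34    # reduced Planck constant, J*s
k_B  = 1.380649e-23       # Boltzmann constant, J/K
u    = 1.66053906660e-27  # atomic mass unit, kg

m = 87 * u                # mass of a rubidium-87 atom, kg
n = 2.5e19                # assumed atomic number density, atoms per m^3 (hypothetical)
zeta_3_2 = 2.612          # Riemann zeta(3/2)

# Ideal-gas Bose-Einstein condensation temperature.
T_c = (2 * math.pi * hbar**2 / (m * k_B)) * (n / zeta_3_2) ** (2 / 3)
print(f"Estimated condensation temperature: {T_c:.2e} K")  # roughly 1.6e-7 K for these inputs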

BECs are related to two remarkable low-temperature phenomena: superfluidity, in which each of the helium isotopes ³He and ⁴He forms a liquid that flows with zero friction; and superconductivity, in which electrons move through a material with zero electrical resistance. ⁴He atoms are bosons, and although ³He atoms and electrons are fermions, they can also undergo Bose condensation if they pair up with opposite spins to form bosonlike states with zero net spin. In 2003 Deborah Jin and her colleagues at JILA used paired fermions to create the first atomic fermionic condensate.

BEC research has yielded new atomic and optical physics, such as the atom laser Ketterle demonstrated in 1996. A conventional light laser emits a beam of coherent photons; they are all exactly in phase and can be focused to an extremely small, bright spot. Similarly, an atom laser produces a coherent beam of atoms that can be focused at high intensity. Potential applications include more-accurate atomic clocks and enhanced techniques to make electronic chips, or integrated circuits.

The most intriguing property of BECs is that they can slow down light. In 1998 Lene Hau of Harvard University and her colleagues slowed light traveling through a BEC from its speed in vacuum of 3 × 10⁸ metres per second to a mere 17 metres per second, or about 38 miles per hour. Since then, Hau and others have completely halted and stored a light pulse within a BEC, later releasing the light unchanged or sending it to a second BEC. These manipulations hold promise for new types of light-based telecommunications, optical storage of data, and quantum computing, though the low-temperature requirements of BECs offer practical difficulties.

[Image: BEC velocity distribution]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1129 2021-09-01 00:29:42

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,418

Re: Miscellany

1106) Fermi-Dirac statistics

Fermi-Dirac statistics, in quantum mechanics, one of two possible ways in which a system of indistinguishable particles can be distributed among a set of energy states: each of the available discrete states can be occupied by only one particle. This exclusiveness accounts for the electron structure of atoms, in which electrons remain in separate states rather than collapsing into a common state, and for some aspects of electrical conductivity. The theory of this statistical behaviour was developed (1926–27) by the physicists Enrico Fermi and P.A.M. Dirac, who recognized that a collection of identical and indistinguishable particles can be distributed in this way among a series of discrete (quantized) states.

In contrast to the Bose-Einstein statistics, the Fermi-Dirac statistics apply only to those types of particles that obey the restriction known as the Pauli exclusion principle. Such particles have half-integer values of spin and are named fermions, after the statistics that correctly describe their behaviour. Fermi-Dirac statistics apply, for example, to electrons, protons, and neutrons.
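
The occupation rule described above is expressed by the Fermi-Dirac distribution f(E) = 1/(exp((E − μ)/k_BT) + 1), the average number of fermions in a state of energy E; it never exceeds 1, reflecting the exclusion principle. A small Python sketch, with an arbitrarily chosen chemical potential and temperatures used purely for illustration:

import math

def fermi_dirac(E, mu, T, k_B=8.617333262e-5):  # k_B in eV/K
    """Average occupation of a state of energy E (eV) at temperature T (K)."""
    if T == 0:
        return 1.0 if E < mu else (0.5 if E == mu else 0.0)
    return 1.0 / (math.exp((E - mu) / (k_B * T)) + 1.0)

mu = 5.0  # assumed chemical potential (Fermi level), eV -- illustrative only
for T in (0, 300, 1000, 3000):
    occ = [round(fermi_dirac(E, mu, T), 3) for E in (4.8, 5.0, 5.2)]
    print(f"T = {T:4d} K, f(E) at 4.8 / 5.0 / 5.2 eV:", occ)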

[Image: Fermi-Dirac distribution function at different temperatures T3 > T2 > T1 and T = 0 K]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1130 2021-09-02 00:45:17

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,418

Re: Miscellany

1107) Thermodynamics

What Is Thermodynamics?

Thermodynamics is the branch of physics that deals with the relationships between heat and other forms of energy. In particular, it describes how thermal energy is converted to and from other forms of energy and how it affects matter.

Thermal energy is the energy a substance or system has due to its temperature, i.e., the energy of moving or vibrating molecules, according to the Energy Education website of the Texas Education Agency. Thermodynamics involves measuring this energy, which can be "exceedingly complicated," according to David McKee, a professor of physics at Missouri Southern State University. "The systems that we study in thermodynamics … consist of very large numbers of atoms or molecules interacting in complicated ways. But, if these systems meet the right criteria, which we call equilibrium, they can be described with a very small number of measurements or numbers. Often this is idealized as the mass of the system, the pressure of the system, and the volume of the system, or some other equivalent set of numbers. Three numbers describe 10²⁶ or 10³⁰ nominal independent variables."

Heat

Thermodynamics, then, is concerned with several properties of matter; foremost among these is heat. Heat is energy transferred between substances or systems due to a temperature difference between them, according to Energy Education. As a form of energy, heat is conserved, i.e., it cannot be created or destroyed. It can, however, be transferred from one place to another. Heat can also be converted to and from other forms of energy. For example, a steam turbine can convert heat to kinetic energy to run a generator that converts kinetic energy to electrical energy. A light bulb can convert this electrical energy to electromagnetic radiation (light), which, when absorbed by a surface, is converted back into heat.

Temperature

The amount of heat transferred by a substance depends on the speed and number of atoms or molecules in motion, according to Energy Education. The faster the atoms or molecules move, the higher the temperature, and the more atoms or molecules that are in motion, the greater the quantity of heat they transfer.

Temperature is "a measure of the average kinetic energy of the particles in a sample of matter, expressed in terms of units or degrees designated on a standard scale," according to the American Heritage Dictionary. The most commonly used temperature scale is Celsius, which is based on the freezing and boiling points of water, assigning respective values of 0 degrees C and 100 degrees C. The Fahrenheit scale is also based on the freezing and boiling points of water, which are assigned values of 32 F and 212 F, respectively.

Scientists worldwide, however, use the Kelvin (K with no degree sign) scale, named after William Thomson, 1st Baron Kelvin, because it is an absolute scale and therefore more convenient in calculations. This scale uses the same increment as the Celsius scale, i.e., a temperature change of 1 C is equal to 1 K. However, the Kelvin scale starts at absolute zero, the temperature at which there is a total absence of heat energy and all molecular motion stops. A temperature of 0 K is equal to minus 459.67 F or minus 273.15 C.
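
A short Python sketch of the scale relationships described above (K = C + 273.15 and F = 9C/5 + 32), checking the freezing point, boiling point, and absolute-zero figures quoted in the last two paragraphs:

def celsius_to_kelvin(c):
    return c + 273.15

def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32

for label, c in [("freezing point of water", 0.0),
                 ("boiling point of water", 100.0),
                 ("absolute zero", -273.15)]:
    print(f"{label}: {c} C = {celsius_to_fahrenheit(c):.2f} F = {celsius_to_kelvin(c):.2f} K")
# absolute zero prints as -459.67 F and 0.00 K, matching the values above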

Specific heat

The amount of heat required to increase the temperature of a certain mass of a substance by a certain amount is called specific heat, or specific heat capacity, according to Wolfram Research. The conventional unit for this is calories per gram per kelvin. The calorie is defined as the amount of heat energy required to raise the temperature of 1 gram of water at 4 C by 1 degree.

The specific heat of a metal depends almost entirely on the number of atoms in the sample, not its mass.  For instance, a kilogram of aluminum can absorb about seven times more heat than a kilogram of lead. However, lead atoms can absorb only about 8 percent more heat than an equal number of aluminum atoms. A given mass of water, however, can absorb nearly five times as much heat as an equal mass of aluminum. The specific heat of a gas is more complex and depends on whether it is measured at constant pressure or constant volume.
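
The comparisons above can be checked with the defining relation Q = m·c·ΔT (heat = mass × specific heat × temperature change). The specific heat values in the sketch below are standard reference figures in J/(kg·K), not numbers taken from this post:

specific_heat = {"water": 4186.0, "aluminum": 897.0, "lead": 129.0}  # J/(kg*K), reference values

def heat_required(mass_kg, substance, delta_T):
    """Heat (J) to raise mass_kg of a substance by delta_T kelvin: Q = m * c * dT."""
    return mass_kg * specific_heat[substance] * delta_T

# Heat absorbed by 1 kg of each substance for a 1 K rise.
for s in specific_heat:
    print(f"1 kg of {s}, +1 K: {heat_required(1.0, s, 1.0):.0f} J")

print("aluminum vs. lead, per kilogram:", round(specific_heat["aluminum"] / specific_heat["lead"], 1))   # about 7
print("water vs. aluminum, per kilogram:", round(specific_heat["water"] / specific_heat["aluminum"], 1))  # about 4.7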

Thermal conductivity

Thermal conductivity (k) is “the rate at which heat passes through a specified material, expressed as the amount of heat that flows per unit time through a unit area with a temperature gradient of one degree per unit distance,” according to the Oxford Dictionary. The unit for k is watts (W) per meter (m) per kelvin (K). Values of k for metals such as copper and silver are relatively high at 401 and 428 W/m·K, respectively. This property makes these materials useful for automobile radiators and cooling fins for computer chips because they can carry away heat quickly and exchange it with the environment. The highest value of k for any natural substance is diamond at 2,200 W/m·K.
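
The definition of k corresponds to the steady-state conduction relation Q/t = k·A·ΔT/d (heat flow = conductivity × area × temperature difference ÷ thickness). A small sketch comparing the materials just mentioned; the slab geometry is hypothetical and chosen only for illustration:

conductivity = {"copper": 401.0, "silver": 428.0, "diamond": 2200.0}  # W/(m*K), values from the text

def heat_flow_watts(k, area_m2, delta_T, thickness_m):
    """Steady-state heat flow through a slab: Q/t = k * A * dT / d."""
    return k * area_m2 * delta_T / thickness_m

# Hypothetical slab: 0.01 m^2 area, 1 mm thick, 10 K temperature difference across it.
for material, k in conductivity.items():
    q = heat_flow_watts(k, area_m2=0.01, delta_T=10.0, thickness_m=0.001)
    print(f"{material}: {q:,.0f} W")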

Other materials are useful because they are extremely poor conductors of heat; this property is referred to as thermal resistance, or R-value, which describes the rate at which heat is transmitted through the material. These materials, such as rock wool, goose down and Styrofoam, are used for insulation in exterior building walls, winter coats and thermal coffee mugs. R-value is given in units of square feet times degrees Fahrenheit times hours per British thermal unit (ft²·°F·h/Btu) for a 1-inch-thick slab.

Newton's Law of Cooling

In 1701, Sir Isaac Newton first stated his Law of Cooling in a short article titled "Scala graduum Caloris" ("A Scale of the Degrees of Heat") in the Philosophical Transactions of the Royal Society. Newton's statement of the law translates from the original Latin as, "the excess of the degrees of the heat ... were in geometrical progression when the times are in an arithmetical progression." Worcester Polytechnic Institute gives a more modern version of the law as "the rate of change of temperature is proportional to the difference between the temperature of the object and that of the surrounding environment."

This results in an exponential decay in the temperature difference. For example, if a warm object is placed in a cold bath, within a certain length of time, the difference in their temperatures will decrease by half. Then in that same length of time, the remaining difference will again decrease by half. This repeated halving of the temperature difference will continue at equal time intervals until it becomes too small to measure.
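
The repeated halving described above follows from solving Newton's law, dT/dt = −(T − T_env)/τ, which gives T(t) = T_env + (T_0 − T_env)·e^(−t/τ). A short sketch with arbitrary example temperatures and time constant:

import math

def temperature(t, T0, T_env, tau):
    """Newton's law of cooling: the temperature difference decays exponentially."""
    return T_env + (T0 - T_env) * math.exp(-t / tau)

T0, T_env, tau = 90.0, 20.0, 5.0   # hypothetical: 90 C object, 20 C bath, 5-minute time constant
half_life = tau * math.log(2)      # time for the temperature difference to halve
for t in (0, half_life, 2 * half_life, 3 * half_life):
    print(f"t = {t:5.2f} min: T = {temperature(t, T0, T_env, tau):.1f} C")
# the 70-degree difference drops to 35, then 17.5, then 8.75 degrees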

Heat transfer

Heat can be transferred from one body to another or between a body and the environment by three different means: conduction, convection and radiation. Conduction is the transfer of energy through a solid material. Conduction between bodies occurs when they are in direct contact, and molecules transfer their energy across the interface.

Convection is the transfer of heat to or from a fluid medium. Molecules in a gas or liquid in contact with a solid body transmit or absorb heat to or from that body and then move away, allowing other molecules to move into place and repeat the process. Efficiency can be improved by increasing the surface area to be heated or cooled, as with a radiator, and by forcing the fluid to move over the surface, as with a fan.

Radiation is the emission of electromagnetic (EM) energy, particularly infrared photons that carry heat energy. All matter emits and absorbs some EM radiation, the net amount of which determines whether this causes a loss or gain in heat.

The Carnot cycle

In 1824, Nicolas Léonard Sadi Carnot proposed a model for a heat engine based on what has come to be known as the Carnot cycle. The cycle exploits the relationships among pressure, volume and temperature of gases and how an input of energy can change form and do work outside the system.

Compressing a gas increases its temperature so it becomes hotter than its environment. Heat can then be removed from the hot gas using a heat exchanger. Then, allowing it to expand causes it to cool. This is the basic principle behind heat pumps used for heating, air conditioning and refrigeration.

Conversely, heating a gas increases its pressure, causing it to expand. The expansive pressure can then be used to drive a piston, thus converting heat energy into kinetic energy. This is the basic principle behind heat engines.
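
Carnot's cycle also sets the familiar upper bound on any heat engine running between a hot reservoir at T_hot and a cold one at T_cold: efficiency ≤ 1 − T_cold/T_hot, with temperatures in kelvin. A small sketch using illustrative reservoir temperatures, not figures from the article:

def carnot_efficiency(T_hot_K, T_cold_K):
    """Maximum fraction of input heat a reversible engine can convert to work."""
    return 1.0 - T_cold_K / T_hot_K

# Hypothetical steam-turbine-like reservoirs: 800 K hot side, 300 K cold side.
eta = carnot_efficiency(800.0, 300.0)
print(f"Carnot limit: {eta:.1%}")  # 62.5%; real engines fall well short of this bound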

Entropy

All thermodynamic systems generate waste heat. This waste results in an increase in entropy, which for a closed system is "a quantitative measure of the amount of thermal energy not available to do work," according to the American Heritage Dictionary. Entropy in any closed system always increases; it never decreases. Additionally, moving parts produce waste heat due to friction, and radiative heat inevitably leaks from the system.

This makes so-called perpetual motion machines impossible. Saibal Mitra, a professor of physics at Missouri State University, explains, "You cannot build an engine that is 100 percent efficient, which means you cannot build a perpetual motion machine. However, there are a lot of folks out there who still don't believe it, and there are people who are still trying to build perpetual motion machines."

Entropy is also defined as "a measure of the disorder or randomness in a closed system," which also inexorably increases. You can mix hot and cold water, but because a large cup of warm water is more disordered than two smaller cups containing hot and cold water, you can never separate it back into hot and cold without adding energy to the system. Put another way, you can’t unscramble an egg or remove cream from your coffee. While some processes appear to be completely reversible, in practice, none actually are. Entropy, therefore, provides us with an arrow of time: forward is the direction of increasing entropy.
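
The hot-and-cold-water example can be made quantitative. For a liquid with constant specific heat, ΔS = m·c·ln(T_final/T_initial), so mixing equal masses of hot and cold water (which settle at the average temperature) always produces a net entropy increase. A sketch with hypothetical masses and temperatures:

import math

c_water = 4186.0  # specific heat of water, J/(kg*K), standard reference value

def entropy_change(mass_kg, T_initial_K, T_final_K):
    """Entropy change of a liquid with constant specific heat: dS = m * c * ln(Tf / Ti)."""
    return mass_kg * c_water * math.log(T_final_K / T_initial_K)

m = 0.5                          # hypothetical: half a kilogram of each
T_hot, T_cold = 353.15, 283.15   # 80 C and 10 C
T_final = (T_hot + T_cold) / 2   # equal masses mix to the average temperature

dS = entropy_change(m, T_hot, T_final) + entropy_change(m, T_cold, T_final)
print(f"Net entropy change: {dS:+.2f} J/K")  # positive, so the mixing cannot undo itself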

The four laws of thermodynamics

The fundamental principles of thermodynamics were originally expressed in three laws. Later, it was determined that a more fundamental law had been neglected, apparently because it had seemed so obvious that it did not need to be stated explicitly. To form a complete set of rules, scientists decided this most fundamental law needed to be included. The problem, though, was that the first three laws had already been established and were well known by their assigned numbers. When faced with the prospect of renumbering the existing laws, which would cause considerable confusion, or placing the pre-eminent law at the end of the list, which would make no logical sense, a British physicist, Ralph H. Fowler, came up with an alternative that solved the dilemma: he called the new law the “Zeroth Law.” In brief, these laws are:

The Zeroth Law states that if two bodies are in thermal equilibrium with some third body, then they are also in equilibrium with each other. This establishes temperature as a fundamental and measurable property of matter.

The First Law states that the total increase in the energy of a system is equal to the increase in thermal energy plus the work done on the system. This states that heat is a form of energy and is therefore subject to the principle of conservation.

The Second Law states that heat energy cannot be transferred from a body at a lower temperature to a body at a higher temperature without the addition of energy. This is why it costs money to run an air conditioner.

The Third Law states that the entropy of a pure crystal at absolute zero is zero. As explained above, entropy is sometimes called "waste energy," i.e., energy that is unable to do work, and since there is no heat energy whatsoever at absolute zero, there can be no waste energy. Entropy is also a measure of the disorder in a system, and while a perfect crystal is by definition perfectly ordered, any positive value of temperature means there is motion within the crystal, which causes disorder. For these reasons, there can be no physical system with lower entropy, so entropy always has a positive value.

The science of thermodynamics has been developed over centuries, and its principles apply to nearly every device ever invented. Its importance in modern technology cannot be overstated.

[Image: second law of thermodynamics]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1131 2021-09-03 00:11:06

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,418

Re: Miscellany

1108) Helicopter

Background

Helicopters are classified as rotary wing aircraft, and their rotary wing is commonly referred to as the main rotor or simply the rotor. Unlike the more common fixed wing aircraft such as a sport biplane or an airliner, the helicopter is capable of direct vertical take-off and landing; it can also hover in a fixed position. These features render it ideal for use where space is limited or where the ability to hover over a precise area is necessary. Currently, helicopters are used to dust crops, apply pesticide, access remote areas for environmental work, deliver supplies to workers on remote maritime oil rigs, take photographs, film movies, rescue people trapped in inaccessible spots, transport accident victims, and put out fires. Moreover, they have numerous intelligence and military applications.

Numerous individuals have contributed to the conception and development of the helicopter. The idea appears to have been bionic in origin, meaning that it derived from an attempt to adapt a natural phenomenon—in this case, the whirling, bifurcated fruit of the maple tree—to a mechanical design. Early efforts to imitate maple pods produced the whirligig, a children's toy popular in China as well as in medieval Europe. During the fifteenth century, Leonardo da Vinci, the renowned Italian painter, sculptor, architect, and engineer, sketched a flying machine that may have been based on the whirligig. The next surviving sketch of a helicopter dates from the early nineteenth century, when British scientist Sir George Cayley drew a twin-rotor aircraft in his notebook. During the early twentieth century, Frenchman Paul Cornu managed to lift himself off the ground for a few seconds in an early helicopter. However, Cornu was constrained by the same problems that would continue to plague all early designers for several decades: no one had yet devised an engine that could generate enough vertical thrust to lift both the helicopter and any significant load (including passengers) off the ground.

Igor Sikorsky, a Russian engineer, built his first helicopter in 1909. When neither this prototype nor its 1910 successor succeeded, Sikorsky decided that he could not build a helicopter without more sophisticated materials and money, so he transferred his attention to aircraft. During World War I, Hungarian engineer Theodore von Karman constructed a helicopter that, when tethered, was able to hover for extended periods. Several years later, Spaniard Juan de la Cierva developed a machine he called an autogiro in response to the tendency of conventional airplanes to lose engine power and crash while landing. If he could design an aircraft in which lift and thrust (forward speed) were separate functions, Cierva speculated, he could circumvent this problem. The autogiro he subsequently invented incorporated features of both the helicopter and the airplane, although it resembled the latter more. The autogiro had a rotor that functioned something like a windmill. Once set in motion by taxiing on the ground, the rotor could generate supplemental lift; however, the autogiro was powered primarily by a conventional airplane engine. To avoid landing problems, the engine could be disconnected and the autogiro brought gently to rest by the rotor, which would gradually cease spinning as the machine reached the ground. Popular during the 1920s and 1930s, autogiros ceased to be produced after the refinement of the conventional helicopter.

The helicopter was eventually perfected by Igor Sikorsky. Advances in aerodynamic theory and building materials had been made since Sikorsky's initial endeavor, and, in 1939, he lifted off the ground in his first operational helicopter. Two years later, an improved design enabled him to remain aloft for an hour and a half, setting a world record for sustained helicopter flight.

The helicopter was put to military use almost immediately after its introduction. While it was not utilized extensively during World War II, the jungle terrain of both Korea and Vietnam prompted the helicopter's widespread use during both of those wars, and technological refinements made it a valuable tool during the Persian Gulf War as well. In recent years, however, private industry has probably accounted for the greatest increase in helicopter use, as many companies have begun to transport their executives via helicopter. In addition, helicopter shuttle services have proliferated, particularly along the urban corridor of the American Northeast. Still, among civilians the helicopter remains best known for its medical, rescue, and relief uses.

Design

A helicopter's power comes from either a piston engine or a gas turbine (recently, the latter has predominated), which moves the rotor shaft, causing the rotor to turn. While a standard plane generates thrust by pushing air behind its wing as it moves forward, the helicopter's rotor achieves lift by pushing the air beneath it downward as it spins. Lift is proportional to the change in the air's momentum (its mass times its velocity): the greater the momentum, the greater the lift.
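
The proportionality just stated (lift equals the rate at which momentum is given to the air) can be sketched with rough actuator-disk numbers. The rotor radius, air density, and downwash speed below are hypothetical, chosen only to show that thrust = mass flow rate × velocity change gives figures of a plausible size:

import math

rho = 1.225     # air density at sea level, kg/m^3
radius = 5.0    # hypothetical rotor radius, m
v_down = 12.0   # hypothetical speed of the air pushed down through the rotor, m/s

disk_area = math.pi * radius ** 2     # area swept by the rotor blades
mass_flow = rho * disk_area * v_down  # kilograms of air moved downward each second
thrust = mass_flow * v_down           # force = rate of change of the air's momentum, N

print(f"Mass flow: {mass_flow:.0f} kg/s, thrust: {thrust / 1000:.1f} kN "
      f"(enough to hover about {thrust / 9.81:.0f} kg)")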

Helicopter rotor systems consist of between two and six blades attached to a central hub. Usually long and narrow, the blades turn relatively slowly, because this minimizes the amount of power necessary to achieve and maintain lift, and also because it makes controlling the vehicle easier. While lightweight general-purpose helicopters often have a two-bladed main rotor, heavier craft may use a four-blade design or two separate main rotors to accommodate heavy loads.

To steer a helicopter, the pilot must adjust the pitch of the blades, which can be set three ways. In the collective system, the pitch of all the blades attached to the rotor is identical; in the cyclic system, the pitch of each blade is designed to fluctuate as the rotor revolves, and the third system uses a combination of the first two. To move the helicopter in any direction, the pilot moves the lever that adjusts collective pitch and/or the stick that adjusts cyclic pitch; it may also be necessary to increase or reduce speed.

Unlike airplanes, which are designed to minimize bulk and protuberances that would weigh the craft down and impede airflow around it, helicopters have unavoidably high drag. Thus, designers have not utilized the sort of retractable landing gear familiar to people who have watched planes taking off or landing—the aerodynamic gains of such a system would be proportionally insignificant for a helicopter. In general, helicopter landing gear is much simpler than that of airplanes. Whereas the latter require long runways on which to reduce forward velocity, helicopters have to reduce only vertical lift, which they can do by hovering prior to landing. Thus, they don't even require shock absorbers: their landing gear usually comprises only wheels or skids, or both.

One problem associated with helicopter rotor blades occurs because airflow along the length of each blade differs widely. This means that lift and drag fluctuate for each blade throughout the rotational cycle, thereby exerting an unsteadying influence upon the helicopter. A related problem occurs because, as the helicopter moves forward, the lift beneath the blades that enter the airstream first is high, but that beneath the blades on the opposite side of the rotor is low. The net effect of these problems is to destabilize the helicopter. Typically, the means of compensating for these unpredictable variations in lift and drag is to manufacture flexible blades connected to the rotor by a hinge. This design allows each blade to shift up or down, adjusting to changes in lift and drag.

Torque, another problem associated with the physics of a rotating wing, causes the helicopter fuselage (cabin) to rotate in the opposite direction from the rotor, especially when the helicopter is moving at low speeds or hovering. To offset this reaction, many helicopters use a tail rotor, an exposed blade or ducted fan mounted on the end of the tail boom typically seen on these craft. Another means of counteracting torque entails installing two rotors, attached to the same engine but rotating in opposite directions, while a third, more space-efficient design features twin rotors that are enmeshed, something like an egg beater. Additional alternatives have been researched, and at least one NOTAR (no tail rotor) design has been introduced.

Raw Materials

The airframe, or fundamental structure, of a helicopter can be made of either metal or organic composite materials, or some combination of the two. Higher performance requirements will incline the designer to favor composites with higher strength-to-weight ratio, often epoxy (a resin) reinforced with glass, aramid (a strong, flexible nylon fiber), or carbon fiber. Typically, a composite component consists of many layers of fiber-impregnated resins, bonded to form a smooth panel. Tubular and sheet metal substructures are usually made of aluminum, though stainless steel or titanium are sometimes used in areas subject to higher stress or heat. To facilitate bending during the manufacturing process, the structural tubing is often filled with molten sodium silicate. A helicopter's rotary wing blades are usually made of fiber-reinforced resin, which may be adhesively bonded with an external sheet metal layer to protect edges. The helicopter's windscreen and windows are formed of polycarbonate sheeting.

The Manufacturing Process

In 1939, a Russian emigre to the United States tested what was to become a prominent prototype for later helicopters. Already a prosperous aircraft manufacturer in his native land, Igor Sikorsky fled the 1917 revolution, drawn to the United States by stories of Thomas Edison and Henry Ford.

Sikorsky soon became a successful aircraft manufacturer in his adopted homeland. But his dream was vertical take-off, rotary wing flight. He experimented for more than twenty years and finally, in 1939, made his first flight in a craft dubbed the VS-300. Tethered to the ground with long ropes, his craft flew no higher than 50 feet off the ground on its first several flights. Even then, there were problems: the craft flew up, down, and sideways, but not forward. However, helicopter technology developed so rapidly that some were actually put into use by U.S. troops during World War II.

The helicopter contributed directly to at least one revolutionary production technology. As helicopters grew larger and more powerful, the precision calculations needed for engineering the blades, which had exacting requirements, increased exponentially. In 1947, John T. Parsons of Traverse City, Michigan, began looking for ways to speed the engineering of blades produced by his company. Parsons contacted International Business Machines Corp. and asked to try one of their new mainframe office computers. By 1951, Parsons was experimenting with having the computer's calculations actually guide the machine tool. His ideas were ultimately developed into the computer-numerical-control (CNC) machine tool industry that has revolutionized modern production methods.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1132 2021-09-04 00:56:50

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,418

Re: Miscellany

1109) Hovercraft

Hovercraft, any of a series of British-built and British-operated air-cushion vehicles (ACVs) that for 40 years (1959–2000) ferried passengers and automobiles across the English Channel between southern England and northern France. The cross-Channel Hovercraft were built by Saunders-Roe Limited of the Isle of Wight and its successor companies. The first in the series, known as SR.N1 (for Saunders-Roe Nautical 1), a four-ton vehicle that could carry only its crew of three, was invented by English engineer Christopher Cockerell; it crossed the Channel for the first time on July 25, 1959. Ten years later Cockerell was knighted for his accomplishment. By that time the last and largest of the series, the SR.N4, also called the Mountbatten class, had begun to ply the ferry routes between Ramsgate and Dover on the English side and Calais and Boulogne on the French side. In their largest variants, these enormous vehicles, weighing 265 tons and powered by four Rolls-Royce gas-turbine engines, could carry more than 50 cars and more than 400 passengers at 65 knots (1 knot = 1.15 miles or 1.85 km per hour). At such speeds the cross-Channel trip was reduced to a mere half hour. In their heyday of the late 1960s and early ’70s, the various Hovercraft ferry services (with names such as Hoverlloyd, Seaspeed, and Hoverspeed) were ferrying as many as one-third of all cross-Channel passengers. Such was the allure of this quintessentially British technical marvel that one of the Mountbatten vehicles appeared in the James Bond film Diamonds Are Forever (1971). However, the craft were always expensive to maintain and operate (especially in an era of rising fuel costs), and they never turned consistent profits for their owners. The last two SR.N4 vehicles were retired in October 2000, to be transferred to the Hovercraft Museum at Lee-on-the-Solent, Hampshire, England. Cockerell’s original SR.N1 is in the collection of the Science Museum’s facility at Wroughton, near Swindon, Wiltshire. The generic term hovercraft continues to be applied to numerous other ACVs built and operated around the world, including small sport hovercraft, medium-sized ferries that work coastal and riverine routes, and powerful amphibious assault craft employed by major military powers.

Perhaps the first man to research the ACV concept was Sir John Thornycroft, a British engineer who in the 1870s began to build test models to check his theory that drag on a ship’s hull could be reduced if the vessel were given a concave bottom in which air could be contained between hull and water. His patent of 1877 emphasized that, “provided the air cushion could be carried along under the vehicle,” the only power that the cushion would require would be that necessary to replace lost air. Neither Thornycroft nor other inventors in following decades succeeded in solving the cushion-containment problem. In the meantime aviation developed, and pilots early discovered that their aircraft developed greater lift when they were flying very close to land or a water surface. It was soon determined that the greater lift was available because wing and ground together created a “funnel” effect, increasing the air pressure. The amount of additional pressure proved dependent on the design of the wing and its height above ground. The effect was strongest when the height was between one-half and one-third of the average front-to-rear breadth of the wing (chord).

Practical use was made of the ground effect in 1929 by the German Dornier Do X flying boat, which achieved a considerable gain in performance during an Atlantic crossing when it flew close to the sea surface. World War II maritime reconnaissance aircraft also made use of the phenomenon to extend their endurance.

In the 1960s American aerodynamicists developed an experimental craft making use of a wing in connection with ground effect. Several other proposals of this type were put forward, and a further variation combined the airfoil characteristics of a ground-effect machine with an air-cushion lift system that allowed the craft to develop its own hovering power while stationary and then build up forward speed, gradually transferring the lift component to its airfoil. Although none of these craft got beyond the experimental stage, they were important portents of the future because they suggested means of using the hovering advantage of the ACV and overcoming its theoretical speed limitation of about 200 miles (320 km) per hour, above which it was difficult to hold the air cushion in place. Such vehicles are known as ram-wing craft.

In the early 1950s engineers in the United Kingdom, the United States, and Switzerland were seeking solutions to Sir John Thornycroft’s 80-year-old problem. Christopher Cockerell of the United Kingdom is now acknowledged as the father of the Hovercraft, as the ACV is popularly known. During World War II he had been closely connected with the development of radar and other radio aids and had retired into peacetime life as a boatbuilder. Soon he began to concern himself with Thornycroft’s problem of reducing the hydrodynamic drag on the hull of a boat with some kind of air lubrication.

Cockerell bypassed Thornycroft’s plenum chamber (in effect, an empty box with an open bottom) principle, in which air is pumped directly into a cavity beneath the vessel, because of the difficulty in containing the cushion. He theorized that, if air were instead pumped under the vessel through a narrow slot running entirely around the circumference, the air would flow toward the centre of the vessel, forming an external curtain that would effectively contain the cushion. This system is known as a peripheral jet. Once air has built up below the craft to a pressure sufficient to support the craft’s weight, incoming air has nowhere to go but outward and experiences a sharp change of velocity on striking the surface. The momentum of the peripheral jet air keeps the cushion pressure and the ground clearance higher than they would be if air were pumped directly into a plenum chamber. To test his theory, Cockerell set up an apparatus consisting of a blower that fed air into an inverted coffee tin through a hole in the base. The tin was suspended over the weighing pan of a pair of kitchen scales, and air blown into the tin forced the pan down against the mass of a number of weights. In this way the forces involved were roughly measured. By securing a second tin within the first and directing air down through the space between, Cockerell was able to demonstrate that more than three times the number of weights could be raised by this means, compared with the plenum chamber effect of the single can.
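
The balance underlying the coffee-tin experiment can be put in numbers: the cushion pressure, acting over the cushion area, must equal the craft's weight. The sketch below uses the roughly four-ton weight quoted for the SR.N1 elsewhere in this post and a purely hypothetical cushion area, so the pressure it prints is only indicative:

g = 9.81                 # gravitational acceleration, m/s^2
mass_kg = 4 * 1000.0     # SR.N1 total weight of roughly four (metric) tons
cushion_area_m2 = 50.0   # hypothetical cushion footprint, m^2

cushion_pressure = mass_kg * g / cushion_area_m2   # pressure (Pa) needed to support the craft
print(f"Required cushion pressure: {cushion_pressure:.0f} Pa "
      f"(about {cushion_pressure / 101325:.1%} of atmospheric pressure)")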

Cockerell's first patent was filed on December 12, 1955, and in the following year he formed a company known as Hovercraft Limited. His early memoranda and reports show a prescient grasp of the problems involved in translating the theory into practice—problems that would still concern designers of Hovercraft years later. He forecast, for example, that some kind of secondary suspension would be required in addition to the air cushion itself. Realizing that his discovery would not only make boats go faster but also would allow the development of amphibious craft, Cockerell approached the Ministry of Supply, the British government’s defense-equipment procurement authority. The air-cushion vehicle was classified “secret” in November 1956, and a development contract was placed with the Saunders-Roe aircraft and seaplane manufacturer. In 1959 the world’s first practical ACV was launched. It was called the SR.N1.

Originally the SR.N1 had a total weight of four tons and could carry three men at a maximum speed of 25 knots over very calm water. Instead of having a completely solid structure to contain the cushion and peripheral jet, it incorporated a 6-inch- (15-cm-) deep skirt of rubberized fabric. This development provided a means whereby the air cushion could easily be contained despite unevenness of the ground or water. It was soon found that the skirt made it possible to revert once again to the plenum chamber as a cushion producer. Use of the skirt brought the problem of making skirts durable enough to withstand the friction wear produced at high speeds through water. It was necessary to develop the design and manufacturing skills that would allow skirts to be made in the optimum shape for aerodynamic efficiency. Skirts of rubber and plastic mixtures, 4 feet (1.2 metres) deep, had been developed by early 1963, and the performance of the SR.N1 had been increased by using them (and incorporating gas-turbine power) to a payload of seven tons and a maximum speed of 50 knots.

The first crossing of the English Channel by the SR.N1 was on July 25, 1959, symbolically on the 50th anniversary of French aviator Louis Blériot’s first flight across the same water. Manufacturers and operators in many parts of the world became interested. Manufacture of various types of ACV began in the United States, Japan, Sweden, and France; and in Britain additional companies were building craft in the early 1960s. By the early 1970s, however, only the British were producing what could truly be called a range of craft and employing the largest types in regular ferry service—and this against considerable odds.

The stagnation can be explained by a number of problems, all of which led to the failure of commercial ACVs to live up to what many people thought was their original promise. As already mentioned, the design of and materials used in flexible skirts had to be developed from the first, and not until 1965 was an efficient and economic flexible-skirt arrangement evolved, and even then the materials were still being developed. Another major problem arose when aircraft gas-turbine engines were used in a marine environment. Although such engines, suitably modified, had been installed in ships with some success, their transition into Hovercraft brought out their extreme vulnerability to saltwater corrosion. An ACV by its very nature generates a great deal of spray when it is hovering over water, and the spray is drawn into the intakes of gas turbines in amounts not envisaged by the engine designer. Even after considerable filtering, the moisture and salt content is high enough to corrode large modern gas-turbine engines to such an extent that they need a daily wash with pure water, and even then they have a considerably reduced life span between overhauls. Another problem, perhaps ultimately fatal to the cross-Channel Hovercraft, was the rising price of petroleum-based fuel following the oil crisis of 1973–74. Burdened by high fuel costs, Hovercraft ferry services rarely turned a profit and in fact frequently lost millions of pounds a year. Finally, the opening of the Channel Tunnel in 1994 and the development of more efficient conventional boat ferries (some of them with catamaran-type hulls) presented such stiff competition that the building of successors to the big Mountbatten-class Hovercraft could not be justified.

[Image: hovercraft]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1133 2021-09-05 01:03:28

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,418

Re: Miscellany

1110) Modem

Modem, (from “modulator/demodulator”), any of a class of electronic devices that convert digital data signals into modulated analog signals suitable for transmission over analog telecommunications circuits. A modem also receives modulated signals and demodulates them, recovering the digital signal for use by the data equipment. Modems thus make it possible for established telecommunications media to support a wide variety of data communication, such as e-mail between personal computers, facsimile transmission between fax machines, or the downloading of audio-video files from a World Wide Web server to a home computer.

Most modems are “voiceband”; i.e., they enable digital terminal equipment to communicate over telephone channels, which are designed around the narrow bandwidth requirements of the human voice. Cable modems, on the other hand, support the transmission of data over hybrid fibre-coaxial channels, which were originally designed to provide high-bandwidth television service. Both voiceband and cable modems are marketed as freestanding, book-sized modules that plug into a telephone or cable outlet and a port on a personal computer. In addition, voiceband modems are installed as circuit boards directly into computers and fax machines. They are also available as small card-sized units that plug into laptop computers.

Operating Parameters

Modems operate in part by communicating with each other, and to do this they must follow matching protocols, or operating standards. Worldwide standards for voiceband modems are established by the V-series of recommendations published by the Telecommunication Standardization sector of the International Telecommunication Union (ITU). Among other functions, these standards establish the signaling by which modems initiate and terminate communication, establish compatible modulation and encoding schemes, and arrive at identical transmission speeds. Modems have the ability to “fall back” to lower speeds in order to accommodate slower modems. “Full-duplex” standards allow simultaneous transmission and reception, which is necessary for interactive communication. “Half-duplex” standards also allow two-way communication, but not simultaneously; such modems are sufficient for facsimile transmission.

Data signals consist of multiple alternations between two values, represented by the binary digits, or bits, 0 and 1. Analog signals, on the other hand, consist of time-varying, wavelike fluctuations in value, much like the tones of the human voice. In order to represent binary data, the fluctuating values of the analog wave (i.e., its frequency, amplitude, and phase) must be modified, or modulated, in such a manner as to represent the sequences of bits that make up the data signal.

Each modified element of the modulated carrier wave (for instance, a shift from one frequency to another or a shift between two phases) is known as a baud. In early voiceband modems beginning in the early 1960s, one baud represented one bit, so that a modem operating, for instance, at 300 bauds per second (or, more simply, 300 baud) transmitted data at 300 bits per second. In modern modems a baud can represent many bits, so that the more accurate measure of transmission rate is bits or kilobits (thousand bits) per second. During the course of their development, modems have risen in throughput from 300 bits per second (bps) to 56 kilobits per second (Kbps) and beyond. Cable modems achieve a throughput of several megabits per second (Mbps; million bits per second). At the highest bit rates, channel-encoding schemes must be employed in order to reduce transmission errors. In addition, various source-encoding schemes can be used to “compress” the data into fewer bits, increasing the rate of information transmission without raising the bit rate.
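
The distinction between baud and bits per second comes down to how many bits each signalling event carries: bit rate = baud rate × log2(number of distinct signal states). A small sketch with illustrative figures (the particular baud rates and constellation sizes are examples, not the parameters of any specific standard):

import math

def bit_rate(baud, num_states):
    """Bits per second = symbol rate * bits carried by each symbol."""
    return baud * math.log2(num_states)

print(bit_rate(300, 2))     # 300.0   -- one bit per baud, as in the earliest modems
print(bit_rate(2400, 16))   # 9600.0  -- 4 bits per baud, e.g. a 16-point constellation
print(bit_rate(2400, 64))   # 14400.0 -- 6 bits per baud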

Development Of Voiceband Modems

The first generation

Although not strictly related to digital data communication, early work on telephotography machines (predecessors of modern fax machines) by the Bell System during the 1930s did lead to methods for overcoming certain signal impairments inherent in telephone circuits. Among these developments were equalization methods for overcoming the smearing of fax signals as well as methods for translating fax signals to a 1,800-hertz carrier signal that could be transmitted over the telephone line.

The first development efforts on digital modems appear to have stemmed from the need to transmit data for North American air defense during the 1950s. By the end of that decade, data was being transmitted at 750 bits per second over conventional telephone circuits. The first modem to be made commercially available in the United States was the Bell 103 modem, introduced in 1962 by the American Telephone & Telegraph Company (AT&T). The Bell 103 permitted full-duplex data transmission over conventional telephone circuits at data rates up to 300 bits per second. In order to send and receive binary data over the telephone circuit, two pairs of frequencies (one pair for each direction) were employed. A binary 1 was signaled by a shift to one frequency of a pair, while a binary 0 was signaled by a shift to the other frequency of the pair. This type of digital modulation is known as frequency-shift keying, or FSK. Another modem, known as the Bell 212, was introduced shortly after the Bell 103. Transmitting data at a rate of 1,200 bits, or 1.2 kilobits, per second over full-duplex telephone circuits, the Bell 212 made use of phase-shift keying, or PSK, to modulate a 1,800-hertz carrier signal. In PSK, data is represented as phase shifts of a single carrier signal. Thus, a binary 1 might be sent as a zero-degree phase shift, while a binary 0 might be sent as a 180-degree phase shift.

Between 1965 and 1980, significant efforts were put into developing modems capable of even higher transmission rates. These efforts focused on overcoming the various telephone line impairments that directly limited data transmission. In 1965 Robert Lucky at Bell Laboratories developed an automatic adaptive equalizer to compensate for the smearing of data symbols into one another because of imperfect transmission over the telephone circuit. Although the concept of equalization was well known and had been applied to telephone lines and cables for many years, older equalizers were fixed and often manually adjusted. The advent of the automatic equalizer permitted the transmission of data at high rates over the public switched telephone network (PSTN) without any human intervention. Moreover, while adaptive equalization methods compensated for imperfections within the nominal three-kilohertz bandwidth of the voice circuit, advanced modulation methods permitted transmission at still higher data rates over this bandwidth. One important modulation method was quadrature amplitude modulation, or QAM. In QAM, binary digits are conveyed as discrete amplitudes in two phases of the electromagnetic wave, each phase being shifted by 90 degrees with respect to the other. The frequency of the carrier signal was in the range of 1,800 to 2,400 hertz. QAM and adaptive equalization permitted data transmission of 9.6 kilobits per second over four-wire circuits. Further improvements in modem technology followed, so that by 1980 there existed commercially available first-generation modems that could transmit at 14.4 kilobits per second over four-wire leased lines.
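The automatic adaptive equalizer mentioned above is, in essence, a filter whose tap weights adjust themselves to undo the channel's smearing of one symbol into the next. The following Python sketch shows the classic least-mean-squares (LMS) update driven by a known training sequence; it is a minimal illustration of the general idea, not a reconstruction of the Bell Laboratories design, and the function name, tap count, and step size are assumptions chosen for the example.

import numpy as np

def lms_equalizer(received, training, num_taps=11, mu=0.01):
    # Adapt a linear equalizer so its output tracks the known training symbols.
    w = np.zeros(num_taps)              # equalizer tap weights being learned
    output = np.zeros(len(training))
    for n in range(len(training)):
        # Most recent num_taps received samples, newest first (zero-padded at the start).
        x = np.zeros(num_taps)
        window = received[max(0, n - num_taps + 1):n + 1][::-1]
        x[:len(window)] = window
        y = np.dot(w, x)                # equalizer output for this symbol
        e = training[n] - y             # error against the known training symbol
        w += 2 * mu * e * x             # LMS gradient-descent tap update
        output[n] = y
    return w, output

# Toy usage: a two-tap channel smears adjacent symbols; the equalizer learns to undo it.
rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=2000)
received = np.convolve(symbols, [1.0, 0.4])[:len(symbols)]   # simple intersymbol interference
taps, _ = lms_equalizer(received, symbols)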

The second generation

Beginning in 1980, a concerted effort was made by the International Telegraph and Telephone Consultative Committee (CCITT; a predecessor of the ITU) to define a new standard for modems that would permit full-duplex data transmission at 9.6 kilobits per second over a single-pair circuit operating over the PSTN. Two breakthroughs were required in this effort. First, in order to fit high-speed full-duplex data transmission over a single telephone circuit, echo cancellation technology was required so that the sending modem’s transmitted signal would not be picked up by its own receiver. Second, in order to permit operation of the new standard over unconditioned PSTN circuits, a new form of coded modulation was developed. In coded modulation, error-correcting codes form an integral part of the modulation process, making the signal less susceptible to noise. The first modem standard to incorporate both of these technology breakthroughs was the V.32 standard, issued in 1984. This standard employed a form of coded modulation known as trellis-coded modulation, or TCM. Seven years later an upgraded V.32 standard was issued, permitting 14.4-kilobit-per-second full-duplex data transmission over a single PSTN circuit.

In mid-1990 the CCITT began to consider the possibility of full-duplex transmission over the PSTN at even higher rates than those allowed by the upgraded V.32 standard. This work resulted in the issuance in 1994 of the V.34 modem standard, allowing transmission at 28.8 kilobits per second.

The third generation

The engineering of modems from the Bell 103 to the V.34 standard was based on the assumption that transmission of data over the PSTN meant analog transmission—i.e., that the PSTN was a circuit-switched network employing analog elements. The theoretical maximum capacity of such a network was estimated to be approximately 30 Kbps, so the V.34 standard was about the best that could be achieved by voiceband modems.
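The roughly 30-Kbps ceiling quoted above is consistent with the Shannon capacity of an analog voice channel. The bandwidth and signal-to-noise figures used below are typical textbook assumptions, not values given in the text:

\[
C = B \log_2\!\left(1 + \frac{S}{N}\right) \approx 3100 \times \log_2(1 + 1000) \approx 3100 \times 9.97 \approx 31{,}000 \ \text{bits per second},
\]

assuming a usable bandwidth of roughly 3.1 kHz and a signal-to-noise ratio of about 30 dB (a power ratio of about 1,000).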

In fact, the PSTN evolved from a purely analog network using analog switches and analog transmission methods to a hybrid network consisting of digital switches, a digital “backbone” (long-distance trunks usually consisting of optical fibres), and an analog “local loop” (the connection from the central office to the customer’s premises). Furthermore, many Internet service providers (ISPs) and other data services access the PSTN over a purely digital connection, usually via a T1 or T3 wire or an optical-fibre cable. With analog transmission occurring in only one local loop, transmission of modem signals at rates higher than 28.8 Kbps is possible. In the mid-1990s several researchers noted that data rates up to 56 Kbps downstream and 33.6 Kbps upstream could be supported over the PSTN without any data compression. This rate for upstream (subscriber to central office) transmissions only required conventional QAM using the V.34 standard. The higher rate in the downstream direction (that is, from central office to subscriber), however, required that the signals undergo “spectral shaping” (altering the frequency domain representation to match the frequency impairments of the channel) in order to minimize attenuation and distortion at low frequencies.

In 1998 the ITU adopted the V.90 standard for 56-Kbps modems. Because various regulations and channel impairments can limit actual bit rates, all V.90 modems are “rate adaptive.” Finally, in 2000 the V.92 modem standard was adopted by the ITU, offering improvements in the upstream data rate over the V.90 standard. The V.92 standard made use of the fact that, for dial-up connections to ISPs, the path beyond the subscriber’s analog local loop is essentially digital. Through the use of a concept known as precoding, which essentially equalizes the channel at the transmitter end rather than at the receiver end, the upstream data rate was increased to above 40 Kbps. The downstream data path in the V.92 standard remained the same 56 Kbps of the V.90 standard.

Cable Modems

A cable modem connects to a cable television system at the subscriber’s premises and enables two-way transmission of data over the cable system, generally to an Internet service provider (ISP). The cable modem is usually connected to a personal computer or router using an Ethernet connection that operates at line speeds of 10 or 100 Mbps. At the “head end,” or central distribution point of the cable system, a cable modem termination system (CMTS) connects the cable television network to the Internet. Because cable modem systems operate simultaneously with cable television systems, the upstream (subscriber to CMTS) and downstream (CMTS to subscriber) frequencies must be selected to prevent interference with the television signals.

Two-way capability was fairly rare in cable services until the mid-1990s, when the popularity of the Internet increased substantially and there was significant consolidation of operators in the cable television industry. Cable modems were introduced into the marketplace in 1995. At first all were incompatible with one another, but with the consolidation of cable operators the need for a standard arose. In North and South America a consortium of operators developed the Data Over Cable Service Interface Specification (DOCSIS) in 1997. The DOCSIS 1.0 standard provided basic two-way data service at 27–56 Mbps downstream and up to 3 Mbps upstream for a single user. The first DOCSIS 1.0 modems became available in 1999. The DOCSIS 1.1 standard released that same year added voice over Internet protocol (VoIP) capability, thereby permitting telephone communication over cable television systems. DOCSIS 2.0, released in 2002 and standardized by the ITU as J.122, offers improved upstream data rates on the order of 30 Mbps.

All DOCSIS 1.0 cable modems use QAM in a six-megahertz television channel for the downstream. Data is sent continuously and is received by all cable modems on the hybrid coaxial-fibre branch. Upstream data is transmitted in bursts, using either QAM or quadrature phase-shift keying (QPSK) modulation in a two-megahertz channel. In phase-shift keying (PSK), digital signals are transmitted by changing the phase of the carrier signal in accordance with the transmitted information. In binary phase-shift keying, the carrier takes on the phases +90° and −90° to transmit one bit of information; in QPSK, the carrier takes on the phases +45°, +135°, −45°, and −135° to transmit two bits of information. Because a cable branch is a shared channel, all users must share the total available bandwidth. As a result, the actual throughput rate of a cable modem is a function of total traffic on the branch; that is, as more subscribers use the system, total throughput per user is reduced. Cable operators can accommodate greater amounts of data traffic on their networks by reducing the total span of a single fibre-coaxial branch.
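The bit-to-phase assignments just described can be captured in a few lines of Python. The sketch below maps bit pairs onto the four QPSK phases given above; the particular pairing of bit patterns with phases is an assumed (Gray-style) labelling, since the text specifies only which phases are used, and pulse shaping and the carrier itself are omitted.

import numpy as np

# QPSK: each pair of bits selects one of the four carrier phases (in degrees) named in the text.
QPSK_PHASES = {(0, 0): 45, (0, 1): 135, (1, 1): -135, (1, 0): -45}

def qpsk_modulate(bits):
    # Map an even-length bit sequence to unit-amplitude complex baseband symbols.
    assert len(bits) % 2 == 0
    symbols = []
    for i in range(0, len(bits), 2):
        phase = np.deg2rad(QPSK_PHASES[(bits[i], bits[i + 1])])
        symbols.append(np.exp(1j * phase))   # one symbol carries two bits
    return np.array(symbols)

print(qpsk_modulate([0, 0, 1, 1, 0, 1]))     # three symbols, one per bit pair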

DSL Modems

In the section Development of voiceband modems, it is noted that the maximum data rate that can be transmitted over the local telephone loop is about 56 Kbps. This assumes that the local loop is to be used only for direct access to the long-distance PSTN. However, if digital information is intended to be switched not through the telephone network but rather over other networks, then much higher data rates may be transmitted over the local loop using purely digital methods. These purely digital methods are known collectively as digital subscriber line (DSL) systems. DSL systems carry digital signals over the twisted-pair local loop using methods analogous to those used in the T1 digital carrier system to transmit 1.544 Mbps in one direction through the telephone network.

The first DSL was the Integrated Services Digital Network (ISDN), developed during the 1980s. In ISDN systems a 160-Kbps signal is transmitted over the local loop using a four-level signal format known as 2B1Q, for “two bits per quaternary signal.” The 160-Kbps signal is broken into two “B” channels of 64 Kbps each, one “D” channel of 16 Kbps, and one signaling channel of 16 Kbps to permit both ends of the ISDN local loop to be initialized and synchronized. ISDN systems are deployed in many parts of the world. In many cases they are used to provide digital telephone services, although these systems may also provide 64-Kbps or 128-Kbps access to the Internet with the use of an adapter card. However, because such data rates are not significantly higher than those offered by 56-Kbps V.90 voiceband modems, ISDN is not widely used for Internet access.
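The ISDN line rate and the symbol rate on the loop follow directly from the channel structure and the two-bits-per-symbol 2B1Q format described above:

\[
2 \times 64 + 16 + 16 = 160 \ \text{Kbps}, \qquad \frac{160{,}000 \ \text{bits per second}}{2 \ \text{bits per symbol}} = 80{,}000 \ \text{quaternary symbols per second}.
\]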

High-bit-rate DSL, or HDSL, was developed in about 1990, employing some of the same technology as ISDN. HDSL uses 2B1Q modulation to transmit up to 1.544 Mbps over two twisted-pair lines. In practice, HDSL systems are used to provide users with low-cost T1-type access to the telephone central office. Both ISDN and HDSL systems are symmetric; i.e., the upstream and downstream data rates are identical.

Asymmetric DSL, or ADSL, was developed in the early 1990s, originally for video-on-demand services over the telephone local loop. Unlike HDSL or ISDN, ADSL is designed to provide higher data rates downstream than upstream—hence the designation “asymmetric.” In general, downstream rates range from 1.5 to 9 Mbps and upstream rates from 16 to 640 Kbps, using a single twisted-pair wire. ADSL systems are currently most often used for high-speed access to an Internet service provider (ISP), though regular telephone service is also provided simultaneously with the data service. At the local telephone office, a DSL access multiplexer, or DSLAM, statistically multiplexes the data packets transmitted over the ADSL system in order to provide a more efficient link to the Internet. At the customer’s premises, an ADSL modem usually provides one or more Ethernet jacks capable of line rates of either 10 Mbps or 100 Mbps.

In 1999 the ITU standardized two ADSL systems. The first system, designated G.992.1 or G.dmt, specifies data delivery at rates up to 8 Mbps on the downstream and 864 Kbps on the upstream. The modulation method is known as discrete multitone (DMT), a method in which data is sent over a large number of small individual carriers, each of which uses QAM modulation (described above in Development of voiceband modems). By varying the number of carriers actually used, DMT modulation may be made rate-adaptive, depending upon the channel conditions. G.992.1 systems require the use of a “splitter” at the customer’s premises to filter and separate the analog voice channel from the high-speed data channel. Usually the splitter has to be installed by a technician; to avoid this expense a second ADSL standard was developed, variously known as G.992.2, G.lite, or splitterless ADSL. This second standard also uses DMT modulation, though at lower maximum rates than G.992.1. In place of the splitter, user-installable filters are required for each telephone set in the home.
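The rate-adaptive behaviour of DMT can be illustrated with a short Python sketch: the aggregate bit rate is simply the DMT symbol rate multiplied by the total number of bits loaded onto the carriers that are actually in use. The 4,000-symbols-per-second figure and the per-tone bit loads below are assumed, illustrative values, not parameters quoted in the text.

# DMT rate adaptation sketch: total rate = symbol rate x bits carried per DMT symbol.
DMT_SYMBOL_RATE = 4000          # DMT symbols per second (assumed illustrative value)

def dmt_bit_rate(bits_per_tone):
    # Aggregate bit rate given the QAM bit load assigned to each active tone.
    return DMT_SYMBOL_RATE * sum(bits_per_tone)

print(dmt_bit_rate([8] * 200))   # good line: 200 tones at 8 bits each -> 6,400,000 bps
print(dmt_bit_rate([2] * 64))    # impaired line: 64 tones at 2 bits each -> 512,000 bps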

Unlike cable modems, ADSL modems use a dedicated telephone line between the customer and the central office, so the delivered bandwidth equals the bandwidth actually available. However, ADSL systems may be installed only on local loops less than 5,400 metres (18,000 feet) long and therefore are not available to homes located farther from a central office. Other versions of DSL have been announced to provide even higher rate services over shorter local loops. For instance, very high data rate DSL, or VDSL, can provide up to 15 Mbps over a single twisted wire pair up to 1,500 metres (5,000 feet) long.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1134 2021-09-06 02:43:30

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,418

Re: Miscellany

1111) Celestial mechanics

Celestial mechanics is the branch of astronomy that deals with the motions of objects in outer space. Historically, celestial mechanics applies principles of physics (classical mechanics) to astronomical objects, such as stars and planets, to produce ephemeris data.

History

Modern analytic celestial mechanics started with Isaac Newton's Principia of 1687. The name "celestial mechanics" is more recent than that. Newton wrote that the field should be called "rational mechanics." The term "dynamics" came in a little later with Gottfried Leibniz, and over a century after Newton, Pierre-Simon Laplace introduced the term "celestial mechanics." Prior to Kepler there was little connection between exact, quantitative prediction of planetary positions, using geometrical or arithmetical techniques, and contemporary discussions of the physical causes of the planets' motion.

Johannes Kepler

Johannes Kepler (1571–1630) was the first to closely integrate the predictive geometrical astronomy, which had been dominant from Ptolemy in the 2nd century to Copernicus, with physical concepts to produce a ‘New Astronomy, Based upon Causes, or Celestial Physics’ in 1609. His work led to the modern laws of planetary orbits, which he developed using his physical principles and the planetary observations made by Tycho Brahe. Kepler's model greatly improved the accuracy of predictions of planetary motion, years before Isaac Newton developed his law of gravitation in 1686.

Isaac Newton

Isaac Newton (25 December 1642–31 March 1727) is credited with introducing the idea that the motion of objects in the heavens, such as planets, the Sun, and the Moon, and the motion of objects on the ground, like cannon balls and falling apples, could be described by the same set of physical laws. In this sense he unified celestial and terrestrial dynamics. Using Newton's law of universal gravitation, proving Kepler's Laws for the case of a circular orbit is simple. Elliptical orbits involve more complex calculations, which Newton included in his Principia.
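For the circular case mentioned above, the derivation really is short: setting the gravitational attraction equal to the centripetal force required for circular motion gives Kepler's third law directly.

\[
\frac{GMm}{r^2} = \frac{mv^2}{r} \;\Rightarrow\; v^2 = \frac{GM}{r}, \qquad T = \frac{2\pi r}{v} \;\Rightarrow\; T^2 = \frac{4\pi^2}{GM}\, r^3 ,
\]

so the square of the orbital period is proportional to the cube of the orbital radius.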

Joseph-Louis Lagrange

After Newton, Lagrange (25 January 1736–10 April 1813) attempted to solve the three-body problem, analyzed the stability of planetary orbits, and discovered the existence of the Lagrangian points. Lagrange also reformulated the principles of classical mechanics, emphasizing energy more than force and developing a method to use a single polar coordinate equation to describe any orbit, even those that are parabolic and hyperbolic. This is useful for calculating the behaviour of planets and comets and such. More recently, it has also become useful to calculate spacecraft trajectories.
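The single polar-coordinate equation referred to above is the conic-section orbit equation, in which one parameter, the eccentricity e, distinguishes the possible paths:

\[
r(\theta) = \frac{p}{1 + e\cos\theta}, \qquad e < 1 \ \text{(ellipse)}, \quad e = 1 \ \text{(parabola)}, \quad e > 1 \ \text{(hyperbola)},
\]

where p is the semi-latus rectum of the orbit.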

Simon Newcomb

Simon Newcomb (12 March 1835–11 July 1909) was a Canadian-American astronomer who revised Peter Andreas Hansen's table of lunar positions. In 1877, assisted by George William Hill, he recalculated all the major astronomical constants. After 1884, he conceived with A. M. W. Downing a plan to resolve much international confusion on the subject. By the time he attended a standardisation conference in Paris, France in May 1886, the international consensus was that all ephemerides should be based on Newcomb's calculations. A further conference as late as 1950 confirmed Newcomb's constants as the international standard.

Albert Einstein

Albert Einstein (14 March 1879–18 April 1955) explained the anomalous precession of Mercury's perihelion in his 1916 paper ‘The Foundation of the General Theory of Relativity’. This led astronomers to recognize that Newtonian mechanics did not provide the highest accuracy. Binary pulsars have been observed, the first in 1974, whose orbits not only require the use of General Relativity for their explanation, but whose evolution proves the existence of gravitational radiation, a discovery that led to the 1993 Nobel Physics Prize.

Examples of problems

Celestial motion, without additional forces such as thrust of a rocket, is governed by gravitational acceleration of masses due to other masses. A simplification is the n-body problem, which assumes some number n of spherically symmetric masses. In that case, the integration of the accelerations can be well approximated by relatively simple summations (a minimal sketch of such a summation appears after the examples below).

Examples:
•    4-body problem: spaceflight to Mars (for parts of the flight the influence of one or two bodies is very small, so that there we have a 2- or 3-body problem; see also the patched conic approximation)
•    3-body problem:
o    Quasi-satellite
o    Spaceflight to, and stay at a Lagrangian point

In the case that n=2 (two-body problem), the situation is much simpler than for larger n. Explicit formulas apply in this case, whereas in the more general case typically only numerical solutions are possible. It is a useful simplification that is often approximately valid.

Examples:
•    A binary star, e.g., Alpha Centauri (approx. the same mass)
•    A binary asteroid, e.g., 90 Antiope (approx. the same mass)

A further simplification is based on the "standard assumptions in astrodynamics", which include that one body, the orbiting body, is much smaller than the other, the central body. This is also often approximately valid.

Examples:
•    Solar system orbiting the center of the Milky Way
•    A planet orbiting the Sun
•    A moon orbiting a planet
•    A spacecraft orbiting Earth, a moon, or a planet (in the latter cases the approximation only applies after arrival at that orbit)
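The "relatively simple summations" mentioned at the start of this section amount to adding up, for each body, the gravitational acceleration contributed by every other body. The Python sketch below is a minimal point-mass illustration; the integration scheme, the units of the inputs, and any softening of close encounters are left out or assumed.

import numpy as np

G = 6.674e-11   # gravitational constant in SI units

def accelerations(positions, masses):
    # positions: (n, 3) array of coordinates; masses: length-n array.
    # Returns the gravitational acceleration on each body due to all the others.
    n = len(masses)
    acc = np.zeros((n, 3))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = positions[j] - positions[i]                    # vector from body i to body j
            acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return acc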

Perturbation theory

Perturbation theory comprises mathematical methods that are used to find an approximate solution to a problem which cannot be solved exactly. (It is closely related to methods used in numerical analysis, which are ancient.) The earliest use of modern perturbation theory was to deal with the otherwise unsolvable mathematical problems of celestial mechanics: Newton's solution for the orbit of the Moon, which moves noticeably differently from a simple Keplerian ellipse because of the competing gravitation of the Earth and the Sun.

Perturbation methods start with a simplified form of the original problem, which is carefully chosen to be exactly solvable. In celestial mechanics, this is usually a Keplerian ellipse, which is correct when there are only two gravitating bodies (say, the Earth and the Moon), or a circular orbit, which is only correct in special cases of two-body motion, but is often close enough for practical use.

The solved, but simplified problem is then "perturbed" to make its time-rate-of-change equations for the object's position closer to the values from the real problem, such as including the gravitational attraction of a third, more distant body (the Sun). The slight changes that result from the terms in the equations – which themselves may have been simplified yet again – are used as corrections to the original solution. Because simplifications are made at every step, the corrections are never perfect, but even one cycle of corrections often provides a remarkably better approximate solution to the real problem.

There is no requirement to stop at only one cycle of corrections. A partially corrected solution can be re-used as the new starting point for yet another cycle of perturbations and corrections. In principle, for most problems the recycling and refining of prior solutions to obtain a new generation of better solutions could continue indefinitely, to any desired finite degree of accuracy.

The common difficulty with the method is that the corrections usually make the new solutions progressively more complicated, so each cycle is much more difficult to manage than the previous cycle of corrections. Newton is reported to have said, regarding the problem of the Moon's orbit, "It causeth my head to ache."

This general procedure – starting with a simplified problem and gradually adding corrections that make the starting point of the corrected problem closer to the real situation – is a widely used mathematical tool in advanced sciences and engineering. It is the natural extension of the "guess, check, and fix" method used anciently with numbers.
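A compact celestial-mechanics illustration of this "guess, check, and fix" cycle (not the far harder lunar problem itself) is the classical iterative solution of Kepler's equation M = E − e·sin E for the eccentric anomaly E: start from the simplified guess E ≈ M and repeatedly correct it, each cycle reusing the previous answer as the new starting point. The Python sketch below assumes an elliptical orbit with eccentricity less than 1.

import math

def eccentric_anomaly(M, e, cycles=20):
    # Solve Kepler's equation M = E - e*sin(E) by successive correction cycles.
    E = M                          # simplified starting solution (circular-orbit guess)
    for _ in range(cycles):
        E = M + e * math.sin(E)    # each cycle reuses the previous answer as the new guess
    return E

print(eccentric_anomaly(1.0, 0.1))   # example: mean anomaly 1 radian, eccentricity 0.1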



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1135 2021-09-07 00:54:12

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,418

Re: Miscellany

1112) Binomial Nomenclature

Linnaean system of binomial nomenclature, the scientific way to name living things with a two-part name consisting of a generic (genus) name and a specific (species) name.

Carl Linnaeus

(1707–1778), a Swedish physician and botanist, was the founder of modern taxonomy. He devised the system of binomial nomenclature used for naming living things and grouping similar organisms into categories.

What is a Genus & Species?

Taxonomy is used in the related discipline of biological systematics, when scientists try to determine the evolutionary relationships between organisms (how closely related they are to each other).

Today, biologists still use the Linnaean system of classification, but advances in the fields of genetics and evolutionary theory have resulted in some of Linnaeus’ original categories being changed to better reflect the relationships among organisms.

Hierarchy of Biological Classification

All life can be classified in increasingly specific groups, starting by sorting all life into three Domains (Archaea, Eubacteria and Eukaryota) and ending with the most specific category, the individual species. And every species has its own name.

Binomial Nomenclature

Also called binary nomenclature, this formal system of naming organisms consists of two Latinized names, the genus and the species. All living things, and even some viruses, have a scientific name.

The binomial aspect of this system means that each organism is given two names, a ‘generic name,’ which is called the genus (plural = genera) and a ‘specific name,’ the species. Together the generic and specific name of an organism are its scientific name.

Having a universal system of binomial nomenclature allows scientists to speak the same language when referring to living things, and avoids the confusion of multiple common names that may differ based on region, culture or native language.

When written, a scientific name is always either italicized, or, if hand-written, underlined. The genus is capitalized and the species name is lower case. For example, the proper format for the scientific name of humans is Homo sapiens.

What Is a Genus?

In biology, ‘genus’ is the taxonomic classification lower than ‘family’ and higher than ‘species’. In other words, genus is a more general taxonomic category than is species. For example, the generic name Ursus represents brown bears, polar bears and black bears.

What is a Species?

The species name, also called "specific epithet", is the second part of a scientific name, and refers to one species within a genus.

A species is a group of organisms that typically have similar anatomical characteristics and that, among sexually reproducing organisms, can successfully interbreed to produce fertile offspring.

In the genus  Ursus, there are a number of different bear species, including Ursus arctos, the brown bear, Ursus americanus, the American black bear and Ursus maritimus, the polar bear.

Levels

There are multiple taxonomic levels. Broad levels such as a domain have many members with few similarities. Specific levels such as a genus have very few members with many shared similarities.

Domain

A domain is the highest and broadest taxonomic rank. All life forms can be classified into a domain. Although the scheme is subject to periodic reclassification, since 1990 three domains have been recognized:

Archaea - one-celled organisms with membranes composed of branched hydrocarbon chains
Bacteria - one-celled organisms whose cell walls contain a substance called peptidoglycan
Eukarya - one-celled or multicelled organisms with their genetic material organized in a nucleus

Kingdom

A kingdom is a subdivision of a domain.

Eukarya kingdoms include:

Animalia (animals)
Plantae (plants)
Fungi

Phylum

A phylum is a major subdivision of an organism's kingdom.

Some phyla of the animal kingdom are:

Arthropoda - arthropods
Chordata - chordates
Mollusca - mollusks

Some phyla of the plant kingdom are:

Bryophyta - mosses
Magnoliophyta - flowering plants
Pteridophyta - ferns and horsetails

Class

A class is a major subdivision of an organism's phylum.

Order

An order is a major subdivision of an organism's class.

Family

A family is a major subdivision of an organism's order.

Genus

A genus is a major subdivision of an organism's family or subfamily and usually consists of more than one species.

The standards for genus classification are not strictly defined, with different authorities producing different guidelines. However, some general practices are:

i) that the number of members in a genus is reasonably compact
ii) that members of a genus share common ancestors
iii) that members of a genus are distinct

Species

A species is a very specific classification.
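As a concrete illustration of how the ranks above nest inside one another, here is the commonly cited classification of the polar bear (Ursus maritimus), mentioned earlier; the Chordata, Mammalia, Carnivora, and Ursidae placements are standard reference values supplied here for illustration rather than taken from the text above.

# Taxonomic ranks from broadest to most specific, for the polar bear.
polar_bear = {
    "Domain":  "Eukarya",
    "Kingdom": "Animalia",
    "Phylum":  "Chordata",
    "Class":   "Mammalia",
    "Order":   "Carnivora",
    "Family":  "Ursidae",
    "Genus":   "Ursus",
    "Species": "Ursus maritimus",
}
for rank, name in polar_bear.items():
    print(f"{rank}: {name}")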



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1136 2021-09-08 00:45:22

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,418

Re: Miscellany

1113) Pitcher plant

Pitcher plant, any carnivorous plant with pitcher-shaped leaves that form a passive pitfall trap. Old World pitcher plants are members of the family Nepenthaceae (order Caryophyllales), while those of the New World belong to the family Sarraceniaceae (order Ericales). The Western Australian pitcher plant (Cephalotus follicularis) is the only species of the family Cephalotaceae (order Oxalidales). Pitcher plants are found in a wide range of habitats with poor soil conditions, from pine barrens to sandy coastal swamps, and rely on carnivory to obtain nutrients such as nitrogen and phosphorus.

Sarraceniaceae

The family Sarraceniaceae consists of three genera of pitcher plants and is distributed throughout North America and the western portion of the Guiana Highlands in South America. Members of this family commonly inhabit bogs, swamps, wet or sandy meadows, and savannas where the soils are water-saturated, acidic, and deficient in nutrients. The carnivorous traps of this family commonly resemble trumpets, pitchers, or urns and primarily capture insects.

The genus Sarracenia, sometimes known as the trumpet pitcher genus, consists of some 10 species native to eastern North America. Insects and other prey are attracted to the mouth of the pitcher by a trail of nectar-secreting glands that extend downward along the lip to the interior of the pitcher. The throat of the pitcher, just below the lip, is very smooth and sends the animal tumbling down into the liquid pool at the bottom of the pitcher, where it drowns. The body is then digested by enzymes secreted within the leaf. The purple, or common, pitcher plant (S. purpurea) has heavily veined, green to reddish, flaring, juglike leaves that bear downward-pointing bristles to keep prey, including salamanders, from escaping. Its flowers are purple-red. The parrot pitcher plant (S. psittacina) has small, fat, red-veined leaves that are topped by beaklike lids and bears dark red flowers. The sweet pitcher plant (S. rubra) produces dull red, violet-scented flowers. The crimson pitcher plant (S. leucophylla) has white trumpet-shaped pitchers with ruffled upright hoods and scarlet flowers. The yellow pitcher plant (S. flava) has bright yellow flowers and a long, green, trumpet-shaped leaf the lid of which is held upright. One species, the green pitcher plant (S. oreophila), is critically endangered and is found in limited areas of Alabama, Georgia, North Carolina, and Tennessee.

The cobra plant (Darlingtonia californica) is the only species of its genus and is native to swamps in mountain areas of northern California and southern Oregon. Its hooded pitcherlike leaves resemble striking cobras and bear purple-red appendages that look similar to a snake’s forked tongue or a set of fangs. Unlike other pitcher plants, the cobra plant does not appear to produce digestive enzymes and instead relies on bacteria to break down its prey.

The genus Heliamphora, known as sun pitchers or marsh pitcher plants, consists of some 23 species native to the rainforest mountains of western Brazil, Guyana, and Venezuela. These species form cushions on ridge crests and swampy depressions and bear stout pitchers that can attain a height of 50 cm (20 inches).

Nepenthaceae

The family Nepenthaceae consists of a single genus, Nepenthes, with some 140 species of tropical pitcher plants native to Madagascar, Southeast Asia, and Australia. Most of these species are perennials that grow in very acidic soil, though some are epiphytic and live on the branches of trees. The lid of the pitcher secretes nectar to attract prey, which are unable to escape from the trap because of its downward-pointing hairs and slick sides. This genus includes the critically endangered Attenborough’s pitcher plant (N. attenboroughii), which is one of the largest of all carnivorous plants, reaching up to 1.5 metres (4.9 feet) tall with pitchers that are 30 cm (11.8 inches) in diameter. Found near the summit of Mount Victoria on the island of Palawan in the Philippines, Attenborough’s pitcher plant is capable of capturing and digesting rodents, as well as insects and other small animals. Cultivated species of pitcher plants from the Old World genus Nepenthes include the slender pitcher plant (N. gracilis), the common swamp pitcher plant (N. mirabilis), and the golden peristome (N. veitchii), as well as a number of hybrid species such as Hooker’s pitcher plant N. ×hookeriana, N. ×mastersiana, and N. ×dominii.

Cephalotaceae

The family Cephalotaceae features only one genus with a single species, the Western Australian pitcher plant (Cephalotus follicularis). The plant is a small perennial herb native to damp sandy or swampy terrain in southwestern Australia. Unlike most other pitcher plants, it bears “traditional” leaves in addition to its pitfall traps. Its short green pitchers are protected by a hairy, red- and white-striped lid that prevents rainfall from filling the trap and attracts prey. Given its limited range and the threat of habitat loss, the species is listed as vulnerable on the IUCN Red List of Threatened Species.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1137 2021-09-09 01:08:45

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,418

Re: Miscellany

1114) Madeira River

Madeira River, Portuguese Rio Madeira, major tributary of the Amazon. It is formed by the junction of the Mamoré and Beni rivers at Villa Bella, Bolivia, and flows northward forming the border between Bolivia and Brazil for approximately 60 miles (100 km). After receiving the Abuná River, the Madeira meanders northeastward in Brazil through Rondônia and Amazonas states to its junction with the Amazon River, 90 miles (145 km) east of Manaus. A distributary of the Madeira flows into the Amazon about 100 miles (160 km) farther downstream, creating the marshy island of Tupinambarama. The Madeira is 2,082 miles (3,352 km) long from the upper reaches of the Mamoré, and its general width is about one-half mile. It is navigable by seagoing vessels most of the year from its mouth on the Amazon to the Cachoeira (falls) de Santo Antônio 807 miles (1,300 km) upstream, the first of 19 waterfalls or rapids that block further passage, near the town of Pôrto Velho, Brazil. The Madeira-Mamoré Railway, which extended for 228 miles (367 km) between Pôrto Velho and Guajará-Mirim, circumvented the falls and rapids and provided a link with the upper course of the Madeira River. Abandoned in the 1970s, much of the railway’s corridor is now served by highway.

Although exploration of the Madeira valley began in the 16th century, parts of the region were not mapped until the late 1970s, via satellite. The tropical rainforest’s traditional inhabitants, Indians and mestizos, who lived along the riverbanks and gathered forest products such as Brazil nuts and rubber, were joined by farmers and ranchers who settled in the area during the latter half of the 20th century.

The Madeira River is a major waterway in South America. It is estimated to be 1,450 km (900 mi) in length, while the Madeira-Mamoré is estimated near 3,250 km (2,020 mi) or 3,380 km (2,100 mi) in length depending on the measuring party and their methods. The Madeira is the biggest tributary of the Amazon, accounting for about 15% of the water in the basin. A map from Emanuel Bowen in 1747, held by the David Rumsey Map Collection, refers to the Madeira by the pre-colonial, indigenous name Cuyari:

The River of Cuyari, called by the Portuguese Madeira or the Wood River, is formed by two great rivers, which join near its mouth. It was by this River, that the Nation of Topinambes passed into the River Amazon.

Climate

The mean inter-annual precipitation over the major sub-basins varies from 75 to 300 cm (2.5–9.8 ft), the entire upper Madeira basin receiving 170.5 cm (5.6 ft). The extremes of rainfall range from 49 to 700 cm (1.6–23 ft). Even just below the confluence that forms it, the Madeira is one of the largest rivers of the world, with a mean inter-annual discharge of 18,000 cubic metres per second (640,000 cu ft/s), i.e., 568 km³ (136 cu mi) per year, approximately half the discharge of the Congo River. Farther along its course towards the Amazon, the mean discharge of the Madeira increases to 31,200 m³/s (1,100,000 cu ft/s).

Course

Between Guajará-Mirim and the falls of Teotônio, the Madeira receives the drainage of the north-eastern slopes of the Andes from Santa Cruz de la Sierra to Cuzco, the whole of the south-western slope of Brazilian Mato Grosso and the northern slope of the Chiquitos sierras. In total this catchment area, which is slightly more than the combined area of all headwaters, is 850,000 km² (330,000 sq mi), almost equal in area to France and Spain combined. The waters flow into the Madeira from many large rivers, the principal of which (from east to west) are the Guaporé or Itenez, the Baures and Blanco, the Itonama or San Miguel, the Mamoré, Beni, and Mayutata or Madre de Dios, all of which are reinforced by numerous secondary but powerful affluents. The climate of the upper catchment area varies from humid at the western edge, where the river's main stem by volume (Río Madre de Dios, Río Beni) originates, to semi-arid in the southernmost part, with the Andean headwaters of the main stem by length (Río Caine, Río Rocha, Río Grande, Mamoré).

All of the upper branches of the Madeira find their way to the falls across the open, almost level Mojos and Beni plains, 90,000 km² (35,000 sq mi) of which are flooded yearly to an average depth of about one metre (3 ft) for a period of three to four months.

From its source at the confluence of the Beni and Mamoré rivers downstream to the Abuná River, the Madeira flows northward, forming the border between Bolivia and Brazil. Below its confluence with the latter tributary, the river turns north-eastward into the Brazilian state of Rondônia. The section of the river from the border to Porto Velho has a notable drop in its bed and was not navigable. Before 2012 the falls of Teotônio and of San Antonio existed here; they had a higher flow rate and a greater drop than the more famous Boyoma Falls in Africa. These rapids are now submerged beneath the reservoir of the Santo Antônio Dam. Below Porto Velho the Madeira meanders north-eastward through the Rondônia and Amazonas states of north-west Brazil to its junction with the Amazon.

The 283,117 hectares (2,800 km²; 1,100 sq mi) Rio Madeira Sustainable Development Reserve, created in 2006, extends along the north bank of the river opposite the town of Novo Aripuanã. At its mouth is Ilha Tupinambaranas, an extensive marshy region formed by the Madeira's distributaries.

Navigation

The Madeira River rises more than 15 m (50 ft) during the rainy season, and ocean vessels may ascend it to the Falls of San Antonio, near Porto Velho, Brazil, 1,070 km (660 mi) above its mouth; but in the dry months, from June to November, it is navigable over the same distance only by craft drawing about 2 metres (7 ft) of water. The Madeira-Mamoré Railroad runs in a 365 km (227 mi) loop around the unnavigable section to Guajará-Mirim on the Mamoré River, but it is no longer functional, so shipping from the Atlantic effectively ends at Porto Velho.

Today the Madeira is also one of the Amazon Basin's most active waterways. It helps export close to four million tons of grains, which are loaded onto barges in Porto Velho, where both Cargill and Amaggi have loading facilities, and then shipped down the Madeira to the port of Itacoatiara, near the mouth of the Madeira just upstream on the left bank of the Amazon, or farther down the Amazon to the port of Santarem, at the mouth of the Tapajos River. From these two ports, Panamax-type ships then export the grains - mainly soy and corn - to Europe and Asia. The Madeira waterway is also used to carry fuel from the REMAN refinery (Petrobras) in Manaus, state capital of Amazonas, to Porto Velho, from where the states of Acre, Rondonia and parts of Mato Grosso are supplied mainly with gasoline (petrol) refined in Manaus. Cargo barges also use the Madeira on the 1,225 km (760 mi) route between Manaus and Porto Velho, along the Rio Negro, the Amazon and the Madeira; because Manaus is effectively land-locked as far as logistics with the rest of the country are concerned, this route connects Manaus' industrial district with the rest of Brazil, bringing in part of its raw materials and carrying its produce to the major consumer centres of São Paulo and Rio de Janeiro. In 2012 this barge cargo amounted to 287,835 tons (both directions), while the total tonnage shipped on the Madeira in 2012 amounted to 5,076,014 tons.

Two large dams (see below) are under construction as part of the IIRSA regional integration project. The dam projects include large ship-locks capable of moving oceangoing vessels between the impounded reservoir and the downstream river. If the project is completed, "more than 4,000 km [2,500 mi] of waterways upstream from the dams in Brazil, Bolivia, and Peru would become navigable."

Ecology

As typical of Amazonian rivers with the primary headwaters in the Andes, the Madeira River is turbid because of high sediment levels and it is whitewater, but some of its tributaries are clearwater (e.g., Aripuanã and Ji-Paraná) or blackwater (e.g., Manicoré).

The Bolivian river dolphin, variously considered a subspecies of the Amazon river dolphin or a separate species, is restricted to the upper Madeira River system. It has been estimated that there are more than 900 fish species in the Madeira River Basin, making it one of the freshwater systems in the world with the highest species richness.

In popular culture

The river is the fifth title of the 1993/1999 Philip Glass album Aguas da Amazonia.

Dams

In July 2007 plans were approved by the Brazilian government to construct two hydroelectric dams on the Madeira River, the Santo Antonio Dam near Porto Velho and the Jirau Dam about 100 km upstream. Both the Jirau and Santo Antonio dams are run-of-the-river projects that do not impound a large reservoir. Both dams also feature some environmental re-mediation efforts (such as fish ladders). As a consequence, it has been suggested that there has not been strong environmental opposition to the implementation of the Madeira river complex. Yet, if the fish ladders fail, "several valuable migratory fish species could suffer near-extinction as a result of the Madeira dams." There are also concerns with deforestation and pressure on conservation areas and indigenous peoples' territories. The Worldwatch Institute has also criticized the fast-track approval process for "kinder, gentler dams with smaller reservoirs, designed to lessen social and environmental impacts", claiming that no project should "fast-track the licensing of new dams in Amazonia and allow projects to circumvent Brazil's tough environmental laws".



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1138 2021-09-11 01:11:32

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,418

Re: Miscellany

1115) Magnesium Sulfate

Magnesium sulfate, MgSO4, is a colourless crystalline substance formed by the reaction of magnesium hydroxide with sulfur dioxide and air. A hydrate form of magnesium sulfate called kieserite, MgSO4∙H2O, occurs as a mineral deposit. Synthetically prepared magnesium sulfate is sold as Epsom salt, MgSO4∙7H2O. In industry, magnesium sulfate is used in the manufacture of cements and fertilizers and in tanning and dyeing; in medicine it serves as a purgative. Because of its ability to absorb water readily, the anhydrous form is used as a desiccant (drying agent).

Magnesium sulfate is usually encountered in the form of a hydrate MgSO4·nH2O, for various values of n between 1 and 11. The most common is the heptahydrate MgSO4·7H2O, known as Epsom salt, which is a household chemical with many traditional uses, including bath salts.

The main use of magnesium sulfate is in agriculture, to correct soils deficient in magnesium (an essential plant nutrient because of the role of magnesium in chlorophyll and photosynthesis). The monohydrate is favored for this use; by the mid 1970s, its production was 2.3 million tons per year. The anhydrous form and several hydrates occur in nature as minerals, and the salt is a significant component of the water from some springs.
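The practical difference between the hydrates comes down to how much of each salt's mass is water of crystallization. The short Python sketch below works this out from standard atomic masses; the atomic-mass values are ordinary reference figures supplied here, not data from the text.

# Fraction of the mass of MgSO4.nH2O that is water of crystallization.
ATOMIC_MASS = {"Mg": 24.305, "S": 32.06, "O": 16.00, "H": 1.008}

MGSO4 = ATOMIC_MASS["Mg"] + ATOMIC_MASS["S"] + 4 * ATOMIC_MASS["O"]   # about 120.4 g/mol
H2O = 2 * ATOMIC_MASS["H"] + ATOMIC_MASS["O"]                          # about 18.0 g/mol

for n in (1, 7):   # monohydrate (kieserite) and heptahydrate (Epsom salt)
    total = MGSO4 + n * H2O
    print(f"MgSO4.{n}H2O: {n * H2O / total:.1%} water by mass")
# Roughly 13% for the monohydrate and 51% for the heptahydrate.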

Preparation

Magnesium sulfate is usually obtained directly from dry lake beds and other natural sources. It can also be prepared by reacting magnesite (magnesium carbonate, MgCO3) or magnesia (magnesium oxide, MgO) with sulfuric acid.

Another possible method is to treat seawater or magnesium-containing industrial wastes so as to precipitate magnesium hydroxide and react the precipitate with sulfuric acid.
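The preparation routes just described correspond to the following standard balanced equations (textbook reactions written out here for reference, not equations quoted from the text above):

MgCO3 + H2SO4 → MgSO4 + H2O + CO2
MgO + H2SO4 → MgSO4 + H2O
Mg(OH)2 + H2SO4 → MgSO4 + 2 H2O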

Physical properties

Magnesium sulfate relaxation is the primary mechanism that causes the absorption of sound in seawater at frequencies above 10 kHz (acoustic energy is converted to thermal energy). Lower frequencies are less absorbed by the salt, so that low-frequency sound travels farther in the ocean. Boric acid and magnesium carbonate also contribute to absorption.

Uses

Medical

Magnesium sulfate is used both externally (as Epsom salt) and internally.

The main external use is the formulation as bath salts, especially for foot baths to soothe sore feet. Such baths have been claimed to also soothe and hasten recovery from muscle pain, soreness, or injury. Potential health effects of magnesium sulfate are reflected in medical studies on the impact of magnesium on resistant depression and as an analgesic for migraine and chronic pain. Magnesium sulfate has been studied in the treatment of asthma, preeclampsia and eclampsia.

Magnesium sulfate is the usual component of the concentrated salt solution used in isolation tanks to increase its specific gravity to approximately 1.25–1.26. This high density allows an individual to float effortlessly on the surface of water in the closed tank, eliminating as many of the external senses as possible.

In the UK, a medication containing magnesium sulfate and phenol, called "drawing paste", is useful for small boils or localized infections and removing splinters.

Internally, magnesium sulfate may be administered by oral, respiratory, or intravenous routes. Internal uses include replacement therapy for magnesium deficiency, treatment of acute and severe arrhythmias, as a bronchodilator in the treatment of asthma, and preventing eclampsia.

Agriculture

In agriculture, magnesium sulfate is used to increase magnesium or sulfur content in soil. It is most commonly applied to potted plants, or to magnesium-hungry crops such as potatoes, tomatoes, carrots, peppers, lemons, and roses. The advantage of magnesium sulfate over other magnesium soil amendments (such as dolomitic lime) is its high solubility, which also allows the option of foliar feeding. Solutions of magnesium sulfate are also nearly pH neutral, compared with the slightly alkaline salts of magnesium as found in limestone; therefore, the use of magnesium sulfate as a magnesium source for soil does not significantly change the soil pH.

Magnesium sulfate was historically used as a treatment for lead poisoning prior to the development of chelation therapy, as it was hoped that any lead ingested would be precipitated out by the magnesium sulfate and subsequently purged from the digestive system. This application saw particularly widespread use among veterinarians during the early-to-mid 20th century; Epsom salt was already available on many farms for agricultural use, and it was often prescribed in the treatment of farm animals that inadvertently ingested lead.

Food preparation

Magnesium sulfate is used as a brewing salt in making beer. It may also be used as a coagulant for making tofu.

Chemistry

Anhydrous magnesium sulfate is commonly used as a desiccant in organic synthesis owing to its affinity for water and compatibility with most organic compounds. During work-up, an organic phase is treated with anhydrous magnesium sulfate. The hydrated solid is then removed by filtration, decantation, or by distillation (if the boiling point is low enough). Other inorganic sulfate salts such as sodium sulfate and calcium sulfate may be used in the same way.

Construction

Magnesium sulfate is used to prepare specialty cements by the reaction between magnesium oxide and magnesium sulfate solution; these cements have good binding ability and greater resistance than Portland cement. Such cement is mainly used in the production of lightweight insulation panels. Weak water resistance limits its usage.

Magnesium (or sodium) sulfate is also used for testing aggregates for soundness in accordance with ASTM C88 standard, when there are no service records of the material exposed to actual weathering conditions. The test is accomplished by repeated immersion in saturated solutions followed by oven drying to dehydrate the salt precipitated in permeable pore spaces. The internal expansive force, derived from the rehydration of the salt upon re-immersion, simulates the expansion of water on freezing.

Magnesium sulfate is also used to test the resistance of concrete to external sulfate attack (ESA).

Aquaria

Magnesium sulfate heptahydrate is also used to maintain the magnesium concentration in marine aquaria which contain large amounts of stony corals, as it is slowly depleted in their calcification process. In a magnesium-deficient marine aquarium, calcium and alkalinity concentrations are very difficult to control because not enough magnesium is present to stabilize these ions in the saltwater and prevent their spontaneous precipitation into calcium carbonate.

Double salts

Double salts containing magnesium sulfate exist. There are several known as sodium magnesium sulfates and potassium magnesium sulfates. A mixed copper-magnesium sulfate heptahydrate (Mg,Cu)SO4·7H2O was recently found to occur in mine tailings and has been given the mineral name alpersite.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1139 2021-09-13 00:29:21

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,418

Re: Miscellany

1116) Sphygmomanometer

Sphygmomanometer, instrument for measuring blood pressure. It consists of an inflatable rubber cuff, which is wrapped around the upper arm and is connected to an apparatus that records pressure, usually in terms of the height of a column of mercury or on a dial (an aneroid manometer). An arterial blood pressure reading consists of two numbers, which typically may be recorded as x/y. The x is the systolic pressure, and y is the diastolic pressure. Systole refers to the contraction of the ventricles of the heart, when blood is forced from the heart into the pulmonary and systemic arterial circulation, and diastole refers to the resting period, when the ventricles expand and receive another supply of blood from the atria. At each heartbeat, blood pressure is raised to the systolic level, and, between beats, it drops to the diastolic level. As the cuff is inflated with air, a stethoscope is placed against the skin at the crook of the arm. As the air is released, the first sound heard marks the systolic pressure; as the release continues, a dribbling noise is heard. This marks the diastolic pressure, which is dependent on the elasticity of the arteries.

The first clinically applicable sphygmomanometer was invented in 1881 by Austrian physician Karl Samuel Ritter von Basch. Von Basch introduced the aneroid manometer, which uses a round dial that provides a pressure reading. The pressure is indicated by a needle, which is deflected by air from an inflation device (e.g., a diaphragm or Bourdon tube).

A sphygmomanometer, also known as a blood pressure monitor, or blood pressure gauge, is a device used to measure blood pressure, composed of an inflatable cuff to collapse and then release the artery under the cuff in a controlled manner, and a mercury or aneroid manometer to measure the pressure. Manual sphygmomanometers are used with a stethoscope when using the auscultatory technique.

A sphygmomanometer consists of an inflatable cuff, a measuring unit (the mercury manometer, or aneroid gauge), and a mechanism for inflation which may be a manually operated bulb and valve or a pump operated electrically.

Types

Both manual and digital meters are currently employed, with different trade-offs in accuracy versus convenience.

Manual

A stethoscope is required for auscultation. Manual meters are best used by trained practitioners, and, while it is possible to obtain a basic reading through palpation alone, this yields only the systolic pressure.

Mercury sphygmomanometers are considered the gold standard. They indicate pressure with a column of mercury, which does not require recalibration. Because of their accuracy, they are often used in clinical trials of drugs and in clinical evaluations of high-risk patients, including pregnant women. A frequently used wall mounted mercury sphygmomanometer is also known as a Baumanometer.

Aneroid sphygmomanometers (mechanical types with a dial) are in common use; they may require calibration checks, unlike mercury manometers. Aneroid sphygmomanometers are considered safer than mercury sphygmomanometers, although inexpensive ones are less accurate. A major cause of departure from calibration is mechanical jarring. Aneroids mounted on walls or stands are not susceptible to this particular problem.

Digital

Digital meters employ oscillometric measurements and electronic calculations rather than auscultation. They may use manual or automatic inflation, but both types are electronic, easy to operate without training, and can be used in noisy environments. They measure systolic and diastolic pressures by oscillometric detection, employing either deformable membranes that are measured using differential capacitance, or differential piezoresistance, and they include a microprocessor. They measure mean blood pressure and pulse rate, while systolic and diastolic pressures are obtained less accurately than with manual meters, and calibration is also a concern. Digital oscillometric monitors may not be advisable for some patients, such as those suffering from arteriosclerosis, arrhythmia, preeclampsia, pulsus alternans, and pulsus paradoxus, as their calculations may not correct for these conditions, and in these cases, an analog sphygmomanometer is preferable when used by a trained person.

Digital instruments may use a cuff placed, in order of accuracy and inverse order of portability and convenience, around the upper arm, the wrist, or a finger. Recently, a group of researchers at Michigan State University developed a smartphone based device that uses oscillometry to estimate blood pressure. The oscillometric method of detection used gives blood pressure readings that differ from those determined by auscultation, and vary according to many factors, such as pulse pressure, heart rate and arterial stiffness, although some instruments are claimed also to measure arterial stiffness, and some can detect irregular heartbeats.

Operation

In humans, the cuff is normally placed smoothly and snugly around an upper arm, at roughly the same vertical height as the heart while the subject is seated with the arm supported. Other sites of placement depend on species and may include the flipper or tail. It is essential that the correct size of cuff is selected for the patient. Too small a cuff results in too high a pressure, while too large a cuff results in too low a pressure. For clinical measurements it is usual to measure and record both arms in the initial consultation to determine if the pressure is significantly higher in one arm than the other. A difference of 10 mm Hg may be a sign of coarctation of the aorta. If the arms read differently, the higher-reading arm is used for later readings. The cuff is inflated until the artery is completely occluded.

With a manual instrument, listening with a stethoscope to the brachial artery, the examiner slowly releases the pressure in the cuff at a rate of approximately 2 mm Hg per heartbeat. As the pressure in the cuff falls, a "whooshing" or pounding sound is heard (see Korotkoff sounds) when blood flow first starts again in the artery. The pressure at which this sound begins is noted and recorded as the systolic blood pressure. The cuff pressure is further released until the sound can no longer be heard. This is recorded as the diastolic blood pressure. In noisy environments where auscultation is impossible (such as the scenes often encountered in emergency medicine), systolic blood pressure alone may be read by releasing the pressure until a radial pulse is palpated (felt). In veterinary medicine, auscultation is rarely of use, and palpation or visualization of the pulse distal to the sphygmomanometer is used to detect systolic pressure.

Digital instruments use a cuff which may be placed, according to the instrument, around the upper arm, wrist, or a finger, in all cases elevated to the same height as the heart. They inflate the cuff and gradually reduce the pressure in the same way as a manual meter, and measure blood pressures by the oscillometric method.

Significance

By observing the mercury in the column, or the aneroid gauge pointer, while releasing the air pressure with a control valve, the operator notes the values of the blood pressure in mm Hg. The peak pressure in the arteries during the cardiac cycle is the systolic pressure, and the lowest pressure (at the resting phase of the cardiac cycle) is the diastolic pressure. A stethoscope, applied lightly over the artery being measured, is used in the auscultatory method. Systolic pressure (first phase) is identified with the first of the continuous Korotkoff sounds. Diastolic pressure is identified at the moment the Korotkoff sounds disappear (fifth phase).
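The auscultatory rule just stated (systolic pressure at the first Korotkoff sound, diastolic pressure when the sounds disappear in the fifth phase) can be written out as a short sketch. The deflation series below is invented purely for illustration.

# Sketch of the auscultatory rule above: systolic = cuff pressure at the first
# Korotkoff sound, diastolic = cuff pressure at which the sounds disappear.
# The (pressure, sound_heard) series is illustrative, not real patient data.

def read_auscultatory(deflation_series):
    systolic = diastolic = None
    for pressure, sound_heard in deflation_series:   # pressures in mm Hg, decreasing
        if sound_heard and systolic is None:
            systolic = pressure                       # first Korotkoff sound (phase 1)
        if not sound_heard and systolic is not None and diastolic is None:
            diastolic = pressure                      # sounds gone (phase 5)
    return systolic, diastolic

series = [(150, False), (140, False), (128, True), (120, True),
          (100, True), (84, True), (78, False), (70, False)]
print(read_auscultatory(series))  # -> (128, 78)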

Measurement of the blood pressure is carried out in the diagnosis and treatment of hypertension (high blood pressure) and in many other healthcare scenarios.

History

The sphygmomanometer was invented by Samuel Siegfried Karl Ritter von Basch in 1881. Scipione Riva-Rocci introduced a more easily used version in 1896. In 1901, pioneering neurosurgeon Dr. Harvey Cushing brought an example of Riva-Rocci's device to the US, modernized it, and popularized it within the medical community. Further improvement came in 1905 when Russian physician Nikolai Korotkov included diastolic blood pressure measurement following his discovery of "Korotkoff sounds." William A. Baum invented the Baumanometer brand in 1916, while working for The Life Extension Institute, which performed insurance and employment physicals. In 1981 the first fully automated oscillometric blood pressure cuff was invented by Donald Nunn.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1140 2021-09-15 01:33:03

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,418

Re: Miscellany

1117) British Museum

British Museum, in London, comprehensive national museum with particularly outstanding holdings in archaeology and ethnography. It is located in the Bloomsbury district of the borough of Camden.

Established by act of Parliament in 1753, the museum was originally based on three collections: those of Sir Hans Sloane; Robert Harley, 1st earl of Oxford; and Sir Robert Cotton. The collections (which also included a significant number of manuscripts and other library materials) were housed in Montagu House, Great Russell Street, and were opened to the public in 1759. The museum’s present building, designed in the Greek Revival style by Sir Robert Smirke, was built on the site of Montagu House in the period 1823–52 and has been the subject of several subsequent additions and alterations. Its famous round Reading Room was built in the 1850s; beneath its copper dome laboured such scholars as Karl Marx, Virginia Woolf, Peter Kropotkin, and Thomas Carlyle. In 1881 the original natural history collections were transferred to a new building in South Kensington to form the Natural History Museum, and in 1973 the British Museum’s library was joined by an act of Parliament with a number of other holdings to create the British Library. About half the national library’s holdings were kept at the museum until a new library building was opened at St. Pancras in 1997.

After the books were removed, the interior of the Reading Room was repaired and restored to its original appearance. In addition, the Great Court (designed by Sir Norman Foster), a glass-roofed structure surrounding the Reading Room, was built. The Great Court and the refurbished Reading Room opened to the public in 2000. Also restored in time for the 250th anniversary of the museum’s establishment was the King’s Library (1823–27), the first section of the newly constituted British Museum to have been constructed. It now houses a permanent exhibition on the Age of Enlightenment.

Among the British Museum’s most famous holdings are the Elgin Marbles, consisting mainly of architectural details from the Parthenon at Athens; other Greek sculptures from the Mausoleum of Halicarnassus and from the Temple of Artemis at Ephesus; the Rosetta Stone, which provided the key to reading ancient Egyptian hieroglyphs; the Black Obelisk and other Assyrian relics from the palace and temples at Calah (modern Nimrūd) and Nineveh; exquisite gold, silver, and shell work from the ancient Mesopotamian city of Ur; the so-called Portland Vase, a 1st-century-CE cameo glass vessel found near Rome; treasure from the 7th-century-CE ship burial found at Sutton Hoo, Suffolk; and Chinese ceramics from the Ming and other dynasties.




It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1141 2021-09-17 00:33:28

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,418

Re: Miscellany

1118) Ileum

Ileum, the final and longest segment of the small intestine. It is specifically responsible for the absorption of vitamin B12 and the reabsorption of conjugated bile salts. The ileum is about 3.5 metres (11.5 feet) long (or about three-fifths the length of the small intestine) and extends from the jejunum (the middle section of the small intestine) to the ileocecal valve, which empties into the colon (large intestine). The ileum is suspended from the abdominal wall by the mesentery, a fold of serous (moisture-secreting) membrane.

The smooth muscle of the ileum’s walls is thinner than the walls of other parts of the intestines, and its peristaltic contractions are slower. The ileum’s lining is also less permeable than that of the upper small intestine. Small collections of lymphatic tissue (Peyer patches) are embedded in the ileal wall, and specific receptors for bile salts and vitamin B12 are contained exclusively in its lining; about 95 percent of the conjugated bile salts in the intestinal contents is absorbed by the ileum.

Two percent of all humans are born with a congenital ileum malformation, called Meckel diverticulum, that consists of a side channel from 1 to 12 cm (0.4 to 4.7 inches) long extending from the intestinal wall. The malformation occurs when the duct leading from the navel to the small intestine in the fetus fails to atrophy and close. A small number of cases require surgical removal because of intestinal bleeding and inflammation.

Injury or disease affecting the terminal ileum produces vitamin B12 deficiency and extensive diarrhea, the latter resulting from unabsorbed bile salts interfering with water absorption in the large intestine.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1142 2021-09-18 00:48:51

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,418

Re: Miscellany

1119) Cecum

Cecum, also spelled caecum, pouch or large tubelike structure in the lower abdominal cavity that receives undigested food material from the small intestine and is considered the first region of the large intestine. It is separated from the ileum (the final portion of the small intestine) by the ileocecal valve (also called Bauhin valve), which limits the rate of food passage into the cecum and may help prevent material from returning to the small intestine.

The main functions of the cecum are to absorb fluids and salts that remain after completion of intestinal digestion and absorption and to mix its contents with a lubricating substance, mucus. The internal wall of the cecum is composed of a thick mucous membrane, through which water and salts are absorbed. Beneath that lining is a deep layer of muscle tissue that produces churning and kneading motions.

Variations in cecum size and structure occur among animals. In small herbivores, such as rabbits, for example, the cecum is enlarged and contains bacteria that aid in the digestion of plant matter and facilitate nutrient absorption. Cecum number can also vary; for example, the rock hyrax (Procavia capensis) has two ceca, whereas certain insectivores (such as hedgehogs, moles, and shrews) lack a cecum.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1143 2021-09-19 00:51:28

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,418

Re: Miscellany

1120) Jejunum

The jejunum is the second part of the small intestine in humans and most higher vertebrates, including mammals, reptiles, and birds. Its lining is specialized for the absorption by enterocytes of small nutrient molecules which have been previously digested by enzymes in the duodenum.

The jejunum lies between the duodenum and the ileum and is considered to start at the suspensory muscle of the duodenum, a location called the duodenojejunal flexure. The division between the jejunum and ileum is not anatomically distinct. In adult humans, the small intestine is usually 6–7 metres (20–23 ft) long (post mortem), about two-fifths of which (about 2.5 m (8.2 ft)) is the jejunum.

Structure

The interior surface of the jejunum—which is exposed to ingested food—is covered in finger-like projections of mucosa, called villi, which increase the surface area of tissue available to absorb nutrients from ingested foodstuffs. The epithelial cells which line these villi have microvilli. The transport of nutrients across epithelial cells through the jejunum and ileum includes the passive transport of the sugar fructose and the active transport of amino acids, small peptides, vitamins, and most glucose. The villi in the jejunum are much longer than in the duodenum or ileum.

The pH in the jejunum is usually between 7 and 8 (neutral or slightly alkaline).

The jejunum and the ileum are suspended by mesentery which gives the bowel great mobility within the abdomen. It also contains circular and longitudinal smooth muscle which helps to move food along by a process known as peristalsis.

If the jejunum is impacted by blunt force, the emesis reflex (vomiting) will be initiated.

Histology

The jejunum contains very few Brunner's glands (found in the duodenum) or Peyer's patches (found in the ileum). However, there are a few jejunal lymph nodes suspended in its mesentery. The jejunum has many large circular folds in its submucosa, called plicae circulares, that increase the surface area for nutrient absorption. The plicae circulares are best developed in the jejunum.

There is no line of demarcation between the jejunum and the ileum. However, there are subtle histological differences:
•    The jejunum has less fat inside its mesentery than the ileum.
•    The jejunum is typically of larger diameter than the ileum.
•    The villi of the jejunum look like long, finger-like projections, and are a histologically identifiable structure.
•    While the length of the entire intestinal tract contains lymphoid tissue, only the ileum has abundant Peyer's patches, which are unencapsulated lymphoid nodules that contain large numbers of lymphocytes and immune cells, like microfold cells.

Function

The lining of the jejunum is specialized for the absorption by enterocytes of small nutrient particles which have been previously digested by enzymes in the duodenum. Once absorbed, nutrients (with the exception of fat, which goes to the lymph) pass from the enterocytes into the enterohepatic circulation and enter the liver via the hepatic portal vein, where the blood is processed.

Other animals

In fish, the divisions of the small intestine are not as clear and the terms middle intestine or mid-gut may be used instead of jejunum.

History

Etymology

Jejunum is derived from the Latin word jējūnus, meaning "fasting." It was so called because this part of the small intestine was frequently found to be void of food following death, due to its intensive peristaltic activity relative to the duodenum and ileum.

The Early Modern English adjective jejune is derived from this word.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1144 2021-09-21 00:36:55

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,418

Re: Miscellany

1121) Duodenum

Duodenum, the first part of the small intestine, which receives partially digested food from the stomach and begins the absorption of nutrients. The duodenum is the shortest segment of the intestine and is about 23 to 28 cm (9 to 11 inches) long. It is roughly horseshoe-shaped, with the open end up and to the left, and it lies behind the liver. On anatomic and functional grounds, the duodenum can be divided into four segments: the superior (duodenal bulb), descending, horizontal, and ascending duodenum.

A liquid mixture of food and gastric secretions enters the superior duodenum from the pylorus of the stomach, triggering the release of pancreas-stimulating hormones (e.g., secretin) from glands (crypts of Lieberkühn) in the duodenal wall. So-called Brunner glands in the superior segment provide additional secretions that help to lubricate and protect the mucosal layer of the small intestine. Ducts from the pancreas and gallbladder enter at the major duodenal papilla (papilla of Vater) in the descending duodenum, bringing bicarbonate to neutralize the acid in the gastric secretions, pancreatic enzymes to further digestion, and bile salts to emulsify fat. A separate minor duodenal papilla, also in the descending segment, may receive pancreatic secretions in small amounts. The mucous lining of the last two segments of the duodenum begins the absorption of nutrients, in particular iron and calcium, before the food contents enter the next part of the small intestine, the jejunum.

Inflammation of the duodenum is known as duodenitis, which has various causes, prominent among them infection by the bacterium Helicobacter pylori. H. pylori increases the susceptibility of the duodenal mucosa to damage from unneutralized digestive acids and is a major cause of peptic ulcers, the most common health problem affecting the duodenum. Other conditions that may be associated with duodenitis include celiac disease, Crohn disease, and Whipple disease. The horizontal duodenum, because of its location between the liver, pancreas, and major blood vessels, can become compressed by those structures in people who are severely thin, requiring surgical release to eliminate painful duodenal dilatation, nausea, and vomiting.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1145 2021-09-23 00:53:53

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,418

Re: Miscellany

1122) Boron nitride

Boron nitride, (chemical formula BN), synthetically produced crystalline compound of boron and nitrogen, an industrial ceramic material of limited but important application, principally in electrical insulators and cutting tools. It is made in two crystallographic forms, hexagonal boron nitride (H-BN) and cubic boron nitride (C-BN).

H-BN is prepared by several methods, including the heating of boric oxide (B2O3) with ammonia (NH3). It is a platy powder consisting, at the molecular level, of sheets of hexagonal rings that slide easily past one another. This structure, similar to that of the carbon mineral graphite, makes H-BN a soft, lubricious material; unlike graphite, though, H-BN is noted for its low electric conductivity and high thermal conductivity. H-BN is frequently molded and then hot-pressed into shapes such as electrical insulators and melting crucibles. It also can be applied with a liquid binder as a temperature-resistant coating for metallurgical, ceramic, or polymer processing machinery.

C-BN is most often made in the form of small crystals by subjecting H-BN to extremely high pressure (six to nine gigapascals) and temperature (1,500° to 2,000° C, or 2,730° to 3,630° F). It is second only to diamond in hardness (approaching the maximum of 10 on the Mohs hardness scale) and, like synthetic diamond, is often bonded onto metallic or metallic-ceramic cutting tools for the machining of hard steels. Owing to its high oxidation temperature (above 1,900° C, or 3,450° F), it has a much higher working temperature than diamond (which oxidizes above 800° C, or 1,475° F).

Boron nitride is a thermally and chemically resistant refractory compound of boron and nitrogen with the chemical formula BN. It exists in various crystalline forms that are isoelectronic to a similarly structured carbon lattice. The hexagonal form corresponding to graphite is the most stable and soft among BN polymorphs, and is therefore used as a lubricant and an additive to cosmetic products. The cubic (zincblende, or sphalerite, structure) variety analogous to diamond is called c-BN; it is softer than diamond, but its thermal and chemical stability is superior. The rare wurtzite BN modification is similar to lonsdaleite but slightly softer than the cubic form.

Because of excellent thermal and chemical stability, boron nitride ceramics are traditionally used as parts of high-temperature equipment. Boron nitride has potential use in nanotechnology. Nanotubes of BN can be produced that have a structure similar to that of carbon nanotubes, i.e. graphene (or BN) sheets rolled on themselves, but the properties are very different.

Hexagonal BN (h-BN) is the most widely used polymorph. It is a good lubricant at both low and high temperatures (up to 900 °C, even in an oxidizing atmosphere). h-BN lubricant is particularly useful when the electrical conductivity or chemical reactivity of graphite (alternative lubricant) would be problematic. Another advantage of h-BN over graphite is that its lubricity does not require water or gas molecules trapped between the layers. Therefore, h-BN lubricants can be used even in vacuum, e.g. in space applications. The lubricating properties of fine-grained h-BN are used in cosmetics, paints, dental cements, and pencil leads.

Hexagonal BN was first used in cosmetics around 1940 in Japan. However, because of its high price, h-BN was soon abandoned for this application. Its use was revitalized in the late 1990s with the optimization of h-BN production processes, and currently h-BN is used by nearly all leading producers of cosmetic products for foundations, make-up, eye shadows, blushers, kohl pencils, lipsticks, and other skincare products.

h-BN can be included in ceramics, alloys, resins, plastics, rubbers, and other materials, giving them self-lubricating properties. Such materials are suitable, for example, for the construction of bearings and for use in steelmaking. Plastics filled with BN have less thermal expansion as well as higher thermal conductivity and electrical resistivity. Due to its excellent dielectric and thermal properties, BN is used in electronics, e.g., as a substrate for semiconductors, in microwave-transparent windows, and as a structural material for seals. It can also be used as a dielectric in resistive random-access memories.

Hexagonal BN is used in the xerographic process and in laser printers as a charge leakage barrier layer of the photo drum. In the automotive industry, h-BN mixed with a binder (boron oxide) is used for sealing oxygen sensors, which provide feedback for adjusting fuel flow. The binder utilizes the unique temperature stability and insulating properties of h-BN.

Parts can be made by hot pressing from four commercial grades of h-BN. Grade HBN contains a boron oxide binder; it is usable up to 550–850 °C in oxidizing atmosphere and up to 1600 °C in vacuum, but due to the boron oxide content is sensitive to water. Grade HBR uses a calcium borate binder and is usable at 1600 °C. Grades HBC and HBT contain no binder and can be used up to 3000 °C.

Boron nitride nanosheets (h-BN) can be deposited by catalytic decomposition of borazine at a temperature ~1100 °C in a chemical vapor deposition setup, over areas up to about 10 cm2. Owing to their hexagonal atomic structure, small lattice mismatch with graphene (~2%), and high uniformity they are used as substrates for graphene-based devices. BN nanosheets are also excellent proton conductors. Their high proton transport rate, combined with the high electrical resistance, may lead to applications in fuel cells and water electrolysis.

h-BN has been used since the mid-2000s as a bullet and bore lubricant in precision target rifle applications as an alternative to molybdenum disulfide coating, commonly referred to as "moly". It is claimed to increase effective barrel life, increase intervals between bore cleaning, and decrease the deviation in point of impact between clean bore first shots and subsequent shots.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1146 2021-09-25 00:53:46

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,418

Re: Miscellany

1123) Cesium

Cesium (Cs), also spelled caesium, chemical element of Group 1 (also called Group Ia) of the periodic table, the alkali metal group, and the first element to be discovered spectroscopically (1860), by German scientists Robert Bunsen and Gustav Kirchhoff, who named it for the unique blue lines of its spectrum (Latin caesius, “sky-blue”).

This silvery metal with a golden cast is the most reactive and one of the softest of all metals. It melts at 28.4 °C (83.1 °F), just above room temperature. It is about half as abundant as lead and 70 times as abundant as silver. Cesium occurs in minute quantities (7 parts per million) in Earth’s crust in the minerals pollucite, rhodizite, and lepidolite. Pollucite (Cs4Al4Si9O26∙H2O) is a cesium-rich mineral resembling quartz. It contains 40.1 percent cesium on a pure basis, and impure samples are ordinarily separated by hand-sorting methods to greater than 25 percent cesium. Large pollucite deposits have been found in Zimbabwe and in the lithium-bearing pegmatites at Bernic Lake, Manitoba, Canada. Rhodizite is a rare mineral found in low concentrations in lepidolite and in salt brines and saline deposits.

The primary difficulty associated with the production of pure cesium is that cesium is always found together with rubidium in nature and is also mixed with other alkali metals. Because cesium and rubidium are very similar chemically, their separation presented numerous problems before the advent of ion-exchange methods and ion-specific complexing agents such as crown ethers. Once pure salts have been prepared, it is a straightforward task to convert them to the free metal.

Cesium can be isolated by electrolysis of a molten cesium cyanide/barium cyanide mixture and by other methods, such as reduction of its salts with sodium metal, followed by fractional distillation. Cesium reacts explosively with cold water; it readily combines with oxygen, so it is used in vacuum tubes as a “getter” to clear out the traces of oxygen and other gases trapped in the tube when sealed. The very pure gas-free cesium needed as a “getter” for oxygen in vacuum tubes can be produced as needed by heating cesium azide (CsN3) in a vacuum. Because cesium is strongly photoelectric (easily loses electrons when struck by light), it is used in photoelectric cells, photomultiplier tubes, scintillation counters, and spectrophotometers. It is also used in infrared lamps. Because the cesium atom can be ionized thermally and the positively charged ions accelerated to great speeds, cesium systems could provide extraordinarily high exhaust velocities for plasma propulsion engines for deep-space exploration.

Cesium metal is produced in rather limited amounts because of its relatively high cost. Cesium has application in thermionic power converters that generate electricity directly within nuclear reactors or from the heat produced by radioactive decay. Another potential application of cesium metal is in the production of low-melting NaKCs eutectic alloy.

Atomic cesium is employed in the world’s time standard, the cesium clock. The microwave spectral line emitted by the isotope cesium-133 has a frequency of 9,192,631,770 hertz (cycles per second). This provides the fundamental unit of time. Cesium clocks are so stable and accurate that they are reliable to 1 second in 1.4 million years. Primary standard cesium clocks, such as NIST-F1 in Boulder, Colo., are about as large as a railroad flatcar. Commercial secondary standards are suitcase-sized.
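The stability figure quoted above can be checked with simple arithmetic: gaining or losing 1 second over 1.4 million years corresponds to a fractional frequency error of roughly 2 parts in 10¹⁴. A small sketch, using only the numbers given in the text plus the length of a year:

# Check of the clock-stability figure quoted above, using only numbers from the text.
CESIUM_HZ = 9_192_631_770          # cycles of the Cs-133 hyperfine transition per SI second
SECONDS_PER_YEAR = 365.25 * 24 * 3600

drift_seconds = 1.0                 # "reliable to 1 second ..."
interval_years = 1.4e6              # "... in 1.4 million years"

fractional_error = drift_seconds / (interval_years * SECONDS_PER_YEAR)
print(f"fractional frequency error ~ {fractional_error:.1e}")            # ~ 2.3e-14
print(f"allowed offset ~ {fractional_error * CESIUM_HZ:.1e} cycles per second")  # ~ 2e-4 Hz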

Naturally occurring cesium consists entirely of the nonradioactive isotope cesium-133; a large number of radioactive isotopes from cesium-123 to cesium-144 have been prepared. Cesium-137 is useful in medical and industrial radiology because of its long half-life of 30.17 years. However, as a major component of nuclear fallout and a waste product left over from the production of plutonium and other enriched nuclear fuels, it presents an environmental hazard. Removal of radioactive cesium from contaminated soil at nuclear-weapon-production sites, such as Oak Ridge National Laboratory in Oak Ridge, Tennessee, and the U.S. Department of Energy’s Hanford site near Richland, Washington, is a major cleanup effort.

Cesium is difficult to handle because it reacts spontaneously in air. If a metal sample has a large enough surface area, it can burn to form superoxides. Cesium superoxide (CsO2) has a reddish cast. Cs2O2 can be formed by oxidation of the metal with the required amount of oxygen, but other reactions of cesium with oxygen are much more complex.

Cesium is the most electropositive and most alkaline element, and thus, more easily than all other elements, it loses its single valence electron and forms ionic bonds with nearly all the inorganic and organic anions. The anion Cs– has also been prepared. Cesium hydroxide (CsOH), containing the hydroxide anion (OH–), is the strongest base known, attacking even glass. Some cesium salts are used in making mineral waters. Cesium forms a number of mercury amalgams. Because of the increased specific volume of cesium, as compared with the lighter alkali metals, there is a lesser tendency for it to form alloy systems with other metals.

Rubidium and cesium are miscible in all proportions and have complete solid solubility; a melting-point minimum of 9 °C (48 °F) is reached.

Element Properties

atomic number  :  55
atomic weight  :  132.90543
melting point  :  28.44 °C (83.19 °F)
boiling point  :  671 °C (1,240 °F)
specific gravity  :  1.873 (at 20 °C, or 68 °F)
oxidation states  :  +1, -1 (rare).



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1147 2021-09-27 01:13:50

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,418

Re: Miscellany

1124) Aluminum

Aluminum (Al), also spelled aluminium, chemical element, a lightweight, silvery-white metal of main Group 13 (IIIa, or boron group) of the periodic table. Aluminum is the most abundant metallic element in Earth’s crust and the most widely used nonferrous metal. Because of its chemical activity, aluminum never occurs in the metallic form in nature, but its compounds are present to a greater or lesser extent in almost all rocks, vegetation, and animals. Aluminum is concentrated in the outer 10 miles (16 km) of Earth’s crust, of which it constitutes about 8 percent by weight; it is exceeded in amount only by oxygen and silicon. The name aluminum is derived from the Latin word alumen, used to describe potash alum, or aluminum potassium sulfate, KAl(SO4)2∙12H2O.

Element Properties

atomic number  :  13
atomic weight  :  26.9815
melting point  :  660 °C (1,220 °F)
boiling point  :  2,467 °C (4,473 °F)
specific gravity  :  2.70 (at 20 °C [68 °F])
valence  :  3

Occurrence, Uses, And Properties

Aluminum occurs in igneous rocks chiefly as aluminosilicates in feldspars, feldspathoids, and micas; in the soil derived from them as clay; and upon further weathering as bauxite and iron-rich laterite. Bauxite, a mixture of hydrated aluminum oxides, is the principal aluminum ore. Crystalline aluminum oxide (emery, corundum), which occurs in a few igneous rocks, is mined as a natural abrasive or in its finer varieties as rubies and sapphires. Aluminum is present in other gemstones, such as topaz, garnet, and chrysoberyl. Of the many other aluminum minerals, alunite and cryolite have some commercial importance.

Crude aluminum was isolated (1825) by Danish physicist Hans Christian Ørsted by reducing aluminum chloride with potassium amalgam. British chemist Sir Humphry Davy had prepared (1809) an iron-aluminum alloy by electrolyzing fused alumina (aluminum oxide) and had already named the element aluminum; the word later was modified to aluminium in England and some other European countries. German chemist Friedrich Wöhler, using potassium metal as the reducing agent, produced aluminum powder (1827) and small globules of the metal (1845), from which he was able to determine some of its properties.

The new metal was introduced to the public (1855) at the Paris Exposition at about the time that it became available (in small amounts at great expense) by the sodium reduction of molten aluminum chloride. When electric power became relatively plentiful and cheap, almost simultaneously Charles Martin Hall in the United States and Paul-Louis-Toussaint Héroult in France discovered (1886) the modern method of commercially producing aluminum: electrolysis of purified alumina (Al2O3) dissolved in molten cryolite (Na3AlF6). During the 1960s aluminum moved into first place, ahead of copper, in world production of nonferrous metals.

Aluminum is added in small amounts to certain metals to improve their properties for specific uses, as in aluminum bronzes and most magnesium-base alloys; or, for aluminum-base alloys, moderate amounts of other metals and silicon are added to aluminum. The metal and its alloys are used extensively for aircraft construction, building materials, consumer durables (refrigerators, air conditioners, cooking utensils), electrical conductors, and chemical and food-processing equipment.

Pure aluminum (99.996 percent) is quite soft and weak; commercial aluminum (99 to 99.6 percent pure) with small amounts of silicon and iron is hard and strong. Ductile and highly malleable, aluminum can be drawn into wire or rolled into thin foil. The metal is only about one-third as dense as iron or copper. Though chemically active, aluminum is nevertheless highly corrosion-resistant, because in air a hard, tough oxide film forms on its surface.

Aluminum is an excellent conductor of heat and electricity. Its thermal conductivity is about one-half that of copper; its electrical conductivity, about two-thirds. It crystallizes in the face-centred cubic structure. All natural aluminum is the stable isotope aluminum-27. Metallic aluminum and its oxide and hydroxide are nontoxic.

Aluminum is slowly attacked by most dilute acids and rapidly dissolves in concentrated hydrochloric acid. Concentrated nitric acid, however, can be shipped in aluminum tank cars because it renders the metal passive. Even very pure aluminum is vigorously attacked by alkalies such as sodium and potassium hydroxide to yield hydrogen and the aluminate ion. Because of its great affinity for oxygen, finely divided aluminum, if ignited, will burn in carbon monoxide or carbon dioxide with the formation of aluminum oxide and carbide, but, at temperatures up to red heat, aluminum is inert to sulfur.

Aluminum can be detected in concentrations as low as one part per million by means of emission spectroscopy. Aluminum can be quantitatively analyzed as the oxide (formula Al2O3) or as a derivative of the organic nitrogen compound 8-hydroxyquinoline. The derivative has the molecular formula Al(C9H6ON)3.

Compounds

Ordinarily, aluminum is trivalent. At elevated temperatures, however, a few gaseous monovalent and bivalent compounds have been prepared (AlCl, Al2O, AlO). In aluminum the configuration of the three outer electrons is such that in a few compounds (e.g., crystalline aluminum fluoride [AlF3] and aluminum chloride [AlCl3]) the bare ion, Al3+, formed by loss of these electrons, is known to occur. The energy required to form the Al3+ ion, however, is very high, and, in the majority of cases, it is energetically more favourable for the aluminum atom to form covalent compounds by way of sp2 hybridization, as boron does. The Al3+ ion can be stabilized by hydration, and the octahedral ion [Al(H2O)6]3+ occurs both in aqueous solution and in several salts.

A number of aluminum compounds have important industrial applications. Alumina, which occurs in nature as corundum, is also prepared commercially in large quantities for use in the production of aluminum metal and the manufacture of insulators, spark plugs, and various other products. Upon heating, alumina develops a porous structure, which enables it to adsorb water vapour. This form of aluminum oxide, commercially known as activated alumina, is used for drying gases and certain liquids. It also serves as a carrier for catalysts of various chemical reactions.

Anodic aluminum oxide (AAO), typically produced via the electrochemical oxidation of aluminum, is a nanostructured aluminum-based material with a distinctive structure. AAO contains cylindrical pores that provide for a variety of uses. It is a thermally and mechanically stable compound while also being optically transparent and an electrical insulator. The pore size and thickness of AAO can easily be tailored to fit certain applications, including acting as a template for synthesizing materials into nanotubes and nanorods.

Another major compound is aluminum sulfate, a colourless salt obtained by the action of sulfuric acid on hydrated aluminum oxide. The commercial form is a hydrated crystalline solid with the chemical formula Al2(SO4)3. It is used extensively in paper manufacture as a binder for dyes and as a surface filler. Aluminum sulfate combines with the sulfates of univalent metals to form hydrated double sulfates called alums. The alums, double salts of formula MAl(SO4)2 ·12H2O (where M is a singly charged cation such as K+), also contain the Al3+ ion; M can be the cation of sodium, potassium, rubidium, cesium, ammonium, or thallium, and the aluminum may be replaced by a variety of other M3+ ions—e.g., gallium, indium, titanium, vanadium, chromium, manganese, iron, or cobalt. The most important of such salts is aluminum potassium sulfate, also known as potassium alum or potash alum. These alums have many applications, especially in the production of medicines, textiles, and paints.

The reaction of gaseous chlorine with molten aluminum metal produces aluminum chloride; the latter is the most commonly used catalyst in Friedel-Crafts reactions—i.e., synthetic organic reactions involved in the preparation of a wide variety of compounds, including aromatic ketones and anthraquinone and its derivatives. Hydrated aluminum chloride, commonly known as aluminum chlorohydrate, AlCl3∙H2O, is used as a topical antiperspirant or body deodorant, which acts by constricting the pores. It is one of several aluminum salts employed by the cosmetics industry.

Aluminum hydroxide, Al(OH)3, is used to waterproof fabrics and to produce a number of other aluminum compounds, including salts called aluminates that contain the AlO2− group. With hydrogen, aluminum forms aluminum hydride, AlH3, a polymeric solid from which are derived the tetrahydroaluminates (important reducing agents). Lithium aluminum hydride (LiAlH4), formed by the reaction of aluminum chloride with lithium hydride, is widely used in organic chemistry—e.g., to reduce aldehydes and ketones to primary and secondary alcohols, respectively.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1148 2021-09-29 00:16:00

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,418

Re: Miscellany

1125) Calcium

Calcium (Ca), chemical element, one of the alkaline-earth metals of Group 2 (IIa) of the periodic table. It is the most abundant metallic element in the human body and the fifth most abundant element in Earth’s crust.

Element Properties

atomic number  :  20
atomic weight  :  40.078
melting point  :  842 °C (1,548 °F)
boiling point  :  1,484 °C (2,703 °F)
specific gravity  :  1.55 (20 °C, or 68 °F)
oxidation state  :  +2

Occurrence, Properties, And Uses

Calcium does not occur naturally in the free state, but compounds of the element are widely distributed. One calcium compound, lime (calcium oxide, CaO) was extensively used by the ancients. The silvery, rather soft, lightweight metal itself was first isolated (1808) by Sir Humphry Davy after distilling mercury from an amalgam formed by electrolyzing a mixture of lime and mercuric oxide. The name for the element was taken from the Latin word for lime, calx.

Calcium constitutes 3.64 percent of Earth’s crust and 8 percent of the Moon’s crust, and its cosmic abundance is estimated at 4.9 × 10⁴ atoms (on a scale where the abundance of silicon is 10⁶ atoms). As calcite (calcium carbonate), it occurs on Earth in limestone, chalk, marble, dolomite, eggshells, pearls, coral, stalactites, stalagmites, and the shells of many marine animals. Calcium carbonate deposits dissolve in water that contains carbon dioxide to form calcium bicarbonate, Ca(HCO3)2. This process frequently results in the formation of caves and may reverse to deposit limestone as stalactites and stalagmites. As calcium hydroxyl phosphate, it is the principal inorganic constituent of teeth and bones and occurs as the mineral apatite. As calcium fluoride, it occurs as fluorite, or fluorspar. And as calcium sulfate, it occurs as anhydrite. Calcium is found in many other minerals, such as aragonite (a type of calcium carbonate) and gypsum (another form of calcium sulfate), and in many feldspars and zeolites. It is also found in a large number of silicates and aluminosilicates, in salt deposits, and in natural waters, including the sea.

Formerly produced by electrolysis of anhydrous calcium chloride, pure calcium metal is now made commercially by heating lime with aluminum. The metal reacts slowly with oxygen, water vapour, and nitrogen of the air to form a yellow coating of the oxide, hydroxide, and nitride. It burns in air or pure oxygen to form the oxide and reacts rapidly with warm water (and more slowly with cold water) to produce hydrogen gas and calcium hydroxide. On heating, calcium reacts with hydrogen, halogens, boron, sulfur, carbon, and phosphorus. Although it compares favourably with sodium as a reducing agent, calcium is more expensive and less reactive than the latter. In many deoxidizing, reducing, and degasifying applications, however, calcium is preferred because of its lower volatility and is used to prepare chromium, thorium, uranium, zirconium, and other metals from their oxides.

The metal itself is used as an alloying agent for aluminum, copper, lead, magnesium, and other base metals; as a deoxidizer for certain high-temperature alloys; and as a getter in electron tubes. Small percentages of calcium are used in many alloys for special purposes. Alloyed with lead (0.04 percent calcium), for example, it is employed as sheaths for telephone cables and as grids for storage batteries of the stationary type. When added to magnesium-based alloys in amounts from 0.4 to 1 percent, it improves the resistance of degradable orthopedic implants to biological fluids, permitting tissues to heal fully before the implants lose their structural integrity.

Naturally occurring calcium consists of a mixture of six isotopes: calcium-40 (96.94 percent), calcium-44 (2.09 percent), calcium-42 (0.65 percent), and, in smaller proportions, calcium-48, calcium-43, and calcium-46. Calcium-48 undergoes double beta decay with a half-life of roughly 4 × 10¹⁹ years, so it is stable for all practical purposes. It is particularly neutron-rich and is used in the synthesis of new heavy nuclei in particle accelerators. The radioactive isotope calcium-41 occurs in trace quantities on Earth through the natural bombardment of calcium-40 by neutrons in cosmic rays.

Calcium is essential to both plant and animal life and is broadly employed as a signal transducer, enzyme cofactor, and structural element (e.g., cell membranes, bones, and teeth). A large number of living organisms concentrate calcium in their shells or skeletons, and in higher animals calcium is the most abundant inorganic element. Many important carbonate and phosphate deposits owe their origin to living organisms.

The human body is 2 percent calcium. Major sources of calcium in the human diet are milk, milk products, fish, and green leafy vegetables. The bone disease rickets occurs when a lack of vitamin D impairs the absorption of calcium from the gastrointestinal tract into the extracellular fluids. The disease especially affects infants and children.

Compounds

The most important calcium compound is calcium carbonate, CaCO3, the major constituent of limestone, marble, chalk, oyster shells, and corals. Calcium carbonate obtained from its natural sources is used as a filler in a variety of products, such as ceramics, glass, plastics, and paint, and as a starting material for the production of calcium oxide. Synthetic calcium carbonate, called “precipitated” calcium carbonate, is employed when high purity is required, as in medicine (antacids and dietary calcium supplements), in food (baking powder), and for laboratory purposes.

Calcium oxide, CaO, also known as lime or more specifically quicklime, is a white or grayish white solid produced in large quantities by roasting calcium carbonate so as to drive off carbon dioxide. At room temperature, CaO will spontaneously absorb carbon dioxide from the atmosphere, reversing the reaction. It will also absorb water, converting itself into calcium hydroxide and releasing heat in the process. The bubbling that accompanies the reaction is the source of its name as “quick,” or living, lime. The reaction of quicklime with water is sometimes used in portable heat sources. One of the oldest known products of a chemical reaction, quicklime is used extensively as a building material. It is sometimes used directly as a fertilizer, although calcium carbonate is usually preferred for that purpose. Large quantities of quicklime are used in various industrial neutralization reactions. Limelights, used in the 19th century for stage lighting, produced a very brilliant white light by heating a block of calcium oxide to incandescence in an oxyhydrogen flame; hence the expression “to be in the limelight.”

A large amount of calcium oxide also is used as starting material in the production of calcium carbide, CaC2, also known simply as carbide, or calcium acetylide. Colourless when pure (though technical grades are typically grayish brown), this solid decomposes in water, forming flammable acetylene gas and calcium hydroxide, Ca(OH)2. The decomposition reaction is used for the production of acetylene, which serves as an important fuel for welding torches. The drip of water on calcium carbide produces a steady stream of acetylene that is ignited in carbide lamps. Such lamps were commonly used in lighthouse beacons and by miners in the early 20th century and still find some use in spelunking. Calcium carbide also is used to make calcium cyanamide, CaCN2, a fertilizer component and starting material for certain plastic resins.

Calcium hydroxide, also called slaked lime, Ca(OH)2, is obtained by the action of water on calcium oxide. When mixed with water, a small proportion of it dissolves, forming a solution known as limewater, the rest remaining as a suspension called milk of lime. Calcium hydroxide is used as an industrial alkali and as a constituent of mortars, plasters, and cement. It is used in the kraft paper process and as a flocculant in sewage treatment.

Another important compound is calcium chloride, CaCl2, a colourless or white solid produced in large quantities either as a by-product of the manufacture of sodium carbonate by the Solvay process or by the action of hydrochloric acid on calcium carbonate. The anhydrous solid is used as a drying agent and for dust and ice control on roads. Calcium hypochlorite, Ca(ClO)2, widely used as bleaching powder, is produced by the action of chlorine on calcium hydroxide. The hydride CaH2, formed by the direct action of the elements, liberates hydrogen when treated with water. Traces of water can be removed from many organic solvents by refluxing them in the presence of CaH2.

Calcium sulfate, CaSO4, is a naturally occurring calcium salt. It is commonly known in its dihydrate form, CaSO4∙2H2O, a white or colourless powder called gypsum. As uncalcined gypsum, the sulfate is employed as a soil conditioner. Calcined gypsum is used in making tile, wallboard, lath, and various plasters. When gypsum is heated to about 120 °C (250 °F), it loses three-quarters of its water, becoming the hemihydrate CaSO4∙1/2H2O, plaster of paris. If mixed with water, plaster of paris can be molded into shapes before it hardens by recrystallizing to dihydrate form. Calcium sulfate may occur in groundwater, causing hardness that cannot be removed by boiling.

Calcium phosphates occur abundantly in nature in several forms and are the principal minerals for the production of phosphate fertilizers and for a range of phosphorus compounds. For example, the tribasic variety (precipitated calcium phosphate), Ca3(PO4)2, is the principal inorganic constituent of bone ash. The acid salt Ca(H2PO4)2, produced by treating mineral phosphates with sulfuric acid, is employed as a plant food and stabilizer for plastics.

The hydrogen sulfite, Ca(HSO3)2, is made by the action of sulfur dioxide on a slurry of Ca(OH)2. Its aqueous solution under pressure dissolves the lignin in wood to leave cellulose fibres and thus finds considerable application in the paper industry.

The fluoride, CaF2, is important to the production of hydrofluoric acid, which is made from CaF2 by the action of sulfuric acid. CaF2 is used in laboratory instruments as a window material for both infrared and ultraviolet radiation.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1149 2021-10-01 00:22:12

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,418

Re: Miscellany

1126) Potassium

Potassium (K), chemical element of Group 1 (Ia) of the periodic table, the alkali metal group, indispensable for both plant and animal life. Potassium was the first metal to be isolated by electrolysis, by the English chemist Sir Humphry Davy, when he obtained the element (1807) by decomposing molten potassium hydroxide (KOH) with a voltaic battery.

Element Properties

atomic number  :  19
atomic weight  :  39.098
melting point  :  63.28 °C (145.90 °F)
boiling point  :  760 °C (1,400 °F)
specific gravity  :  0.862 (at 20 °C, or 68 °F)
oxidation states  :  +1, −1 (rare)

Properties, Occurrence, And Uses

Potassium metal is soft and white with a silvery lustre, has a low melting point, and is a good conductor of heat and electricity. Potassium imparts a lavender colour to a flame, and its vapour is green. It is the seventh most abundant element in Earth’s crust, constituting 2.6 percent of its mass.

The potassium content of the Dead Sea is estimated at approximately 1.7 percent potassium chloride, and many other salty bodies of water are rich in potassium. The waste liquors from certain saltworks may contain up to 40 grams per litre of potassium chloride and are used as a source of potassium.

Most potassium is present in igneous rocks, shale, and sediment in minerals such as muscovite and orthoclase feldspar that are insoluble in water; this makes potassium difficult to obtain. As a result, most commercial potassium compounds (often loosely called potash) are obtained via electrolysis from soluble potassium compounds, such as carnallite (KMgCl3∙6H2O), sylvite (potassium chloride, KCl), polyhalite (K2Ca2Mg[SO4]4∙2H2O), and langbeinite (K2Mg2[SO4]3), which are found in ancient lake beds and seabeds.

Potassium is produced by sodium reduction of molten potassium chloride, KCl, at 870 °C (1,600 °F). Molten KCl is continuously fed into a packed distillation column while sodium vapour is passed up through the column. By condensation of the more volatile potassium at the top of the distillation tower, the reaction Na + KCl → K + NaCl is forced to the right. Efforts to devise a scheme for commercial electrolytic production of potassium have been unsuccessful because there are few salt additives that can reduce the melting point of potassium chloride to temperatures where electrolysis is efficient.

There is little commercial demand for potassium metal itself, and most of it is converted by direct combustion in dry air to potassium superoxide, KO2, which is used in respiratory equipment because it liberates oxygen and removes carbon dioxide and water vapour. (The superoxide of potassium is a yellow solid consisting of K+ and O2− ions. It also can be formed by oxidation of potassium amalgam with dry air or oxygen.) The metal is also used as an alloy with sodium as a liquid metallic heat-transfer medium. Potassium reacts very vigorously with water, liberating hydrogen (which ignites) and forming a solution of potassium hydroxide, KOH.

Sodium-potassium alloy (NaK) is used to a limited extent as a heat-transfer coolant in some fast-breeder nuclear reactors and experimentally in gas-turbine power plants. The alloy is also used as a catalyst or reducing agent in organic synthesis.

In addition to the alloys of potassium with lithium and sodium, alloys with other alkali metals are known. Complete miscibility exists in the potassium-rubidium and potassium-cesium binary systems. The latter system forms an alloy melting at approximately −38 °C (−36 °F). Modification of the system by the addition of sodium results in a ternary eutectic melting at approximately −78 °C (−108 °F). The composition of this alloy is 3 percent sodium, 24 percent potassium, and 73 percent cesium. Potassium is essentially immiscible with all the alkaline-earth metals, as well as with zinc, aluminum, and cadmium.

Potassium (as K+) is required by all plants and animals. Plants need it for photosynthesis, regulation of osmosis and growth, and enzyme activation. Every animal has a closely maintained potassium level and a relatively fixed potassium-sodium ratio. Potassium is the primary inorganic cation within the living cell, and sodium is the most abundant cation in extracellular fluids. In higher animals, selective complexants for Na+ and K+ act at cell membranes to provide “active transport.” This active transport transmits electrochemical impulses in nerve and muscle fibres and in balancing the activity of nutrient intake and waste removal from cells. Too little or too much potassium in the body is fatal; however, potassium in the soil ensures the presence of this indispensable element in food.

The potassium content of plants varies considerably, though it is ordinarily in the range of 0.5–2 percent of the dry weight. In humans the ratio of potassium between the cell and plasma is approximately 27:1. The potassium content of muscle tissue is approximately 0.3 percent, whereas that of blood serum is about 0.01–0.02 percent. The dietary requirement for normal growth is approximately 3.3 grams (0.12 ounce) of potassium per day, but the ingestion of more than 20 grams (0.7 ounce) of potassium results in distinct physiological effects. Excess potassium is excreted in the urine, and a significant quantity may be lost during sweating.

Natural potassium consists of three isotopes: potassium-39 (93.26 percent), potassium-41 (6.73 percent), and radioactive potassium-40 (about 0.01 percent); several artificial isotopes have also been prepared. Potassium-39 is normally about 13.5 times more plentiful than potassium-41. The natural radioactivity of potassium is due to beta radiation from the potassium-40 isotope (half-life of about 10⁹ years). The disintegration of potassium-40 is used in geological age calculations (see potassium-argon dating). Potassium easily loses the single 4s electron, so it normally has an oxidation state of +1 in its compounds, although compounds that contain the anion, K−, can also be made.
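As a rough illustration of how the decay of potassium-40 supports age calculations, the sketch below applies the basic exponential-decay law, assuming a half-life of about 1.25 × 10⁹ years for potassium-40. Real potassium-argon dating also has to account for the branching of the decay between argon-40 and calcium-40, which this simplified sketch ignores.

import math

# Rough illustration of age estimation from potassium-40 decay (see text).
# Assumption: half-life of K-40 taken as ~1.25e9 years. Real K-Ar dating must also
# account for the branching of K-40 decay into Ar-40 and Ca-40; this sketch ignores that.

HALF_LIFE_YEARS = 1.25e9
DECAY_CONSTANT = math.log(2) / HALF_LIFE_YEARS   # per year

def remaining_fraction(age_years):
    """Fraction of the original K-40 still present after age_years."""
    return math.exp(-DECAY_CONSTANT * age_years)

def age_from_fraction(fraction_remaining):
    """Age implied by the fraction of K-40 remaining (simplified, no branching)."""
    return -math.log(fraction_remaining) / DECAY_CONSTANT

print(f"{remaining_fraction(1.0e9):.3f}")        # ~0.574 of the K-40 left after 1 billion years
print(f"{age_from_fraction(0.5):.3e} years")     # recovers the assumed half-life, ~1.25e9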

Principal Compounds And Reactions With Other Elements

Of commercially produced potassium compounds, almost 95 percent of them are used in agriculture as fertilizer. (Potassium compounds are also important to a lesser extent in the manufacture of explosives.) The world supply of potash for fertilizer is about 25 million tons (calculated as K2O, although potassium in fertilizer is most commonly present as KCl). Large deposits of sylvite in Saskatchewan, Canada, provide more than 25 percent of the world’s needs. The other chief sources of potash are Germany, Russia, Belarus, India, Chile, and Israel. Seawater, brines, and ashes of vegetation are also used as sources of potash.
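Potash supply figures such as the one above are conventionally quoted "calculated as K2O" even though the potassium is mostly present as KCl; converting between the two is straightforward stoichiometry. The molar masses in the sketch below are standard values rather than figures from the text.

# Conversion between a KCl mass and its "K2O equivalent", the convention used when potash
# supply is quoted (see text). Molar masses are standard values, not taken from the text.

M_K, M_O, M_CL = 39.098, 15.999, 35.453
M_KCL = M_K + M_CL                 # ~74.55 g/mol
M_K2O = 2 * M_K + M_O              # ~94.20 g/mol

# One mole of K2O accounts for two moles of K, i.e. two moles of KCl.
K2O_PER_KCL = M_K2O / (2 * M_KCL)  # ~0.632: 1 ton of KCl counts as ~0.63 ton K2O equivalent

tons_kcl = 1_000_000
print(f"{tons_kcl * K2O_PER_KCL:,.0f} tons K2O equivalent")   # ~632,000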

Potassium chloride, KCl, is a naturally occurring potassium salt that, aside from its use as fertilizer, is also a raw material for the production of other important potassium compounds. Electrolysis of potassium chloride yields potassium hydroxide (also called caustic potash), which readily absorbs moisture and is employed in making liquid soaps and detergents and in preparing many potassium salts. Reaction of iodine and potassium hydroxide produces potassium iodide, KI, which is added to table salt and animal feed to protect against iodine deficiency.
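
In simplified overall terms, this electrolysis (the same chlor-alkali chemistry used with sodium chloride brine) can be written 2KCl + 2H2O → 2KOH + H2 + Cl2, with hydrogen evolved at the cathode and chlorine at the anode.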

Other potassium compounds of economic value include potassium nitrate, also known as saltpetre, or nitre, KNO3, which has wide use as a fertilizer and in fireworks and explosives and has been used as a food preservative; potassium chromate, K2CrO4, which is employed in tanning leather and dyeing textiles; and potassium sulfate, K2SO4, which is used in the production of fertilizers and potassium alums.

The chemical properties of potassium are similar to those of sodium, although the former is considerably more reactive. Potassium differs from sodium in a number of respects. Whereas sodium is essentially unreactive with graphite, potassium reacts to form a series of interlamellar compounds, the richest having the formula KC8. Compounds are formed with carbon–potassium atomic ratios of 8, 16, 24, 36, 48, and 60 to 1. The graphite lattice is expanded during penetration of the potassium between the layers. Potassium reacts with carbon monoxide at temperatures as low as 60 °C (140 °F) to form an explosive carbonyl (K6C6O6), a derivative of hexahydroxybenzene.

Liquid potassium and NaK both are more reactive than liquid sodium with air and oxygen. Potassium reacts violently with water, producing half a mole of hydrogen gas per mole of potassium reacted and releasing approximately 47 kilocalories of heat per mole. Potassium can be stored in nitrogen gas with no reaction. It reacts with hydrogen at approximately 350 °C (660 °F) to form the hydride.
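
Written out, the water reaction is 2K + 2H2O → 2KOH + H2; the heat released (roughly 47 kilocalories per mole of potassium, as noted above) is generally sufficient to ignite the evolved hydrogen, which accounts for the violence of the reaction.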

Potassium is highly reactive with halogens and detonates when it contacts liquid bromine. Violent explosions also have been observed when mixtures of potassium and halogen acids are subjected to shock. Explosions also have occurred when potassium is mixed with a number of metal halide salts or with organic-halogen compounds.

At elevated temperatures, potassium reduces carbon dioxide to carbon monoxide and carbon. Solid carbon dioxide and potassium react explosively when subjected to shock. Oxidation of potassium amalgam with carbon dioxide results in the formation of potassium oxalate (K2C2O4). Potassium is not reactive with benzene, although heavier alkali metals such as cesium react to give organometallic products.

[Image: foods rich in potassium]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1150 2021-10-03 01:17:04

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,418

Re: Miscellany

1127) Anemometer

Anemometer, device for measuring the speed of airflow in the atmosphere, in wind tunnels, and in other gas-flow applications. Most widely used for wind-speed measurements is the revolving-cup electric anemometer, in which the revolving cups drive an electric generator. The output of the generator operates an electric meter that is calibrated in wind speed. The useful range of this device is from approximately 5 to 100 knots. A propeller may also be used to drive the electric generator, as in the propeller anemometer. In another type of wind-driven unit, revolving vanes operate a counter, the revolutions being timed by a stopwatch and converted to airspeed. This device is especially suited for the measurement of low airspeeds.
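
To make the calibration idea concrete, here is a minimal Python sketch of how a cup rotation rate (or, equivalently, the generator output) might be converted to an indicated wind speed. The assumed linear relation and all the constants and names below are purely illustrative and are not taken from any particular instrument.

CAL_SLOPE_KNOTS_PER_RPS = 2.4   # assumed knots of wind per cup revolution per second
CAL_OFFSET_KNOTS = 1.0          # assumed threshold wind speed for the cups to turn

def wind_speed_knots(revolutions_per_second):
    # Convert a measured cup rotation rate to an indicated wind speed (knots).
    return CAL_OFFSET_KNOTS + CAL_SLOPE_KNOTS_PER_RPS * revolutions_per_second

print(round(wind_speed_knots(10.0), 1))  # 10 rev/s -> 25.0 knots under these assumed constants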

The fact that a stream of air will cool a heated object (the rate of cooling being determined by the speed of the airflow) is the principle underlying the hot-wire anemometer. An electrically heated fine wire is placed in the airflow. As the airflow increases, the wire cools. In the most common type of hot-wire anemometer, the constant-temperature type, power is increased to maintain a constant wire temperature. The input power to the hot wire is then a measure of airspeed, and a meter in the electrical circuit of the hot wire can be calibrated to indicate airspeed. This device is useful for very low airspeeds, below about 5 miles (8 km) per hour. The kata thermometer is a heated-alcohol thermometer; the time it takes to cool is measured and used to determine air current. It is useful for measuring low speeds in studies of air circulation.
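
The constant-temperature principle can be sketched quantitatively with King's law, which relates the heating power P needed to hold the wire at a fixed temperature to the airspeed v, roughly P = (a + b√v)(T_wire − T_air). The Python sketch below inverts that relation to recover an airspeed from a measured power; the coefficients a and b normally come from calibration, and the values here are invented for illustration.

A_COEFF = 0.02   # assumed calibration constant (watts per kelvin)
B_COEFF = 0.01   # assumed calibration constant (watts per kelvin per sqrt(m/s))
DELTA_T = 200.0  # assumed wire-minus-air temperature difference (kelvin)

def airspeed_from_power(power_watts):
    # Invert King's law to estimate airspeed (m/s) from the measured heating power.
    term = power_watts / DELTA_T - A_COEFF
    if term <= 0:
        return 0.0  # at or below the still-air heat loss, report no flow
    return (term / B_COEFF) ** 2

print(airspeed_from_power(6.0))  # with these assumed constants, 6 W -> 1.0 m/s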

A stream of air striking the open end of a tube closed at the other end will build up pressure within the tube. The difference in pressure between the interior of this tube (called a pitot tube) and the surrounding air can be measured and converted to airspeed. Pitot tubes are also used to measure the flow of liquids, particularly in the course of flume studies in fluid mechanics. This anemometer is most useful, however, in strong, steady air streams, such as in wind tunnels and aboard aircraft in flight. With modifications, it can be used to measure supersonic air flow. Another type of pressure anemometer is the Venturi tube, which is open at both ends and of larger diameter at the ends than at the middle. Airspeed is determined by measuring the pressure at the constriction in the tube. Venturi tubes have some applications in industry.
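
For the pitot tube, the conversion from pressure difference to airspeed follows from Bernoulli's relation for low-speed, incompressible flow: v = √(2Δp/ρ), where Δp is the stagnation-minus-static pressure difference and ρ is the air density. The Python sketch below assumes roughly sea-level air density and ignores the compressibility corrections needed at high or supersonic speeds.

import math

AIR_DENSITY = 1.225  # kg per cubic metre, approximate sea-level value

def airspeed_from_pressure(delta_p_pascals, density=AIR_DENSITY):
    # Airspeed (m/s) implied by the stagnation-minus-static pressure difference.
    return math.sqrt(2.0 * delta_p_pascals / density)

print(round(airspeed_from_pressure(500.0), 1))  # 500 Pa -> about 28.6 m/s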

[Image: cup anemometer]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline
