1079) Ethanol
Ethanol, also called ethyl alcohol, grain alcohol, or alcohol, a member of a class of organic compounds that are given the general name alcohols; its molecular formula is C2H5OH. Ethanol is an important industrial chemical; it is used as a solvent, in the synthesis of other organic chemicals, and as an additive to automotive gasoline (forming a mixture known as gasohol). Ethanol is also the intoxicating ingredient of many alcoholic beverages such as beer, wine, and distilled spirits.
There are two main processes for the manufacture of ethanol: the fermentation of carbohydrates (the method used for alcoholic beverages) and the hydration of ethylene. Fermentation involves the transformation of carbohydrates to ethanol by growing yeast cells. The chief raw materials fermented for the production of industrial alcohol are sugar crops such as beets and sugarcane and grain crops such as corn (maize). Hydration of ethylene is achieved by passing a mixture of ethylene and a large excess of steam at high temperature and pressure over an acidic catalyst.
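In outline, the chemistry of the two routes can be summarized as follows (standard textbook equations, added here for illustration):

\[
\mathrm{C_6H_{12}O_6} \;\xrightarrow{\text{yeast}}\; 2\,\mathrm{C_2H_5OH} + 2\,\mathrm{CO_2}
\]

\[
\mathrm{CH_2{=}CH_2} + \mathrm{H_2O} \;\xrightarrow{\text{acid catalyst}}\; \mathrm{C_2H_5OH}
\]

The first equation is the fermentation of glucose by yeast; the second is the direct hydration of ethylene.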
Ethanol produced either by fermentation or by synthesis is obtained as a dilute aqueous solution and must be concentrated by fractional distillation. Direct distillation can yield at best the constant-boiling-point mixture containing 95.6 percent by weight of ethanol. Dehydration of the constant-boiling-point mixture yields anhydrous, or absolute, alcohol. Ethanol intended for industrial use is usually denatured (rendered unfit to drink), typically with methanol, benzene, or kerosene.
Pure ethanol is a colourless flammable liquid (boiling point 78.5 °C [173.3 °F]) with an agreeable ethereal odour and a burning taste. Ethanol is toxic, affecting the central nervous system. Moderate amounts relax the muscles and produce an apparent stimulating effect by depressing the inhibitory activities of the brain, but larger amounts impair coordination and judgment, finally producing coma and death. It is an addictive drug for some persons, leading to the disease alcoholism.
Ethanol is converted in the body first to acetaldehyde and then to carbon dioxide and water, at the rate of about half a fluid ounce, or 15 ml, per hour; this quantity corresponds to a dietary intake of about 100 calories.
1080) Ethyl ether
Ethyl ether, also called diethyl ether, well-known anesthetic, commonly called simply ether, an organic compound belonging to a large group of compounds called ethers; its molecular structure consists of two ethyl groups linked through an oxygen atom, as in C2H5OC2H5.
Ethyl ether is a colourless, volatile, highly flammable liquid (boiling point 34.5 °C [94.1 °F]) with a powerful, characteristic odour and a hot, sweetish taste. It is a widely used solvent for bromine, iodine, most fatty and resinous substances, volatile oils, pure rubber, and certain vegetable alkaloids.
Ethyl ether is manufactured by the distillation of ethyl alcohol with sulfuric acid. Pure ether (absolute ether), required for medical purposes and in the preparation of Grignard reagents, is prepared by washing the crude ether with a saturated aqueous solution of calcium chloride, then treating with sodium.
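The overall reaction in this process is the acid-catalyzed condensation of two ethanol molecules (a standard textbook equation, not part of the article above):

\[
2\,\mathrm{C_2H_5OH} \;\xrightarrow{\mathrm{H_2SO_4},\ \approx 140\,^{\circ}\mathrm{C}}\; \mathrm{C_2H_5OC_2H_5} + \mathrm{H_2O}
\]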
Diethyl ether, or simply ether, is an organic compound in the ether class with the formula (C2H5)2O, sometimes abbreviated as Et2O. It is a colourless, highly volatile, extremely flammable liquid with a sweet, “ethereal” odour. It is commonly used as a solvent in laboratories and as a starting fluid for some engines. It was formerly used as a general anesthetic until non-flammable drugs such as halothane were developed. It has also been used as a recreational drug to cause intoxication. It is a structural isomer of butanol.
1081) Pauli exclusion principle
Pauli exclusion principle, assertion that no two electrons in an atom can be at the same time in the same state or configuration, proposed (1925) by the Austrian physicist Wolfgang Pauli to account for the observed patterns of light emission from atoms. The exclusion principle subsequently has been generalized to include a whole class of particles of which the electron is only one member.
Subatomic particles fall into two classes, based on their statistical behaviour. Those particles to which the Pauli exclusion principle applies are called fermions; those that do not obey this principle are called bosons. In a closed system, such as an atom for electrons or a nucleus for protons and neutrons, fermions are distributed so that a given state is occupied by only one at a time.
Particles obeying the exclusion principle have a characteristic value of spin, or intrinsic angular momentum; their spin is always some odd whole-number multiple of one-half. In the modern view of atoms, the space surrounding the dense nucleus may be thought of as consisting of orbitals, or regions, each of which comprises only two distinct states. The Pauli exclusion principle indicates that, if one of these states is occupied by an electron of spin one-half, the other may be occupied only by an electron of opposite spin, or spin negative one-half. An orbital occupied by a pair of electrons of opposite spin is filled: no more electrons may enter it until one of the pair vacates the orbital. An alternative version of the exclusion principle as applied to atomic electrons states that no two electrons can have the same values of all four quantum numbers.
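As a small illustration of the quantum-number version of the principle, the Python sketch below (the function name is chosen for this example) enumerates the distinct states (n, l, m_l, m_s) available in each electron shell; the exclusion principle says that each such state can hold at most one electron.

```python
# Illustration of the quantum-number form of the Pauli exclusion principle:
# every electron state in an atom is a distinct tuple (n, l, m_l, m_s),
# and no two electrons may share all four values.
from fractions import Fraction

def allowed_states(n):
    """Enumerate the distinct (n, l, m_l, m_s) states in shell n."""
    states = []
    for l in range(n):                                     # l = 0 .. n-1
        for m_l in range(-l, l + 1):                       # m_l = -l .. +l
            for m_s in (Fraction(1, 2), Fraction(-1, 2)):  # spin up / down
                states.append((n, l, m_l, m_s))
    return states

for n in (1, 2, 3):
    print(f"n = {n}: {len(allowed_states(n))} states")     # 2, 8, 18 = 2n^2
```

The counts come out to 2, 8, and 18 states, that is, 2n², the familiar shell capacities behind the structure of the periodic table.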
1082) Kirchhoff's rules
Kirchhoff’s rules, two statements about multi-loop electric circuits that embody the laws of conservation of electric charge and energy and that are used to determine the value of the electric current in each branch of the circuit.
The first rule, the junction theorem, states that the sum of the currents into a specific junction in the circuit equals the sum of the currents out of the same junction. Electric charge is conserved: it does not suddenly appear or disappear; it does not pile up at one point and thin out at another.
The second rule, the loop equation, states that around each loop in an electric circuit the sum of the emf’s (electromotive forces, or voltages, of energy sources such as batteries and generators) is equal to the sum of the potential drops, or voltages across each of the resistances, in the same loop. All the energy imparted by the energy sources to the charged particles that carry the current is just equivalent to that lost by the charge carriers in useful work and heat dissipation around each loop of the circuit.
On the basis of Kirchhoff’s two rules, a sufficient number of equations can be written involving each of the currents so that their values may be determined by an algebraic solution.
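As a minimal sketch of this procedure (with made-up component values: two batteries V1 and V2, each in series with a resistor, joined across a shared resistor R3), the junction and loop equations can be assembled into a linear system and solved directly:

```python
# Kirchhoff's rules as a linear system A @ i = b for a two-loop circuit.
# Branch currents: i1 and i2 flow into the junction, i3 flows out through R3.
import numpy as np

R1, R2, R3 = 2.0, 3.0, 6.0   # resistances, ohms (illustrative values)
V1, V2 = 12.0, 10.0          # battery emfs, volts (illustrative values)

# Junction rule:         i1 + i2 - i3 = 0
# Loop 1 (V1, R1, R3):   R1*i1 + R3*i3 = V1
# Loop 2 (V2, R2, R3):   R2*i2 + R3*i3 = V2
A = np.array([[1.0, 1.0, -1.0],
              [R1,  0.0,  R3],
              [0.0, R2,   R3]])
b = np.array([0.0, V1, V2])

i1, i2, i3 = np.linalg.solve(A, b)
print(f"i1 = {i1:.3f} A, i2 = {i2:.3f} A, i3 = {i3:.3f} A")
```

Here three unknown currents require three independent equations: one from the junction rule and one from each loop.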
Kirchhoff’s rules are also applicable to complex alternating-current circuits and with modifications to complex magnetic circuits.
1083) Adenosine triphosphate
Adenosine triphosphate (ATP), energy-carrying molecule found in the cells of all living things. ATP captures chemical energy obtained from the breakdown of food molecules and releases it to fuel other cellular processes.
Cells require chemical energy for three general types of tasks: to drive metabolic reactions that would not occur automatically; to transport needed substances across membranes; and to do mechanical work, such as moving muscles. ATP is not a storage molecule for chemical energy; that is the job of carbohydrates, such as glycogen, and fats. When energy is needed by the cell, it is converted from storage molecules into ATP. ATP then serves as a shuttle, delivering energy to places within the cell where energy-consuming activities are taking place.
ATP is a nucleotide that consists of three main structures: the nitrogenous base, adenine; the sugar, ribose; and a chain of three phosphate groups bound to ribose. The phosphate tail of ATP is the actual power source that the cell taps. Available energy is contained in the bonds between the phosphates and is released when they are broken, which occurs through the addition of a water molecule (a process called hydrolysis). Usually only the outer phosphate is removed from ATP to yield energy; when this occurs ATP is converted to adenosine diphosphate (ADP), the form of the nucleotide having only two phosphates.
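The hydrolysis step can be written as a standard equation, with the textbook free-energy change under standard biochemical conditions (the value is a well-known reference figure, not quoted from the article):

\[
\mathrm{ATP} + \mathrm{H_2O} \longrightarrow \mathrm{ADP} + \mathrm{P_i}, \qquad \Delta G^{\circ\prime} \approx -30.5\ \mathrm{kJ/mol} \approx -7.3\ \mathrm{kcal/mol}
\]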
ATP is able to power cellular processes by transferring a phosphate group to another molecule (a process called phosphorylation). This transfer is carried out by special enzymes that couple the release of energy from ATP to cellular activities that require energy.
Although cells continuously break down ATP to obtain energy, ATP also is constantly being synthesized from ADP and phosphate through the processes of cellular respiration. Most of the ATP in cells is produced by the enzyme ATP synthase, which converts ADP and phosphate to ATP. ATP synthase is located in the membrane of cellular structures called mitochondria; in plant cells, the enzyme also is found in chloroplasts. The central role of ATP in energy metabolism was discovered by Fritz Albert Lipmann and Herman Kalckar in 1941.
1084) Biomolecule
Biomolecule, also called biological molecule, any of numerous substances that are produced by cells and living organisms. Biomolecules have a wide range of sizes and structures and perform a vast array of functions. The four major types of biomolecules are carbohydrates, lipids, nucleic acids, and proteins.
Among biomolecules, nucleic acids, namely DNA and RNA, have the unique function of storing an organism’s genetic code—the sequence of nucleotides that determines the amino acid sequence of proteins, which are of critical importance to life on Earth. There are 20 different amino acids that can occur within a protein; the order in which they occur plays a fundamental role in determining protein structure and function. Proteins themselves are major structural elements of cells. They also serve as transporters, moving nutrients and other molecules in and out of cells, and as enzymes and catalysts for the vast majority of chemical reactions that take place in living organisms. Proteins also form antibodies and hormones, and they influence gene activity.
Likewise, carbohydrates, which are made up primarily of molecules containing atoms of carbon, hydrogen, and oxygen, are essential energy sources and structural components of all life, and they are among the most abundant biomolecules on Earth. They are classified by the number of sugar units they contain: monosaccharides, disaccharides, oligosaccharides, and polysaccharides. Lipids, another key biomolecule of living organisms, fulfill a variety of roles, including serving as a source of stored energy and acting as chemical messengers. They also form membranes, which separate cells from their environments and compartmentalize the cell interior, giving rise to organelles, such as the nucleus and the mitochondrion, in higher (more complex) organisms.
All biomolecules share in common a fundamental relationship between structure and function, which is influenced by factors such as the environment in which a given biomolecule occurs. Lipids, for example, are hydrophobic (“water-fearing”); in water, many spontaneously arrange themselves in such a way that the hydrophobic ends of the molecules are protected from the water, while the hydrophilic ends are exposed to the water. This arrangement gives rise to lipid bilayers, or two layers of phospholipid molecules, which form the membranes of cells and organelles. In another example, DNA, which is a very long molecule—in humans, the combined length of all the DNA molecules in a single cell stretched end to end would be about 1.8 metres (6 feet), whereas the cell nucleus is about 6 μm (6 × 10⁻⁶ metre) in diameter—has a highly flexible helical structure that allows the molecule to become tightly coiled and looped. This structural feature plays a key role in enabling DNA to fit in the cell nucleus, where it carries out its function in coding genetic traits.
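The compaction implied by those figures is easy to make explicit:

\[
\frac{1.8\ \mathrm{m}}{6 \times 10^{-6}\ \mathrm{m}} = 3 \times 10^{5},
\]

so the DNA must be folded down by a linear factor of roughly 300,000 to fit inside the nucleus.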
1085) Parachute
Parachute, device that slows the vertical descent of a body falling through the atmosphere or the velocity of a body moving horizontally. The parachute increases the body’s surface area, and this increased air resistance slows the body in motion. Parachutes have found wide employment in war and peace for safely dropping supplies and equipment as well as personnel, and they are deployed for slowing a returning space capsule after reentry into Earth’s atmosphere. They are also used in the sport of skydiving.
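The physics can be made quantitative with the standard terminal-velocity expression that follows from the drag equation (a textbook formula, included here for illustration):

\[
v_t = \sqrt{\frac{2mg}{\rho\, C_d A}},
\]

where m is the falling mass, g is the gravitational acceleration, ρ is the air density, C_d is the drag coefficient, and A is the effective cross-sectional area. Because v_t varies inversely with the square root of A, the large canopy area of a parachute sharply reduces descent speed.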
Development And Military Applications
The Shiji (Records of the Great Historian of China), by 2nd-century-BCE Chinese scholar Sima Qian, includes the tale of a Chinese emperor who survived a jump from an upper story of a burning building by grasping conical straw hats in order to slow his descent. Though likely apocryphal, the story nonetheless demonstrates an understanding of the principle behind parachuting. A 13th-century Chinese manuscript contains a similar report of a thief who absconded with part of a statue by leaping from the tower where it was housed while holding two umbrellas. A report that actual parachutes were used at a Chinese emperor’s coronation ceremony in 1306 has not been substantiated by historical record. The first record of a parachute in the West occurred some two centuries later. A diagram of a pyramidal parachute, along with a brief description of the concept, is found in the Codex Atlanticus, a compilation of some 1,000 pages from Leonardo da Vinci’s notebooks (c. 1478–1518). However, there is no evidence suggesting that da Vinci ever actually constructed such a device.
The modern parachute developed at virtually the same time as the balloon, though the two events were independent of each other. The first person to demonstrate the use of a parachute in action was Louis-Sébastien Lenormand of France in 1783. Lenormand jumped from a tree with two parasols. A few years later, other French aeronauts jumped from balloons. André-Jacques Garnerin was the first to use a parachute regularly, making a number of exhibition jumps, including one of about 8,000 feet (2,400 metres) in England in 1802.
Early parachutes—made of canvas or silk—had frames that held them open (like an umbrella). Later in the 1800s, soft, foldable parachutes of silk were used; these were deployed by a device (attached to the airborne platform from which the jumper was diving) that extracted the parachute from a bag. Only later still, in the early 1900s, did the rip cord that allowed the parachutist to deploy the chute appear.
The first successful descent from an airplane was by Capt. Albert Berry of the United States Army in 1912. But in World War I, although parachutes were used with great frequency by men who needed to escape from tethered observation balloons, they were considered impractical for airplanes, and only in the last stage of the war were they finally introduced. In World War II, however, parachutes were employed extensively, especially by the Germans, for a variety of purposes that included landing special troops for combat, supplying isolated or inaccessible troops, and infiltrating agents into enemy territory. Specialized parachutes were invented during World War II for these tasks. One such German-made parachute—the ring, or ribbon, parachute—was composed of a number of concentric rings of radiating ribbons of fabric with openings between them that allowed some airflow; this chute had high aerodynamic stability and performed heavy-duty functions well, such as dropping heavy cargo loads or braking aircraft in short landing runs. In the 1990s, building upon the knowledge gained from manufacturing square sport parachutes, ram-air parachutes were extensively enlarged, and a platform containing a computer that controls the parachute and guides the platform to its designated target was added for military applications; these parachutes are capable of carrying thousands of pounds of payload to precision landing spots.
Parachutes designed to open at supersonic speeds have radically different contours from conventional canopy chutes; they are made in the form of a cone, with air allowed to escape either through pores of the material or through a large circular opening running around the cone. To permit escape from an aircraft flying at supersonic speeds, the parachute is designed as part of an assembly that includes the ejection seat. A small rocket charge ejects pilot, seat, and parachute; when the pilot is clear of the seat, the parachute opens automatically.
Sport Parachuting
The sport parachute has evolved over the years from the traditional round parachute to the square (actually rectangular) ram-air airfoils commonly seen today. Round parachutes were made of nylon and assembled in a pack attached to a harness worn by the user, which contained the parachute canopy, a small pilot chute that assisted in opening the canopy, and suspension lines. The canopy’s strength was the result of sewing together between 20 and 32 separate panels, or gores, each made of smaller sections, in such a way as to try to confine a tear to the section in which it originated. The pack was fitted to the parachutist’s back or chest and opened by a rip cord, by an automatic timing device, or by a static line—i.e., a line fastened to the aircraft. The harness was so constructed that deceleration (as the parachute opened), gravity, and wind forces were transmitted to the wearer’s body with maximum safety and minimum discomfort.
Early in their design evolution, round parachutes had holes placed into them to allow air to escape out the side, which thus provided some degree of maneuverability to the parachutist, who could selectively close off vents to change direction. These round parachutes had a typical forward speed of 5–7 miles per hour (8–11 km per hour). High-performance round parachutes (known as the Para Commander class) were constructed with the apex (top) of the canopy pulled down to create a higher pressure airflow, which was directed through several vent holes in the rear quadrant of the parachute. These parachutes had a typical forward speed of 10–14 mph and were much more maneuverable than the traditional round parachute. For a brief period of time, single-surface double-keel parachutes known as Rogallo Wings or Para Dactyls made an appearance, but they were soon superseded by high-performance square parachutes, which fly by using the aerodynamic principles of an airfoil (wing) and are extremely maneuverable.
Square parachutes are made of low- (or zero-) porosity nylon composed into cells rather than gores. In flight they resemble a tapered air mattress, with openings at the parachute’s front that allow the air to “ram” into the cell structure and inflate the parachute into its airfoil shape. Forward speeds of between 20 and 30 mph are easily obtained, yet the parachute is also capable of delivering the skydiver to ground with a soft landing because the diver can “flare” the canopy (pull the tail down, which causes the canopy to change its pitch) when nearing the ground. The effect is the same as with an aircraft—changing the pitch of the ram-air “wing” converts much of the forward speed to lift and thus minimizes forward and downward velocities at the time of ground contact. The controls for this type of parachute are toggles that are similar to the types seen on the round parachute, and harnesses are fairly similar to older designs as well. Modern parachutes, however, are nearly always worn on the skydiver’s back and are rarely worn on the chest.
1086) Pollution
Pollution, also called environmental pollution, the addition of any substance (solid, liquid, or gas) or any form of energy (such as heat, sound, or radioactivity) to the environment at a rate faster than it can be dispersed, diluted, decomposed, recycled, or stored in some harmless form. The major kinds of pollution, usually classified by environment, are air pollution, water pollution, and land pollution. Modern society is also concerned about specific types of pollutants, such as noise pollution, light pollution, and plastic pollution. Pollution of all kinds can have negative effects on the environment and wildlife and often impacts human health and well-being.
History Of Pollution
Although environmental pollution can be caused by natural events such as forest fires and active volcanoes, use of the word pollution generally implies that the contaminants have an anthropogenic source—that is, a source created by human activities. Pollution has accompanied humankind ever since groups of people first congregated and remained for a long time in any one place. Indeed, ancient human settlements are frequently recognized by their wastes—shell mounds and rubble heaps, for instance. Pollution was not a serious problem as long as there was enough space available for each individual or group. However, with the establishment of permanent settlements by great numbers of people, pollution became a problem, and it has remained one ever since.
Cities of ancient times were often noxious places, fouled by human wastes and debris. Beginning about 1000 CE, the use of coal for fuel caused considerable air pollution, and the conversion of coal to coke for iron smelting beginning in the 17th century exacerbated the problem. In Europe, from the Middle Ages well into the early modern era, unsanitary urban conditions favoured the outbreak of population-decimating epidemics of disease, from plague to cholera and typhoid fever. Through the 19th century, water and air pollution and the accumulation of solid wastes were largely problems of congested urban areas. But, with the rapid spread of industrialization and the growth of the human population to unprecedented levels, pollution became a universal problem.
By the middle of the 20th century, an awareness of the need to protect air, water, and land environments from pollution had developed among the general public. In particular, the publication in 1962 of Rachel Carson’s book Silent Spring focused attention on environmental damage caused by improper use of pesticides such as DDT and other persistent chemicals that accumulate in the food chain and disrupt the natural balance of ecosystems on a wide scale. In response, major pieces of environmental legislation, such as the Clean Air Act (1970) and the Clean Water Act (1972) in the United States, were passed in many countries to control and mitigate environmental pollution.
Pollution Control
The presence of environmental pollution raises the issue of pollution control. Great efforts are made to limit the release of harmful substances into the environment through air pollution control, wastewater treatment, solid-waste management, hazardous-waste management, and recycling. Unfortunately, attempts at pollution control are often surpassed by the scale of the problem, especially in less-developed countries. Noxious levels of air pollution are common in many large cities, where particulates and gases from transportation, heating, and manufacturing accumulate and linger. The problem of plastic pollution on land and in the oceans has only grown as the use of single-use plastics has burgeoned worldwide. In addition, greenhouse gas emissions, such as methane and carbon dioxide, continue to drive global warming and pose a great threat to biodiversity and public health.
1087) Air pollution
Air pollution, release into the atmosphere of various gases, finely divided solids, or finely dispersed liquid aerosols at rates that exceed the natural capacity of the environment to dissipate and dilute or absorb them. These substances may reach concentrations in the air that cause undesirable health, economic, or aesthetic effects.
Major Air Pollutants
Criteria pollutants
Clean, dry air consists primarily of nitrogen and oxygen—78 percent and 21 percent respectively, by volume. The remaining 1 percent is a mixture of other gases, mostly argon (0.9 percent), along with trace (very small) amounts of carbon dioxide, methane, hydrogen, helium, and more. Water vapour is also a normal, though quite variable, component of the atmosphere, normally ranging from 0.01 to 4 percent by volume; under very humid conditions the moisture content of air may be as high as 5 percent.
The gaseous air pollutants of primary concern in urban settings include sulfur dioxide, nitrogen dioxide, and carbon monoxide; these are emitted directly into the air from fossil fuels such as fuel oil, gasoline, and natural gas that are burned in power plants, automobiles, and other combustion sources. Ozone (a key component of smog) is also a gaseous pollutant; it forms in the atmosphere via complex chemical reactions occurring between nitrogen dioxide and various volatile organic compounds (e.g., gasoline vapours).
Airborne suspensions of extremely small solid or liquid particles called “particulates” (e.g., soot, dust, smokes, fumes, mists), especially those less than 10 micrometres (μm; millionths of a metre) in size, are significant air pollutants because of their very harmful effects on human health. They are emitted by various industrial processes, coal- or oil-burning power plants, residential heating systems, and automobiles. Lead fumes (airborne particulates less than 0.5 μm in size) are particularly toxic; a major historical source was the combustion of gasoline containing lead-based additives.
The six major air pollutants listed above have been designated by the U.S. Environmental Protection Agency (EPA) as “criteria” pollutants—criteria meaning that the concentrations of these pollutants in the atmosphere are useful as indicators of overall air quality.
Except for lead, criteria pollutants are emitted in industrialized countries at very high rates, typically measured in millions of tons per year. All except ozone are discharged directly into the atmosphere from a wide variety of sources. They are regulated primarily by establishing ambient air quality standards, which are maximum acceptable concentrations of each criteria pollutant in the atmosphere, regardless of its origin.
Fine particulates
Very small fragments of solid materials or liquid droplets suspended in air are called particulates. Except for airborne lead, which is treated as a separate category, they are characterized on the basis of size and phase (i.e., solid or liquid) rather than by chemical composition. For example, solid particulates between roughly 1 and 100 μm in diameter are called dust particles, whereas airborne solids less than 1 μm in diameter are called fumes.
The particulates of most concern with regard to their effects on human health are solids less than 10 μm in diameter, because they can be inhaled deep into the lungs and become trapped in the lower respiratory system. Certain particulates, such as asbestos fibres, are known carcinogens (cancer-causing agents), and many carbonaceous particulates—e.g., soot—are suspected of being carcinogenic. Major sources of particulate emissions include fossil-fuel power plants, manufacturing processes, fossil-fuel residential heating systems, and gasoline-powered vehicles.
Carbon monoxide
Carbon monoxide is an odourless, invisible gas formed as a result of incomplete combustion. It is the most abundant of the criteria pollutants. Gasoline-powered highway vehicles are the primary source, although residential heating systems and certain industrial processes also emit significant amounts of this gas. Power plants emit relatively little carbon monoxide because they are carefully designed and operated to maximize combustion efficiency. Exposure to carbon monoxide can be acutely harmful since it readily displaces oxygen in the bloodstream, leading to asphyxiation at high enough concentrations and exposure times.
Sulfur dioxide
A colourless gas with a sharp, choking odour, sulfur dioxide is formed during the combustion of coal or oil that contains sulfur as an impurity. Most sulfur dioxide emissions come from power-generating plants; very little comes from mobile sources. This pungent gas can cause eye and throat irritation and harm lung tissue when inhaled.
Sulfur dioxide also reacts with oxygen and water vapour in the air, forming a mist of sulfuric acid that reaches the ground as a component of acid rain. Acid rain is believed to have harmed or destroyed fish and plant life in many thousands of lakes and streams in parts of Europe, the northeastern United States, southeastern Canada, and parts of China. It also causes corrosion of metals and deterioration of the exposed surfaces of buildings and public monuments.
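The underlying chemistry can be summarized by two standard atmospheric reactions (added here for illustration):

\[
2\,\mathrm{SO_2} + \mathrm{O_2} \longrightarrow 2\,\mathrm{SO_3}, \qquad \mathrm{SO_3} + \mathrm{H_2O} \longrightarrow \mathrm{H_2SO_4}
\]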
Nitrogen dioxide
Of the several forms of nitrogen oxides, nitrogen dioxide—a pungent, irritating gas—is of most concern. It is known to cause pulmonary edema, an accumulation of excessive fluid in the lungs. Nitrogen dioxide also reacts in the atmosphere to form nitric acid, contributing to the problem of acid rain. In addition, nitrogen dioxide plays a role in the formation of photochemical smog, a reddish brown haze that often is seen in many urban areas and that is created by sunlight-promoted reactions in the lower atmosphere.
Nitrogen oxides are formed when combustion temperatures are high enough to cause molecular nitrogen in the air to react with oxygen. Stationary sources such as coal-burning power plants are major contributors of this pollutant, although gasoline engines and other mobile sources are also significant.
Ozone
A key component of photochemical smog, ozone is formed by a complex reaction between nitrogen dioxide and hydrocarbons in the presence of sunlight. It is considered to be a criteria pollutant in the troposphere—the lowermost layer of the atmosphere—but not in the upper atmosphere, where it occurs naturally and serves to block harmful ultraviolet rays from the Sun. Because nitrogen dioxide and hydrocarbons are emitted in significant quantities by motor vehicles, photochemical smog is common in cities such as Los Angeles, where sunshine is ample and highway traffic is heavy. Certain geographic features, such as mountains that impede air movement, and weather conditions, such as temperature inversions in the troposphere, contribute to the trapping of air pollutants and the formation of photochemical smog.
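The core photochemical steps can be written as a standard simplified scheme (not part of the article above):

\[
\mathrm{NO_2} + h\nu \longrightarrow \mathrm{NO} + \mathrm{O}, \qquad \mathrm{O} + \mathrm{O_2} + \mathrm{M} \longrightarrow \mathrm{O_3} + \mathrm{M},
\]

where hν is a photon of sunlight and M is any third molecule (such as N2) that carries away the excess energy of the collision.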
Lead
Inhaled lead particulates in the form of fumes and dusts are particularly harmful to children, in whom even slightly elevated levels of lead in the blood can cause learning disabilities, seizures, or even death (see lead poisoning). Sources of airborne lead particulates include oil refining, smelting, and other industrial activities. In the past, combustion of gasoline containing a lead-based antiknock additive called tetraethyl lead was a major source of lead particulates. In many countries there is now a complete ban on the use of lead in gasoline. In the United States, lead concentrations in outdoor air decreased more than 90 percent after the use of leaded gasoline was restricted in the mid-1970s and then completely banned in 1996.
Air toxics
Hundreds of specific substances are considered hazardous when present in trace amounts in the air. These pollutants are called air toxics. Many of them cause genetic mutations or cancer; some cause other types of health problems, such as adverse effects on brain tissue or fetal development. Although the total emissions and the number of sources of air toxics are small compared with those for criteria pollutants, these pollutants can pose an immediate health risk to exposed individuals and can cause other environmental problems.
Most air toxics are organic chemicals, comprising molecules that contain carbon, hydrogen, and other atoms. Many are volatile organic compounds (VOCs), organic compounds that readily evaporate. VOCs include pure hydrocarbons, partially oxidized hydrocarbons, and organic compounds containing chlorine, sulfur, or nitrogen. They are widely used as fuels (e.g., propane and gasoline), as paint thinners and solvents, and in the production of plastics. In addition to contributing to air toxicity and urban smog, some VOC emissions act as greenhouse gases and, in so doing, may be a cause of global warming. Some other air toxics are metals or compounds of metals—for example, mercury, arsenic, and cadmium.
In many countries, standards have been set to control industrial emissions of several air toxics. The first hazardous air pollutants regulated in the United States (outside the workplace environment) were arsenic, asbestos, benzene, beryllium, coke oven emissions, mercury, radionuclides (radioactive isotopes), and vinyl chloride. In 1990 this short list was expanded to include 189 substances. By the end of the 1990s, specific emission control standards were required in the United States for “major sources”—those that release more than 10 tons per year of any of these materials or more than 25 tons per year of any combination of them.
Air toxics may be released in sudden and catastrophic accidents rather than steadily and gradually from many sources. For example, in the Bhopal disaster of 1984, an accidental release of methyl isocyanate at a pesticide factory in Bhopal, Madhya Pradesh state, India, immediately killed at least 3,000 people, eventually caused the deaths of an estimated 15,000 to 25,000 people over the following quarter-century, and injured hundreds of thousands more. The risk of accidental release of very hazardous substances into the air is generally higher for people living in industrialized urban areas. Hundreds of such incidents occur each year, though none has been as severe as the Bhopal event.
Other than in cases of occupational exposure or accidental release, health threats from air toxics are greatest for people who live near large industrial facilities or in congested and polluted urban areas. Most major sources of air toxics are so-called point sources—that is, they have a specific location. Point sources include chemical plants, steel mills, oil refineries, and municipal waste incinerators. Hazardous air pollutants may be released when equipment leaks or when material is transferred, or they may be emitted from smokestacks. Municipal waste incinerators, for example, can emit hazardous levels of dioxins, formaldehyde, and other organic substances, as well as metals such as arsenic, beryllium, lead, and mercury. Nevertheless, proper combustion along with appropriate air pollution control devices can reduce emissions of these substances to acceptable levels.
Hazardous air pollutants also come from “area” sources, which are many smaller sources that release pollutants into the outdoor air in a defined area. Such sources include commercial dry-cleaning facilities, gasoline stations, small metal-plating operations, and woodstoves. Emissions of air toxics from area sources are also regulated under some circumstances.
Small area sources account for about 25 percent of all emissions of air toxics. Major point sources account for another 20 percent. The rest—more than half of hazardous air-pollutant emissions—come from motor vehicles. For example, benzene, a component of gasoline, is released as unburned fuel or as fuel vapours, and formaldehyde is one of the by-products of incomplete combustion. Newer cars, however, have emission control devices that significantly reduce the release of air toxics.
Greenhouse gases
Global warming is recognized by almost all atmospheric scientists as a significant environmental problem caused by an increase in levels of certain trace gases in the atmosphere since the beginning of the Industrial Revolution in the mid-18th century. These gases, collectively called greenhouse gases, include carbon dioxide, organic chemicals called chlorofluorocarbons (CFCs), methane, nitrous oxide, ozone, and many others. Carbon dioxide, although not the most potent of the greenhouse gases, is the most important because of the huge volumes emitted into the air by combustion of fossil fuels (e.g., gasoline, oil, coal).
Carbon dioxide is considered a normal component of the atmosphere, and before the Industrial Revolution the average levels of this gas were about 280 parts per million (ppm). By the early 21st century, the levels of carbon dioxide reached 405 ppm, and they continue to increase at a rate of almost 3 ppm per year. Many scientists think that carbon dioxide should be regulated as a pollutant—a position taken by the EPA in 2009 in a ruling that such regulations could be promulgated. International cooperation and agreements, such as the Paris Agreement of 2015, would be necessary to reduce carbon dioxide emissions worldwide.
Air Pollution And Air Movement
Local air quality typically varies over time because of the effect of weather patterns. For example, air pollutants are diluted and dispersed in a horizontal direction by prevailing winds, and they are dispersed in a vertical direction by atmospheric instability. Unstable atmospheric conditions occur when air masses move naturally in a vertical direction, thereby mixing and dispersing pollutants. When there is little or no vertical movement of air (stable conditions), pollutants can accumulate near the ground and cause temporary but acute episodes of air pollution. With regard to air quality, unstable atmospheric conditions are preferable to stable conditions.
The degree of atmospheric instability depends on the temperature gradient (i.e., the rate at which air temperature changes with altitude). In the troposphere (the lowest layer of the atmosphere, where most weather occurs), air temperatures normally decrease as altitude increases; the faster the rate of decrease, the more unstable the atmosphere. Under certain conditions, however, a temporary “temperature inversion” may occur, during which time the air temperature increases with increasing altitude, and the atmosphere is very stable. Temperature inversions prevent the upward mixing and dispersion of pollutants and are the major cause of air pollution episodes. Certain geographic conditions exacerbate the effect of inversions. For example, Los Angeles, situated on a plain on the Pacific coast of California and surrounded by mountains that block horizontal air motion, is particularly susceptible to the stagnation effects of inversions—hence the infamous Los Angeles smog. On the opposite coast of North America another metropolis, New York City, produces greater quantities of pollutants than does Los Angeles but has been spared major air pollution disasters—only because of favourable climatic and geographic circumstances. During the mid-20th century governmental efforts to reduce air pollution increased substantially after several major inversions, such as the Great Smog of London, a weeklong air pollution episode in London in 1952 that was directly blamed for more than 4,000 deaths.
The Global Reach Of Air Pollution
Because some air pollutants persist in the atmosphere and are carried long distances by winds, air pollution transcends local, regional, and continental boundaries, and it also may have an effect on global climate and weather. For example, acid rain has gained worldwide attention since the 1970s as a regional and even continental problem. Acid rain occurs when sulfur dioxide and nitrogen oxides from the burning of fossil fuels combine with water vapour in the atmosphere, forming sulfuric acid and nitric acid mists. The resulting acidic precipitation is damaging to water, forest, and soil resources. It has caused the disappearance of fish from many lakes in the Adirondack Mountains of North America, the widespread death of forests in mountains of Europe, and damage to tree growth in the United States and Canada. Acid rain can also corrode building materials and be hazardous to human health. These problems are not contained by political boundaries. Emissions from the burning of fossil fuels in the middle sections of the United States and Canada are precipitated as acid rain in the eastern regions of those countries, and acid rain in Norway comes largely from industrial areas in Great Britain and continental Europe. The international scope of the problem has led to the signing of international agreements on the limitation of sulfur and nitrogen oxide emissions.
Another global problem caused by air pollution is ozone depletion in the stratosphere. At ground level (i.e., in the troposphere), ozone is a pollutant, but at altitudes above 12 km (7 miles) it plays a crucial role in absorbing and thereby blocking ultraviolet (UV) radiation from the Sun before it reaches the ground. Exposure to UV radiation has been linked to skin cancer and other health problems. In 1985 it was discovered that a large “ozone hole,” an ozone-depleted region, is present every year between August and November over the continent of Antarctica. The size of this hole is increased by the presence in the atmosphere of chlorofluorocarbons (CFCs); these emanate from aerosol spray cans, refrigerators, industrial solvents, and other sources and are transported to Antarctica by atmospheric circulation. It had already been demonstrated in the mid-1970s that CFCs posed a threat to the global ozonosphere, and in 1978 the use of CFCs as propellants in aerosol cans was banned in the United States. Their use was subsequently restricted in several other countries. In 1987 representatives from more than 45 countries signed the Montreal Protocol, agreeing to place severe limitations on the production of CFCs.
One of the most significant effects of air pollution is on climate change, particularly global warming. As a result of the growing worldwide consumption of fossil fuels, carbon dioxide levels in the atmosphere have increased steadily since 1900, and the rate of increase is accelerating. It has been estimated that if carbon dioxide levels are not reduced, average global air temperatures may rise another 4 °C (7.2 °F) by the end of the 21st century. Such a warming trend might cause melting of the polar ice caps, rising of the sea level, and flooding of the coastal areas of the world. Changes in precipitation patterns caused by global warming might have adverse effects on agriculture and forest ecosystems, and higher temperatures and humidity might increase the incidence of disease in humans and animals in some parts of the world. Implementation of international agreements on reducing greenhouse gases is required to protect global air quality and to mitigate the effects of global warming.
Indoor Air Pollution
Health risks related to indoor air pollution have become an issue of concern because people generally spend most of their time indoors at home and at work. The problem has been exacerbated by well-meaning efforts to lower air-exchange rates in buildings in order to conserve energy; these efforts unfortunately allow contaminants to accumulate indoors. Indoor air pollutants include various combustion products from stoves, kerosene space heaters, and fireplaces, as well as volatile organic compounds (VOCs) from household products (e.g., paints, cleaning agents, and pesticides). Formaldehyde off-gassing from building products (especially particleboard and plywood) and from dry-cleaned textiles can accumulate in indoor air. Bacteria, viruses, molds, animal dander, dust mites, and pollen are biological contaminants that can cause disease and other health problems, especially if they build up in and are spread by central heating or cooling systems. Environmental tobacco smoke, also called secondhand smoke, is an indoor air pollutant in many homes, despite widespread knowledge about the harmful effects of smoking. Secondhand smoke contains many carcinogenic compounds as well as strong irritants. In some geographic regions, naturally occurring radon, a radioactive gas, can seep from the ground into buildings and accumulate to harmful levels. Exposure to all indoor air pollutants can be reduced by appropriate building construction and maintenance methods, limitations on pollutant sources, and provision of adequate ventilation.
1088) Noise pollution
Noise pollution, unwanted or excessive sound that can have deleterious effects on human health and environmental quality. Noise pollution is commonly generated inside many industrial facilities and some other workplaces, but it also comes from highway, railway, and airplane traffic and from outdoor construction activities.
Measuring And Perceiving Loudness
Sound waves are vibrations of air molecules carried from a noise source to the ear. Sound is typically described in terms of the loudness (amplitude) and the pitch (frequency) of the wave. Loudness (also called sound pressure level, or SPL) is measured in logarithmic units called decibels (dB). The normal human ear can detect sounds that range between 0 dB (hearing threshold) and about 140 dB, with sounds between 120 dB and 140 dB causing pain (pain threshold). The ambient SPL in a library is about 35 dB, while that inside a moving bus or subway train is roughly 85 dB; building construction activities can generate SPLs as high as 105 dB at the source. SPLs decrease with distance from the source.
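For reference, the decibel scale rests on the standard definition of sound pressure level (a textbook formula, not given in the passage):

\[
L_p = 20 \log_{10}\!\left(\frac{p}{p_0}\right)\ \mathrm{dB}, \qquad p_0 = 20\ \mu\mathrm{Pa},
\]

where p is the root-mean-square sound pressure and p0 is the reference pressure at the threshold of hearing.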
The rate at which sound energy is transmitted, called sound intensity, is proportional to the square of the sound pressure. Because of the logarithmic nature of the decibel scale, an increase of 10 dB represents a 10-fold increase in sound intensity, an increase of 20 dB represents a 100-fold increase in intensity, a 30-dB increase represents a 1,000-fold increase in intensity, and so on. When sound intensity is doubled, on the other hand, the SPL increases by only 3 dB. For example, if a construction drill causes a noise level of about 90 dB, then two identical drills operating side by side will cause a noise level of 93 dB. On the other hand, when two sounds that differ by more than 15 dB in SPL are combined, the weaker sound is masked (or drowned out) by the louder sound. For example, if an 80-dB drill is operating next to a 95-dB dozer at a construction site, the combined SPL of those two sources will be measured as 95 dB; the less intense sound from the drill will not be noticeable.
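These combination rules follow directly from adding intensities rather than decibel values, as the short Python sketch below shows (the function name is chosen for this example):

```python
# Decibel arithmetic: incoherent sources combine on an intensity basis,
# so the total SPL is 10*log10(sum of 10^(L/10)), not a sum of dB values.
import math

def combine_spl(levels_db):
    """Combined sound pressure level of several incoherent sources."""
    return 10 * math.log10(sum(10 ** (level / 10) for level in levels_db))

print(round(combine_spl([90, 90]), 1))  # two identical 90-dB drills -> 93.0
print(round(combine_spl([80, 95]), 1))  # 80-dB drill + 95-dB dozer  -> 95.1
```

Run on the examples in the text, this reproduces them: two 90-dB drills combine to 93 dB, and an 80-dB drill next to a 95-dB dozer gives essentially 95 dB.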
Frequency of a sound wave is expressed in cycles per second (cps), but hertz (Hz) is more commonly used (1 cps = 1 Hz). The human ear is a very sensitive organ with a wide frequency range, able to detect sounds from about 20 Hz (a very low pitch) up to about 20,000 Hz (a very high pitch). The pitch of a human voice in normal conversation occurs at frequencies between 250 Hz and 2,000 Hz.
Precise measurement and scientific description of sound levels differ from most subjective human perceptions and opinions about sound. Subjective human responses to noise depend on both pitch and loudness. People with normal hearing generally perceive high-frequency sounds to be louder than low-frequency sounds of the same amplitude. For this reason, electronic sound-level meters used to measure noise levels take into account the variations of perceived loudness with pitch. Frequency filters in the meters serve to match meter readings with the sensitivity of the human ear and the relative loudness of various sounds. The so-called A-weighted filter, for example, is commonly used for measuring ambient community noise. SPL measurements made with this filter are expressed as A-weighted decibels, or dBA. Most people perceive and describe a 6- to 10-dBA increase in an SPL reading to be a doubling of “loudness.” Another system, the C-weighted (dBC) scale, is sometimes used for impact noise levels, such as gunfire, and tends to be more accurate than dBA for the perceived loudness of sounds with low frequency components.
Noise levels generally vary with time, so noise measurement data are reported as time-averaged values to express overall noise levels. There are several ways to do this. For example, the results of a set of repeated sound-level measurements may be reported as L90 = 75 dBA, meaning that the levels were equal to or higher than 75 dBA for 90 percent of the time. Another unit, called equivalent sound levels (Leq), can be used to express an average SPL over any period of interest, such as an eight-hour workday. (Leq is a logarithmic average rather than an arithmetic average, so loud events prevail in the overall result.) A unit called day-night sound level (DNL or Ldn) accounts for the fact that people are more sensitive to noise during the night, so a 10-dBA penalty is added to SPL values that are measured between 10 PM and 7 AM. DNL measurements are very useful for describing overall community exposure to aircraft noise, for example.
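The two averaging metrics described above can be sketched in a few lines of Python, applied to a made-up series of dBA readings (the sample values are hypothetical):

```python
# Time-averaged noise metrics for a hypothetical series of dBA readings.
import math

readings = [72, 75, 78, 75, 76, 74, 88, 75, 77, 76]  # made-up samples

# Leq: logarithmic (energy) average, so loud events dominate the result.
leq = 10 * math.log10(sum(10 ** (r / 10) for r in readings) / len(readings))

# L90: the level equalled or exceeded 90 percent of the time,
# i.e. the 10th percentile of the sorted readings.
l90 = sorted(readings)[int(0.1 * len(readings))]

print(f"Leq = {leq:.1f} dBA, L90 = {l90} dBA")
```

A DNL calculation would additionally add the 10-dBA nighttime penalty to each reading taken between 10 PM and 7 AM before forming the same energy average.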
Dealing With The Effects Of Noise
Noise is more than a mere nuisance. At certain levels and durations of exposure, it can cause physical damage to the eardrum and the sensitive hair cells of the inner ear and result in temporary or permanent hearing loss. Hearing loss does not usually occur at SPLs below 80 dBA (eight-hour exposure levels are best kept below 85 dBA), but most people repeatedly exposed to more than 105 dBA will have permanent hearing loss to some extent. In addition to causing hearing loss, excessive noise exposure can also raise blood pressure and pulse rates, cause irritability, anxiety, and mental fatigue, and interfere with sleep, recreation, and personal communication. Noise pollution control is therefore of importance in the workplace and in the community. Noise-control ordinances and laws enacted at the local, regional, and national levels can be effective in mitigating the adverse effects of noise pollution.
Environmental and industrial noise is regulated in the United States under the Occupational Safety and Health Act of 1970 and the Noise Control Act of 1972. Under these acts, the Occupational Safety and Health Administration set up industrial noise criteria in order to provide limits on the intensity of sound exposure and on the time duration for which that intensity may be allowed.
Criteria for indoor noise are summarized in three sets of specifications that have been derived by collecting subjective judgments from a large sampling of people in a variety of specific situations. These have developed into the noise criteria (NC) and preferred noise criteria (PNC) curves, which provide limits on the level of noise introduced into the environment. The NC curves, developed in 1957, aim to provide a comfortable working or living environment by specifying the maximum allowable level of noise in octave bands over the entire audio spectrum. The complete set of 11 curves specifies noise criteria for a broad range of situations. The PNC curves, developed in 1971, add limits on low-frequency rumble and high-frequency hiss; hence, they are preferred over the older NC standard. Summarized in the curves, these criteria provide design goals for noise levels for a variety of different purposes. Part of the specification of a work or living environment is the appropriate PNC curve; in the event that the sound level exceeds PNC limits, sound-absorptive materials can be introduced into the environment as necessary to meet the appropriate standards.
Low levels of noise may be overcome using additional absorbing material, such as heavy drapery or sound-absorbent tiles in enclosed rooms. Where low levels of identifiable noise may be distracting or where privacy of conversations in adjacent offices and reception areas may be important, the undesirable sounds may be masked. A small white-noise source such as static or rushing air, placed in the room, can mask the sounds of conversation from adjacent rooms without being offensive or dangerous to the ears of people working nearby. This type of device is often used in offices of doctors and other professionals. Another technique for reducing personal noise levels is through the use of hearing protectors, which are held over the ears in the same manner as an earmuff. By using commercially available earmuff-type hearing protectors, a decrease in sound level can be attained ranging typically from about 10 dB at 100 Hz to more than 30 dB for frequencies above 1,000 Hz.
Outdoor noise limits are also important for human comfort. Standard house construction will provide some shielding from external sounds if the house meets minimum standards of construction and if the outside noise level falls within acceptable limits. These limits are generally specified for particular periods of the day—for example, during daylight hours, during evening hours, and at night during sleeping hours. Because of refraction in the atmosphere owing to the nighttime temperature inversion, relatively loud sounds can be introduced into an area from a rather distant highway, airport, or railroad. One interesting technique for control of highway noise is the erection of noise barriers alongside the highway, separating the highway from adjacent residential areas. The effectiveness of such barriers is limited by the diffraction of sound, which is greater at the lower frequencies that often predominate in road noise, especially from large vehicles. In order to be effective, they must be as close as possible to either the source or the observer of the noise (preferably to the source), thus maximizing the diffraction that would be necessary for the sound to reach the observer. Another requirement for this type of barrier is that it must also limit the amount of transmitted sound in order to bring about significant noise reduction.
1089) Water pollution
Water pollution, the release of substances into subsurface groundwater or into lakes, streams, rivers, estuaries, and oceans to the point where the substances interfere with beneficial use of the water or with the natural functioning of ecosystems. In addition to the release of substances, such as chemicals or microorganisms, water pollution may also include the release of energy, in the form of radioactivity or heat, into bodies of water.
Sewage And Other Water Pollutants
Water bodies can be polluted by a wide variety of substances, including pathogenic microorganisms, putrescible organic waste, plant nutrients, toxic chemicals, sediments, heat, petroleum (oil), and radioactive substances.
Domestic sewage
Domestic sewage is the primary source of pathogens (disease-causing microorganisms) and putrescible organic substances. Because pathogens are excreted in feces, all sewage from cities and towns is likely to contain pathogens of some type, potentially presenting a direct threat to public health. Putrescible organic matter presents a different sort of threat to water quality. As organics are decomposed naturally in the sewage by bacteria and other microorganisms, the dissolved oxygen content of the water is depleted. This endangers the quality of lakes and streams, where high levels of oxygen are required for fish and other aquatic organisms to survive. Sewage-treatment processes reduce the levels of pathogens and organics in wastewater, but they do not eliminate them completely.
Domestic sewage is also a major source of plant nutrients, mainly nitrates and phosphates. Excess nitrates and phosphates in water promote the growth of algae, sometimes causing unusually dense and rapid growths known as algal blooms. When the algae die, oxygen dissolved in the water declines because microorganisms use oxygen to digest algae during the process of decomposition.
Anaerobic organisms (organisms that do not require oxygen to live) then metabolize the organic wastes, releasing gases such as methane and hydrogen sulfide, which are harmful to the aerobic (oxygen-requiring) forms of life. The process by which a lake changes from a clean, clear condition—with a relatively low concentration of dissolved nutrients and a balanced aquatic community—to a nutrient-rich, algae-filled state and thence to an oxygen-deficient, waste-filled condition is called eutrophication. Eutrophication is a naturally occurring, slow, and inevitable process. However, when it is accelerated by human activity and water pollution (a phenomenon called cultural eutrophication), it can lead to the premature aging and death of a body of water.
Toxic waste
Waste is considered toxic if it is poisonous, radioactive, explosive, carcinogenic (causing cancer), mutagenic (causing damage to chromosomes), teratogenic (causing birth defects), or bioaccumulative (that is, increasing in concentration at the higher ends of food chains). Sources of toxic chemicals include improperly disposed wastewater from industrial plants and chemical process facilities (lead, mercury, chromium) as well as surface runoff containing pesticides used on agricultural areas and suburban lawns (chlordane, dieldrin, heptachlor).
Sediment
Sediment (e.g., silt) resulting from soil erosion can be carried into water bodies by surface runoff. Suspended sediment interferes with the penetration of sunlight and upsets the ecological balance of a body of water. Also, it can disrupt the reproductive cycles of fish and other forms of life, and when it settles out of suspension it can smother bottom-dwelling organisms.
Thermal pollution
Heat is considered to be a water pollutant because it decreases the capacity of water to hold dissolved oxygen in solution, and it increases the rate of metabolism of fish. Valuable species of game fish (e.g., trout) cannot survive in water with very low levels of dissolved oxygen. A major source of heat is the practice of discharging cooling water from power plants into rivers; the discharged water may be as much as 15 °C (27 °F) warmer than the naturally occurring water.
Petroleum (oil) pollution
Petroleum (oil) pollution occurs when oil from roads and parking lots is carried in surface runoff into water bodies. Accidental oil spills are also a source of oil pollution—as in the devastating spills from the tanker Exxon Valdez (which released more than 260,000 barrels in Alaska’s Prince William Sound in 1989) and from the Deepwater Horizon oil rig (which released more than 4 million barrels of oil into the Gulf of Mexico in 2010). Oil slicks eventually move toward shore, harming aquatic life and damaging recreation areas.
Groundwater And Oceans
Groundwater—water contained in underground geologic formations called aquifers—is a source of drinking water for many people. For example, about half the people in the United States depend on groundwater for their domestic water supply. Although groundwater may appear crystal clear (due to the natural filtration that occurs as it flows slowly through layers of soil), it may still be polluted by dissolved chemicals and by bacteria and viruses. Sources of chemical contaminants include poorly designed or poorly maintained subsurface sewage-disposal systems (e.g., septic tanks), industrial wastes disposed of in improperly lined or unlined landfills or lagoons, leachates from unlined municipal refuse landfills, mining and petroleum production, and leaking underground storage tanks below gasoline service stations. In coastal areas, increasing withdrawal of groundwater (due to urbanization and industrialization) can cause saltwater intrusion: as the water table drops, seawater is drawn into wells.
Although estuaries and oceans contain vast volumes of water, their natural capacity to absorb pollutants is limited. Contamination from sewage outfall pipes, from dumping of sludge or other wastes, and from oil spills can harm marine life, especially the microscopic phytoplankton that serve as food for larger aquatic organisms. Sometimes unsightly and dangerous waste materials are washed back to shore, littering beaches with hazardous debris. By 2010, an estimated 4.8 million to 12.7 million tonnes (5.3 million to 14 million tons) of plastic debris were entering the oceans each year, and floating plastic waste had accumulated in Earth’s five subtropical gyres, which cover 40 percent of the world’s oceans.
Another ocean pollution problem is the seasonal formation of “dead zones” (i.e., hypoxic areas, where dissolved oxygen levels drop so low that most higher forms of aquatic life vanish) in certain coastal areas. The cause is nutrient enrichment from dispersed agricultural runoff and concomitant algal blooms. Dead zones occur worldwide; one of the largest of these (sometimes as large as 22,730 square km [8,776 square miles]) forms annually in the Gulf of Mexico, beginning at the Mississippi River delta.
Sources Of Pollution
Water pollutants come from either point sources or dispersed sources. A point source is a pipe or channel, such as those used for discharge from an industrial facility or a city sewerage system. A dispersed (or nonpoint) source is a very broad, unconfined area from which a variety of pollutants enter the water body, such as the runoff from an agricultural area. Point sources of water pollution are easier to control than dispersed sources because the contaminated water has been collected and conveyed to a single point where it can be treated. Pollution from dispersed sources is difficult to control, and, despite much progress in the building of modern sewage-treatment plants, dispersed sources continue to cause a large fraction of water pollution problems.
Water Quality Standards
Although pure water is rarely found in nature (because of the strong tendency of water to dissolve other substances), the characterization of water quality (i.e., clean or polluted) is a function of the intended use of the water. For example, water that is clean enough for swimming and fishing may not be clean enough for drinking and cooking. Water quality standards (limits on the amount of impurities allowed in water intended for a particular use) provide a legal framework for the prevention of water pollution of all types.
There are several types of water quality standards. Stream standards are those that classify streams, rivers, and lakes on the basis of their maximum beneficial use; they set allowable levels of specific substances or qualities (e.g., dissolved oxygen, turbidity, pH) allowed in those bodies of water, based on their given classification. Effluent (water outflow) standards set specific limits on the levels of contaminants (e.g., biochemical oxygen demand, suspended solids, nitrogen) allowed in the final discharges from wastewater-treatment plants. Drinking-water standards include limits on the levels of specific contaminants allowed in potable water delivered to homes for domestic use. In the United States, the Clean Water Act and its amendments regulate water quality and set minimum standards for waste discharges for each industry as well as regulations for specific problems such as toxic chemicals and oil spills. In the European Union, water quality is governed by the Water Framework Directive, the Drinking Water Directive, and other laws.
1090) Land pollution
Land pollution, the deposition of solid or liquid waste materials on land or underground in a manner that can contaminate the soil and groundwater, threaten public health, and cause unsightly conditions and nuisances.
The waste materials that cause land pollution are broadly classified as municipal solid waste (MSW, also called municipal refuse), construction and demolition (C&D) waste or debris, and hazardous waste. MSW includes nonhazardous garbage, rubbish, and trash from homes, institutions (e.g., schools), commercial establishments, and industrial facilities. Garbage contains moist and decomposable (biodegradable) food wastes (e.g., meat and vegetable scraps); rubbish comprises mostly dry materials such as paper, glass, textiles, and plastic objects; and trash includes bulky waste materials and objects that are not collected routinely for disposal (e.g., discarded mattresses, appliances, pieces of furniture). C&D waste (or debris) includes wood and metal objects, wallboard, concrete rubble, asphalt, and other inert materials produced when structures are built, renovated, or demolished. Hazardous wastes include harmful and dangerous substances generated primarily as liquids but also as solids, sludges, or gases by various chemical manufacturing companies, petroleum refineries, paper mills, smelters, machine shops, dry cleaners, automobile repair shops, and many other industries or commercial facilities. In addition to improper disposal of MSW, C&D waste, and hazardous waste, contaminated effluent from subsurface sewage disposal (e.g., from septic tanks) can also be a cause of land pollution.
The permeability of soil formations underlying a waste-disposal site is of great importance with regard to land pollution. The greater the permeability, the greater the risks from land pollution. Soil consists of a mixture of unconsolidated mineral and rock fragments (gravel, sand, silt, and clay) formed from natural weathering processes. Gravel and sand formations are porous and permeable, allowing the free flow of water through the pores or spaces between the particles. Silt is much less permeable than sand or gravel, because of its small particle and pore sizes, while clay is virtually impermeable to the flow of water, because of its platelike shape and molecular forces.
Until the mid-20th century, solid wastes were generally collected and placed on top of the ground in uncontrolled “open dumps,” which often became breeding grounds for rats, mosquitoes, flies, and other disease carriers and were sources of unpleasant odours, windblown debris, and other nuisances. Dumps can contaminate groundwater as well as pollute nearby streams and lakes. A highly contaminated liquid called leachate is generated from decomposition of garbage and precipitation that infiltrates and percolates downward through the volume of waste material. When leachate reaches and mixes with groundwater or seeps into nearby bodies of surface water, public health and environmental quality are jeopardized. Methane, a flammable and explosive gas that easily flows through soil, is an eventual by-product of the anaerobic (in the absence of oxygen) decomposition of putrescible solid waste material. Open dumping of solid waste is no longer allowed in many countries. Nevertheless, leachate and methane from old dumps continue to cause land pollution problems in some areas.
A modern technique for land disposal of solid waste involves construction and daily operation and control of so-called sanitary landfills. Sanitary landfills are not dumps; they are carefully planned and engineered facilities designed to control leachate and methane and minimize the risk of land pollution from solid-waste disposal. Sanitary landfill sites are carefully selected and prepared with impermeable bottom liners to collect leachate and prevent contamination of groundwater. Bottom liners typically consist of flexible plastic membranes and a layer of compacted clay. The waste material—MSW and C&D debris—is spread out, compacted with heavy machinery, and covered each day with a layer of compacted soil. Leachate is collected in a network of perforated pipes at the bottom of the landfill and pumped to an on-site treatment plant or nearby public sewerage system. Methane is also collected in the landfill and safely vented to the atmosphere or recovered for use as a fuel known as biogas, or landfill gas. Groundwater-monitoring wells must be placed around the landfill and sampled periodically to ensure proper landfill operation. Completed landfills are capped with a layer of clay or an impermeable membrane to prevent water from entering. A layer of topsoil and various forms of vegetation are placed as a final cover. Completed landfills are often used as public parks or playgrounds.
Hazardous waste differs from MSW and C&D debris in both form and behaviour. Its disposal requires special attention because it can cause serious illnesses or injuries and can pose immediate and significant threats to environmental quality. The main characteristics of hazardous waste include toxicity, reactivity, ignitability, and corrosivity. In addition, waste products that may be infectious or are radioactive are also classified as hazardous waste. Although land disposal of hazardous waste is not always the best option, solid or containerized hazardous wastes can be disposed of by burial in “secure landfills,” while liquid hazardous waste can be disposed of underground in deep-well injection systems if the geologic conditions are suitable. Some hazardous wastes such as dioxins, PCBs, cyanides, halogenated organics, and strong acids are banned from land disposal in the United States, unless they are first treated or stabilized or meet certain concentration limits. Secure landfills must have at least 3 metres (10 feet) of soil between the bottom of the landfill and underlying bedrock or groundwater table (twice that required for municipal solid-waste landfills), a final impermeable cover when completed, and a double impervious bottom liner for increased safety. Underground injection wells (into which liquid waste is pumped under high pressure) must deposit the liquid in a permeable layer of rock that is sandwiched between impervious layers of rock or clay. The wells must also be encased and sealed in three concentric pipes and be at least 400 metres (0.25 mile) from any drinking-water supplies for added safety.
Before modern techniques for disposing of hazardous wastes were legislated and put into practice, the wastes were generally disposed of or stored in surface piles, lagoons, ponds, or unlined landfills. Thousands of those waste sites still exist, now old and abandoned. Also, the illegal but frequent practice of “midnight dumping” of hazardous wastes, as well as accidental spills, has contaminated thousands of industrial land parcels and continues to pose serious threats to public health and environmental quality. Efforts to remediate or clean up such sites will continue for years to come. In 1980 the United States Congress created the Superfund program and authorized billions of dollars toward site remediation; today there are still about 1,300 sites on the Superfund list requiring remediation. The first listed Superfund site—Love Canal, located in Niagara Falls, N.Y.—was not removed from the list until 2004.
1091) Germination
Germination, the sprouting of a seed, spore, or other reproductive body, usually after a period of dormancy. The absorption of water, the passage of time, chilling, warming, oxygen availability, and light exposure may all operate in initiating the process.
In the process of seed germination, water is absorbed by the embryo, which results in the rehydration and expansion of the cells. Shortly after the beginning of water uptake, or imbibition, the rate of respiration increases, and various metabolic processes, suspended or much reduced during dormancy, resume. These events are associated with structural changes in the organelles (membranous bodies concerned with metabolism) in the cells of the embryo.
Germination sometimes occurs early in the development process; the mangrove (Rhizophora) embryo develops within the ovule, pushing out a swollen rudimentary root through the still-attached flower. In peas and corn (maize) the cotyledons (seed leaves) remain underground (hypogeal germination), while in other species (beans, sunflowers, etc.) the hypocotyl (embryonic stem) grows several inches above the ground, carrying the cotyledons into the light, where they become green and often leaflike (epigeal germination).
Seed Dormancy
Dormancy is brief for some seeds—for example, those of certain short-lived annual plants. After dispersal and under appropriate environmental conditions, such as suitable temperature and access to water and oxygen, the seed germinates, and the embryo resumes growth.
The seeds of many species do not germinate immediately after exposure to conditions generally favourable for plant growth but require a “breaking” of dormancy, which may be associated with change in the seed coats or with the state of the embryo itself. Commonly, the embryo has no innate dormancy and will develop after the seed coat is removed or sufficiently damaged to allow water to enter. Germination in such cases depends upon rotting or abrasion of the seed coat in the gut of an animal or in the soil. Inhibitors of germination must be either leached away by water or the tissues containing them destroyed before germination can occur. Mechanical restriction of the growth of the embryo is common only in species that have thick, tough seed coats. Germination then depends upon weakening of the coat by abrasion or decomposition.
In many seeds the embryo cannot germinate even under suitable conditions until a certain period of time has elapsed. The time may be required for continued embryonic development in the seed or for some necessary finishing process—known as afterripening—the nature of which remains obscure.
The seeds of many plants that endure cold winters will not germinate unless they experience a period of low temperature, usually somewhat above freezing. Otherwise, germination fails or is much delayed, with the early growth of the seedling often abnormal. (This response of seeds to chilling has a parallel in the temperature control of dormancy in buds.) In some species, germination is promoted by exposure to light of appropriate wavelengths. In others, light inhibits germination. For the seeds of certain plants, germination is promoted by red light and inhibited by light of longer wavelength, in the “far red” range of the spectrum. The precise significance of this response is as yet unknown, but it may be a means of adjusting germination time to the season of the year or of detecting the depth of the seed in the soil. Light sensitivity and temperature requirements often interact, the light requirement being entirely lost at certain temperatures.
Seedling Emergence
Active growth in the embryo, other than swelling resulting from imbibition, usually begins with the emergence of the primary root, known as the radicle, from the seed, although in some species (e.g., the coconut) the shoot, or plumule, emerges first. Early growth is dependent mainly upon cell expansion, but within a short time cell division begins in the radicle and young shoot, and thereafter growth and further organ formation (organogenesis) are based upon the usual combination of increase in cell number and enlargement of individual cells.
Until it becomes nutritionally self-supporting, the seedling depends upon reserves provided by the parent sporophyte. In angiosperms these reserves are found in the endosperm, in residual tissues of the ovule, or in the body of the embryo, usually in the cotyledons. In gymnosperms food materials are contained mainly in the female gametophyte. Since reserve materials are partly in insoluble form—as starch grains, protein granules, lipid droplets, and the like—much of the early metabolism of the seedling is concerned with mobilizing these materials and delivering, or translocating, the products to active areas. Reserves outside the embryo are digested by enzymes secreted by the embryo and, in some instances, also by special cells of the endosperm.
In some seeds (e.g., castor beans) absorption of nutrients from reserves is through the cotyledons, which later expand in the light to become the first organs active in photosynthesis. When the reserves are stored in the cotyledons themselves, these organs may shrink after germination and die or develop chlorophyll and become photosynthetic.
Environmental factors play an important part not only in determining the orientation of the seedling during its establishment as a rooted plant but also in controlling some aspects of its development. The response of the seedling to gravity is important. The radicle, which normally grows downward into the soil, is said to be positively geotropic. The young shoot, or plumule, is said to be negatively geotropic because it moves away from the soil; it rises by the extension of either the hypocotyl, the region between the radicle and the cotyledons, or the epicotyl, the segment above the level of the cotyledons. If the hypocotyl is extended, the cotyledons are carried out of the soil. If the epicotyl elongates, the cotyledons remain in the soil.
Light affects both the orientation of the seedling and its form. When a seed germinates below the soil surface, the plumule may emerge bent over, thus protecting its delicate tip, only to straighten out when exposed to light (the curvature is retained if the shoot emerges into darkness). Correspondingly, the young leaves of the plumule in such plants as the bean do not expand and become green except after exposure to light. These adaptive responses are known to be governed by reactions in which the light-sensitive pigment phytochrome plays a part. In most seedlings the shoot shows a strong attraction to light, or a positive phototropism, which is most evident when the source of light is from one direction. Combined with the response to gravity, this positive phototropism maximizes the likelihood that the aerial parts of the plant will reach the environment most favourable for photosynthesis.
1092) Perennial, Biennial, and Annual
Perennial
Perennial, any plant that persists for several years, usually with new herbaceous growth from a part that survives from season to season. Trees and shrubs are perennial, as are some herbaceous flowers and vegetative ground covers. Perennials have only a limited flowering period, but, with maintenance throughout the growing season, they provide a leafy presence and shape to the garden landscape. Popular flowering perennials include bellflowers, chrysanthemums, columbines, larkspurs, hollyhocks, phlox, pinks, poppies, and primroses.
Biennial
Biennial, any plant that completes its life cycle in two growing seasons. During the first growing season biennials produce roots, stems, and leaves; during the second they produce flowers, fruits, and seeds, and then die. Sugar beets and carrots are examples of biennials.
Annual
Annual, any plant that completes its life cycle in a single growing season. The dormant seed is the only part of an annual that survives from one growing season to the next. Annuals include many weeds, wildflowers, garden flowers, and vegetables.
1093) Life cycle
Life cycle, in biology, the series of changes that the members of a species undergo as they pass from the beginning of a given developmental stage to the inception of that same developmental stage in a subsequent generation.
In many simple organisms, including bacteria and various protists, the life cycle is completed within a single generation: an organism begins with the fission of an existing individual; the new organism grows to maturity; and it then splits into two new individuals, thus completing the cycle. In higher animals, the life cycle also encompasses a single generation: the individual animal begins with the fusion of male and female reproductive cells (gametes); it grows to reproductive maturity; and it then produces gametes, at which point the cycle begins anew (assuming that fertilization takes place).
In most plants, by contrast, the life cycle is multigenerational. An individual plant begins with the germination of a spore, which grows into a gamete-producing organism (the gametophyte). The gametophyte reaches maturity and forms gametes, which, following fertilization, grow into a spore-producing organism (the sporophyte). Upon reaching reproductive maturity, the sporophyte produces spores, and the cycle starts again. This multigenerational life cycle is called alternation of generations; it occurs in some protists and fungi as well as in plants.
The life cycle characteristic of bacteria is termed haplontic. This term refers to the fact that it encompasses a single generation of organisms whose cells are haploid (i.e., contain one set of chromosomes). The one-generational life cycle of the higher animals is diplontic; it involves only organisms whose body cells are diploid (i.e., contain two sets of chromosomes). Organisms with diplontic cycles produce male and female gametes that are haploid, and each of these gametes must combine with another gamete in order to obtain the double set of chromosomes necessary to grow into a complete organism. The life cycle typified by plants is known as diplohaplontic, because it includes both a diploid generation (the sporophyte) and a haploid generation (the gametophyte).
1094) Specific heat
Specific heat, ratio of the quantity of heat required to raise the temperature of a body one degree to that required to raise the temperature of an equal mass of water one degree. The term is also used in a narrower sense to mean the amount of heat, in calories, required to raise the temperature of one gram of a substance by one Celsius degree. The Scottish scientist Joseph Black, in the 18th century, noticed that equal masses of different substances needed different amounts of heat to raise them through the same temperature interval, and, from this observation, he founded the concept of specific heat. In the early 19th century the French physicists Pierre-Louis Dulong and Alexis-Thérèse Petit demonstrated that measurements of specific heats of substances allow calculation of their atomic weights.
Heat capacity
Heat capacity, ratio of heat absorbed by a material to the temperature change. It is usually expressed as calories per degree in terms of the actual amount of material being considered, most commonly a mole (the molecular weight in grams). The heat capacity in calories per gram is called specific heat. The definition of the calorie is based on the specific heat of water, defined as one calorie per gram per degree Celsius.
At sufficiently high temperatures, the heat capacity per atom tends to be the same for all elements. For metals of higher atomic weight, this approximation is already a good one at room temperature, giving rise to the law of Dulong and Petit. For other materials, heat capacity and its temperature variation depend on differences in energy levels for atoms (available quantum states). Heat capacities are measured with some variety of calorimeter, and, using the formulation of the third law of thermodynamics, heat-capacity measurements became important as a means of determining the entropies of various materials.
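As a worked illustration of these definitions, a short Python sketch of the relation Q = mcΔT (the specific-heat values are standard reference figures; the function name is illustrative):

# Heat required to warm a sample: Q = m * c * dT,
# with c the specific heat in calories per gram per degree Celsius.
def heat_required(mass_g: float, specific_heat: float, delta_t: float) -> float:
    return mass_g * specific_heat * delta_t

# Water has a specific heat of 1 cal/(g °C) by the definition of the calorie;
# copper's is only about 0.092 cal/(g °C).
print(heat_required(100, 1.0, 25))    # 2500.0 cal to warm 100 g of water by 25 °C
print(heat_required(100, 0.092, 25))  # 230.0 cal to warm 100 g of copper by 25 °C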
Dulong–Petit law
Dulong–Petit law, statement that the gram-atomic heat capacity (specific heat times atomic weight) of an element is a constant; that is, it is the same for all solid elements, about six calories per gram atom per degree Celsius. The law was formulated (1819) on the basis of observations by the French chemist Pierre-Louis Dulong and the French physicist Alexis-Thérèse Petit. If the specific heat of an element is measured, its atomic weight can be calculated using this empirical law; and many atomic weights were originally so derived. Later the law was modified to apply only to metallic elements, and later still low-temperature measurements showed that the heat capacity of all solids tends toward zero at sufficiently low temperature. The law is now used only as an approximation at intermediately high temperatures.
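A sketch of the law's historical use for estimating atomic weights: dividing the constant by a measured specific heat gives a rough atomic weight (the specific-heat inputs are standard reference values; variable names are illustrative):

# Dulong-Petit: specific heat (cal/(g °C)) times atomic weight is roughly
# 6 cal per gram atom per degree, so atomic weight ≈ 6 / specific heat.
DULONG_PETIT_CONSTANT = 6.0

def estimated_atomic_weight(specific_heat_cal_per_g: float) -> float:
    return DULONG_PETIT_CONSTANT / specific_heat_cal_per_g

print(estimated_atomic_weight(0.092))  # copper: ~65 (actual atomic weight ~63.5)
print(estimated_atomic_weight(0.031))  # lead:   ~194 (actual atomic weight ~207)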
1095) Latent heat
Latent heat, energy absorbed or released by a substance during a change in its physical state (phase) that occurs without changing its temperature. The latent heat associated with melting a solid or freezing a liquid is called the heat of fusion; that associated with vaporizing a liquid or a solid or condensing a vapour is called the heat of vaporization. The latent heat is normally expressed as the amount of heat (in units of joules or calories) per mole or unit mass of the substance undergoing a change of state.
For example, when a pot of water is kept boiling, the temperature remains at 100 °C (212 °F) until the last drop evaporates, because all the heat being added to the liquid is absorbed as latent heat of vaporization and carried away by the escaping vapour molecules. Similarly, while ice melts, it remains at 0 °C (32 °F), and the liquid water formed by the absorption of the latent heat of fusion is also at 0 °C. The heat of fusion for water at 0 °C is approximately 334 joules (79.7 calories) per gram, and the heat of vaporization at 100 °C is about 2,230 joules (533 calories) per gram. Because the heat of vaporization is so large, steam carries a great deal of thermal energy that is released when it condenses, making water an excellent working fluid for heat engines.
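A worked example in Python using the figures just quoted (the specific heat of liquid water, about 4.19 J per gram per degree Celsius, is the one additional standard value assumed):

# Heat needed to take ice at 0 °C all the way to steam at 100 °C, per gram.
HEAT_OF_FUSION = 334          # J/g, melting ice at 0 °C
SPECIFIC_HEAT_WATER = 4.19    # J/(g °C), warming the liquid
HEAT_OF_VAPORIZATION = 2230   # J/g, boiling water at 100 °C

def heat_ice_to_steam(mass_g: float) -> float:
    melt = mass_g * HEAT_OF_FUSION
    warm = mass_g * SPECIFIC_HEAT_WATER * 100  # from 0 °C to 100 °C
    boil = mass_g * HEAT_OF_VAPORIZATION
    return melt + warm + boil

print(heat_ice_to_steam(1))  # ~2983 J; most of it is the latent heat of vaporization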
Latent heat arises from the work required to overcome the forces that hold together atoms or molecules in a material. The regular structure of a crystalline solid is maintained by forces of attraction among its individual atoms, which oscillate slightly about their average positions in the crystal lattice. As the temperature increases, these motions become increasingly violent until, at the melting point, the attractive forces are no longer sufficient to maintain the stability of the crystal lattice. However, additional heat (the latent heat of fusion) must be added (at constant temperature) in order to accomplish the transition to the even more-disordered liquid state, in which the individual particles are no longer held in fixed lattice positions but are free to move about through the liquid. A liquid differs from a gas in that the forces of attraction between the particles are still sufficient to maintain a long-range order that endows the liquid with a degree of cohesion. As the temperature further increases, a second transition point (the boiling point) is reached where the long-range order becomes unstable relative to the largely independent motions of the particles in the much larger volume occupied by a vapour or gas. Once again, additional heat (the latent heat of vaporization) must be added to break the long-range order of the liquid and accomplish the transition to the largely disordered gaseous state.
Latent heat is associated with processes other than changes among the solid, liquid, and vapour phases of a single substance. Many solids exist in different crystalline modifications, and the transitions between these generally involve absorption or evolution of latent heat. The process of dissolving one substance in another often involves heat; if the solution process is a strictly physical change, the heat is a latent heat. Sometimes, however, the process is accompanied by a chemical change, and part of the heat is that associated with the chemical reaction.
Latent heat is the energy released or absorbed by a substance during a change in its physical state that occurs without any change in its temperature. The latent heat associated with the freezing of a liquid or the melting of a solid is known as the heat of fusion, whereas the latent heat associated with the condensation of a vapour or the vaporization of a liquid is known as the heat of vaporization.
Latent Heat Is Used to Break Bonds
Normally, latent heat is expressed as the amount of heat, in units of joules or calories, per mole or unit mass of the substance undergoing the change of state. While a substance is changing state, its temperature does not rise even though heat is being added: all the added heat energy is used to break the bonds between particles, which is essential for completing the change of state.
Latent Heat of Melting
Latent heat arises from the work required to overcome the forces of attraction that hold the atoms or molecules together in a material. The regular structure of a crystalline solid is maintained by the forces of attraction among its individual atoms. As the temperature increases, the motion of the particles becomes increasingly vigorous, and at the melting point the forces of attraction are no longer sufficient to maintain the stability of the crystal lattice. Additional heat, the latent heat of fusion, must then be added to accomplish the transition to the more disordered liquid state.
Latent Heat of Vaporization
A liquid differs from a gas in that the forces of attraction between the particles of a gas are far weaker than those between the particles of a liquid. In liquid water, the forces of attraction are sufficient to maintain the ordering that gives the water molecules their degree of cohesion. With a further increase in temperature, a second transition point, the boiling point, is reached, at which this order becomes unstable and the particles begin to move largely independently of one another, changing the liquid to the gaseous state. Additional heat, the latent heat of vaporization, must be added to break the remaining order of the liquid and accomplish the transition to the greatly disordered gaseous state.
1096) Melting
Melting, change of a solid into a liquid when heat is applied. In a pure crystalline solid, this process occurs at a fixed temperature called the melting point; an impure solid generally melts over a range of temperatures below the melting point of the principal component. Amorphous (non-crystalline) substances such as glass or pitch melt by gradually decreasing in viscosity as temperature is raised, with no sharp transition from solid to liquid.
The structure of a liquid is always less ordered than that of the crystalline solid, and therefore the liquid commonly occupies a larger volume. The behaviour of ice, which floats on water, and of a few other substances is a notable exception to the usual decrease in density upon melting.
Melting of a given mass of a solid requires the addition of a characteristic amount of heat, the heat of fusion. In the reverse process, the freezing of the liquid to form the solid, the same quantity of heat must be removed. The heat of fusion of ice, the heat required to melt one gram, is about 80 calories; this amount of heat would raise the temperature of a gram of liquid water from the freezing point (0 °C, or 32 °F) to 80 °C (176 °F).
1097) Picric acid
Picric acid, also called 2,4,6-trinitrophenol, pale yellow, odourless crystalline solid that has been used as a military explosive, as a yellow dye, and as an antiseptic. Picric acid (from Greek pikros, “bitter”) was so named by the 19th-century French chemist Jean-Baptiste-André Dumas because of the extremely bitter taste of its yellow aqueous solution. Percussion or rapid heating can cause it (or its salts with heavy metals, such as copper, silver, or lead) to explode.
Picric acid was first obtained in 1771 by Peter Woulfe, a British chemist, by treating indigo with nitric acid. It was used as a yellow dye, initially for silk, beginning in 1849.
As an explosive, picric acid was formerly of great importance. The French began using it in 1886 as a bursting charge for shells under the name of melinite. By the time of the Russo-Japanese War, picric acid was the most widely used military explosive. Its highly corrosive action on the metal surfaces of shells was a disadvantage, however, and after World War I its use declined. Ammonium picrate, one of the salts of picric acid, is used in modern armour-piercing shells because it is insensitive enough to withstand the severe shock of penetration before detonating.
Picric acid has antiseptic and astringent properties. For medical use it is incorporated in a surface anesthetic ointment or solution and in burn ointments.
Picric acid is a much stronger acid than phenol; it decomposes carbonates and may be titrated with bases. In a basic medium, lead acetate produces a bright yellow precipitate, lead picrate.
1098) Boltzmann constant
Boltzmann constant, (symbol k), a fundamental constant of physics occurring in nearly every statistical formulation of both classical and quantum physics. The constant is named after Ludwig Boltzmann, a 19th-century Austrian physicist, who substantially contributed to the foundation and development of statistical mechanics, a branch of theoretical physics. Having dimensions of energy per degree of temperature, the Boltzmann constant has a value of 1.380649 × 10^-23 joule per kelvin (K), or 1.380649 × 10^-16 erg per kelvin.
The physical significance of k is that it provides a measure of the amount of energy (i.e., heat) corresponding to the random thermal motions of the particles making up a substance. For a classical system at equilibrium at temperature T, the average energy per degree of freedom is kT/2. In the simplest example of a gas consisting of N noninteracting atoms, each atom has three translational degrees of freedom (it can move in the x-, y-, or z-directions), and so the total thermal energy of the gas is 3NkT/2.
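A minimal Python sketch of the relation just stated, E = 3NkT/2 for a monatomic ideal gas (the temperature used in the printout is an illustrative choice):

# Total translational thermal energy of N noninteracting atoms: E = 3*N*k*T/2.
K_BOLTZMANN = 1.380649e-23   # joule per kelvin
AVOGADRO = 6.02214076e23     # atoms per mole

def thermal_energy(n_atoms: float, temperature_k: float) -> float:
    return 1.5 * n_atoms * K_BOLTZMANN * temperature_k

print(thermal_energy(AVOGADRO, 300))  # ~3740 J for one mole of gas atoms at 300 K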
The Boltzmann constant is the proportionality factor that relates the average relative kinetic energy of particles in a gas with the thermodynamic temperature of the gas. It occurs in the definitions of the kelvin and the gas constant, and in Planck's law of black-body radiation and Boltzmann's entropy formula. The Boltzmann constant has dimensions of energy divided by temperature, the same as entropy. It is named after the Austrian scientist Ludwig Boltzmann.
1099) Dielectric constant
Dielectric constant, also called relative permittivity or specific inductive capacity, property of an electrical insulating material (a dielectric) equal to the ratio of the capacitance of a capacitor filled with the given material to the capacitance of an identical capacitor in a vacuum without the dielectric material. The insertion of a dielectric between the plates of, say, a parallel-plate capacitor always increases its capacitance, or ability to store opposite charges on each plate, compared with this ability when the plates are separated by a vacuum. If C is the value of the capacitance of a capacitor filled with a given dielectric and C0 is the capacitance of an identical capacitor in a vacuum, the dielectric constant, symbolized by the Greek letter kappa, κ, is simply expressed as κ = C/C0. The dielectric constant is a number without dimensions. In the centimetre-gram-second system, the dielectric constant is identical to the permittivity. It denotes a large-scale property of dielectrics without specifying the electrical behaviour on the atomic scale.
The value of the static dielectric constant of any material is always greater than one, its value for a vacuum. The value of the dielectric constant at room temperature (25 °C, or 77 °F) is 1.00059 for air, 2.25 for paraffin, 78.2 for water, and about 2,000 for barium titanate (BaTiO3) when the electric field is applied perpendicularly to the principal axis of the crystal. Because the value of the dielectric constant for air is nearly the same as that for a vacuum, for all practical purposes air does not increase the capacitance of a capacitor. Dielectric constants of liquids and solids may be determined by comparing the value of the capacitance when the dielectric is in place to its value when the capacitor is filled with air.
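A small sketch of the defining ratio κ = C/C0, using the room-temperature values quoted above (the 10 pF baseline capacitor is a hypothetical example):

# Dielectric constant from capacitance measurements: kappa = C_filled / C_vacuum.
def dielectric_constant(c_filled: float, c_vacuum: float) -> float:
    return c_filled / c_vacuum

C0 = 10e-12  # farads: a hypothetical 10 pF vacuum (or air) capacitor
print(dielectric_constant(78.2 * C0, C0))  # 78.2, as for water at 25 °C
print(2.25 * C0)                           # ~2.25e-11 F: the same capacitor filled with paraffin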
1100) Planck's constant
Planck’s constant, (symbol h), fundamental physical constant characteristic of the mathematical formulations of quantum mechanics, which describes the behaviour of particles and waves on the atomic scale, including the particle aspect of light. The German physicist Max Planck introduced the constant in 1900 in his accurate formulation of the distribution of the radiation emitted by a blackbody, or perfect absorber of radiant energy. The significance of Planck’s constant in this context is that radiation, such as light, is emitted, transmitted, and absorbed in discrete energy packets, or quanta, determined by the frequency of the radiation and the value of Planck’s constant. The energy E of each quantum, or each photon, equals Planck’s constant h times the radiation frequency symbolized by the Greek letter nu, ν, or simply E = hν. A modified form of Planck’s constant called h-bar (ℏ), or the reduced Planck’s constant, in which ℏ equals h divided by 2π, is the basic unit in which angular momentum is quantized. For example, the angular momentum of an electron bound to an atomic nucleus is quantized and can only be a multiple of h-bar.
The dimension of Planck’s constant is the product of energy multiplied by time, a quantity called action. Planck’s constant is often defined, therefore, as the elementary quantum of action. Its value in metre-kilogram-second units is defined as exactly 6.62607015 × 10^-34 joule second.
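A one-line illustration of E = hν in Python (the frequency of green light, about 5.5 × 10^14 Hz, is a standard reference value):

# Photon energy E = h * nu, with h in joule seconds and nu in hertz.
H_PLANCK = 6.62607015e-34  # joule second (exact, by definition)

def photon_energy(frequency_hz: float) -> float:
    return H_PLANCK * frequency_hz

print(photon_energy(5.5e14))  # ~3.6e-19 J carried by one quantum of green light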
1101) Planck's radiation law
Planck’s radiation law, a mathematical relationship formulated in 1900 by German physicist Max Planck to explain the spectral-energy distribution of radiation emitted by a blackbody (a hypothetical body that completely absorbs all radiant energy falling upon it, reaches some equilibrium temperature, and then reemits that energy as quickly as it absorbs it). Planck assumed that the sources of radiation are atoms in a state of oscillation and that the vibrational energy of each oscillator may have any of a series of discrete values but never any value in between. Planck further assumed that when an oscillator changes from a state of energy E1 to a state of lower energy E2, the discrete amount of energy E1 − E2, or quantum of radiation, is equal to the product of the frequency of the radiation, symbolized by the Greek letter ν, and a constant h, now called Planck’s constant, that he determined from blackbody radiation data; i.e., E1 − E2 = hν.
For a blackbody at temperatures up to several hundred degrees, the majority of the radiation is in the infrared radiation region of the electromagnetic spectrum. At higher temperatures, the total radiated energy increases, and the intensity peak of the emitted spectrum shifts to shorter wavelengths so that a significant portion is radiated as visible light.
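The shift of the intensity peak described above can be made concrete with the standard wavelength form of Planck's law, B(λ, T) = (2hc²/λ⁵)/(e^(hc/λkT) − 1); the formula is not stated in the text above but is the law's usual expression. A short Python sketch locating the peak numerically:

import math

# Planck's law: spectral radiance per unit wavelength (metres) at temperature T (kelvins).
H = 6.62607015e-34   # Planck's constant, J s
C = 2.99792458e8     # speed of light, m/s
K = 1.380649e-23     # Boltzmann constant, J/K

def spectral_radiance(wavelength_m: float, temperature_k: float) -> float:
    a = 2 * H * C ** 2 / wavelength_m ** 5
    b = math.expm1(H * C / (wavelength_m * K * temperature_k))
    return a / b

# The peak moves to shorter wavelengths as the temperature rises.
for t in (500, 5800):  # a hot stove element vs. roughly the Sun's surface
    candidates = [i * 1e-9 for i in range(100, 20000, 10)]  # 100 nm to 20 µm
    peak = max(candidates, key=lambda lam: spectral_radiance(lam, t))
    print(t, "K -> peak near", round(peak * 1e9), "nm")  # ~5800 nm (infrared) vs. ~500 nm (visible)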
1102) Mohs Hardness Scale
The Mohs scale of mineral hardness is a qualitative ordinal scale that characterizes the scratch resistance of different minerals through the ability of a harder material to scratch a softer material. It was created by the German geologist and mineralogist Friedrich Mohs in 1812 and is one of several material science definitions of hardness, some of which are more quantitative.
About Hardness Tests
The hardness test developed by Friedrich Mohs was the first known test to assess resistance of a material to scratching. It is a very simple but inexact comparative test. Perhaps its simplicity has enabled it to become the most widely used hardness test.
Since the Mohs Scale was developed in 1812, many different hardness tests have been invented. These include tests by Brinell, Knoop, Rockwell, Shore and Vickers. Each of these tests uses a tiny “indenter” that is applied to the material being tested with a carefully measured amount of force. Then the size or the depth of the indentation and the amount of force are used to calculate a hardness value.
Why are there so many different hardness tests? The type of test used is determined by the size, shape, and other characteristics of the specimens being tested. Although these tests are quite different from the Mohs test, there is some correlation between them.
Usage
Despite its simplicity and lack of precision, the Mohs scale is highly relevant for field geologists, who use the scale to roughly identify minerals using scratch kits. The Mohs scale hardness of minerals can be commonly found in reference sheets. Reference materials may be expected to have a uniform Mohs hardness.
The Mineral Hardness Scale
The Mohs mineral hardness scale is based on the ability of one natural mineral sample to visibly scratch another mineral. The samples of matter used by Mohs are all different minerals, that is, naturally occurring pure substances; rocks are made up of one or more minerals.
Diamond sits at the top of the scale because it was the hardest known naturally occurring substance when the scale was designed. A material’s hardness is measured against the scale by finding the hardest material that the given material can scratch and/or the softest material that can scratch the given material.
“Scratching” a material for Mohs scale purposes means creating non-elastic dislocations that are visible to the unaided eye. Materials lower on the Mohs scale can often create microscopic, non-elastic dislocations on materials that have a higher Mohs number. While these microscopic dislocations are permanent and sometimes detrimental to the structural integrity of the harder material, they are not considered “scratches” for the determination of a Mohs scale number.
Mohs Hardness Scale
Mineral : Hardness
Talc : 1
Gypsum : 2
Calcite : 3
Fluorite : 4
Apatite : 5
Orthoclase : 6
Quartz : 7
Topaz : 8
Corundum : 9
Diamond : 10
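The scratch-kit procedure that field geologists use with this table can be summarized as bracketing: the hardest reference mineral a specimen scratches gives a lower bound, and the softest reference that scratches the specimen gives an upper bound. A Python sketch (the field observations fed to the function are hypothetical):

# Bracket an unknown specimen's Mohs hardness using a reference scratch kit.
MOHS = {"talc": 1, "gypsum": 2, "calcite": 3, "fluorite": 4, "apatite": 5,
        "orthoclase": 6, "quartz": 7, "topaz": 8, "corundum": 9, "diamond": 10}

def bracket_hardness(scratched_by_specimen, scratches_specimen):
    low = max(MOHS[m] for m in scratched_by_specimen)  # hardest mineral the specimen scratches
    high = min(MOHS[m] for m in scratches_specimen)    # softest mineral that scratches the specimen
    return low, high

# Hypothetical result: the specimen scratches apatite but is itself scratched by quartz.
print(bracket_hardness(["fluorite", "apatite"], ["quartz", "topaz"]))  # (5, 7)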
1103) Planet
Planet, (from Greek planētes, “wanderers”), broadly, any relatively large natural body that revolves in an orbit around the Sun or around some other star and that is not radiating energy from internal nuclear fusion reactions. In addition to the above description, some scientists impose additional constraints regarding characteristics such as size (e.g., the object should be more than about 1,000 km [600 miles] across, or a little larger than the largest known asteroid, Ceres), shape (it should be large enough to have been squeezed by its own gravity into a sphere—i.e., roughly 700 km [435 miles] across, depending on its density), or mass (it must have a mass insufficient for its core to have experienced even temporary nuclear fusion). As the term is applied to bodies in Earth’s solar system, the International Astronomical Union (IAU), which is charged by the scientific community with classifying astronomical objects, lists eight planets orbiting the Sun; in order of increasing distance, they are Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune. Pluto also was listed as a planet until 2006. Until the close of the 20th century, the only planets to be recognized were components of Earth’s solar system. At that time astronomers confirmed that other stars have objects that appear to be planets in orbit around them.
Planets Of The Solar System
The idea of what exactly constitutes a planet of the solar system has been traditionally the product of historical and cultural consensus. Ancient sky gazers applied the term planet to the seven celestial bodies that were observed to move appreciably against the background of the apparently fixed stars. These included the Sun and Earth’s Moon, as well as the five planets in the modern sense—Mercury, Venus, Mars, Jupiter, and Saturn—that were readily visible as celestial wanderers before the invention of the telescope. After the idea of an Earth-centred cosmos was dispelled and more distinctions were made about the nature and movement of objects in the sky, the term planet was reserved only for those larger bodies that orbited the Sun. When the giant bodies Uranus and Neptune were discovered in 1781 and 1846, respectively, their obvious kinship with the other known planets left little question regarding their addition to the planetary ranks. So also, at first, appeared to be the case for Pluto when, during a concerted search for a ninth planet, it was observed in 1930 as a seemingly lone object beyond the orbit of Neptune. In later decades, however, Pluto’s planetary status became increasingly questioned by astronomers who noted that its tiny size, unusual orbital characteristics, and composition of ice and rock made it an anomaly among the other recognized planets. After many more Pluto-sized and smaller icy objects were found orbiting beyond Neptune beginning in the 1990s, astronomers recognized that Pluto, far from being unique in its part of the solar system, is almost undoubtedly one of the larger and nearer pieces of this debris, known collectively as the Kuiper belt, that is left over from the formation of the planets.
In August 2006, after intense debate over the question of Pluto’s planetary status, the general assembly of the IAU approved a definition for a solar system planet that excluded Pluto. At the same time, it defined a new distinct class of objects called dwarf planets, for which Pluto qualified. Following the IAU proclamations, many scientists protested the definitions, considering them flawed and unscientific and calling for their reconsideration.
According to the 2006 IAU decision, for a celestial body to be a planet of the solar system, it must meet three conditions: it must be in orbit around the Sun, have been molded by its own gravity into a round or nearly round shape, and have “cleared the neighbourhood around its orbit,” meaning that its mass must be large enough for its gravity to have removed rocky and icy debris from its orbital vicinity. Pluto failed on the third requirement because it orbits partially within, and is considered to be part of, the Kuiper belt.
To be a dwarf planet under the IAU definition, the object must meet the first two conditions described above; in addition, it must not have cleared its neighbourhood, and it must not be a moon of another body. Pluto falls into this category, as do the asteroid Ceres and the large Kuiper belt object Eris, which was discovered in 2005 beyond the orbit of Pluto. By contrast, Charon, by virtue of its being a moon of Pluto, is not a dwarf planet, even though its diameter is more than half that of Pluto. The ranks of dwarf planets will likely be expanded as other objects known or yet to be discovered are determined to meet the conditions of the definition.
In June 2008 the IAU created a new category, plutoids, within the dwarf planet category. Plutoids are dwarf planets that are farther from the Sun than Neptune; that is, they are the largest objects in the Kuiper belt. Two of the dwarf planets, Pluto and Eris, are plutoids; Ceres, because of its location in the asteroid belt, is not.
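Taken together, the 2006 and 2008 IAU decisions amount to a small decision procedure, sketched below in Python (the boolean inputs are illustrative simplifications of the criteria quoted above):

# Classify a body under the 2006-2008 IAU rules summarized above.
def classify(orbits_sun: bool, is_round: bool, cleared_neighbourhood: bool,
             is_moon: bool, beyond_neptune: bool) -> str:
    if not orbits_sun or is_moon:
        return "not a planet or dwarf planet"
    if is_round and cleared_neighbourhood:
        return "planet"
    if is_round:
        return "plutoid (dwarf planet)" if beyond_neptune else "dwarf planet"
    return "small solar system body"

print(classify(True, True, True, False, False))   # Earth  -> planet
print(classify(True, True, False, False, True))   # Pluto  -> plutoid (dwarf planet)
print(classify(True, True, False, False, False))  # Ceres  -> dwarf planet
print(classify(True, True, False, True, False))   # Charon -> not a planet or dwarf planet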
Of the eight currently recognized planets of the solar system, the inner four, from Mercury to Mars, are called terrestrial planets; those from Jupiter to Neptune are called giant planets or Jovian planets. Between these two main groups is a belt of numerous small bodies called asteroids. After Ceres and other larger asteroids were discovered in the early 19th century, the bodies in this class were also referred to as minor planets or planetoids, but the term asteroid is now used most widely.
Planets Of Other Stars
The planets and other objects that circle the Sun are thought to have formed when part of an interstellar cloud of gas and dust collapsed under its own gravitational attraction and formed a disk-shaped nebula. Further compression of the disk’s central region formed the Sun, while the gas and dust left behind in the midplane of the surrounding disk eventually coalesced to form ever-larger objects and, ultimately, the planets. Astronomers have long wondered if this process of planetary formation could have accompanied the birth of stars other than the Sun. In the glare of their parent stars, however, such small, dim objects would not be easy to detect directly in images made with telescopes from Earth’s vicinity. Instead, astronomers concentrated on attempting to observe them indirectly through the gravitational effects they exert on their parent stars. After decades of searching for such extrasolar planets, astronomers in the early 1990s indirectly identified three planets circling a pulsar (i.e., a rapidly spinning neutron star) called PSR B1257+12. The first discovery of a planet revolving around a star more like the Sun came in 1995 with the announcement of the existence of a massive planet orbiting the star 51 Pegasi. In the first 15 years after these initial discoveries, about 200 planets around other stars were known, and in 2005 astronomers obtained the first direct infrared images of what were interpreted to be extrasolar planets. In mass these objects range from a fraction of the mass of Jupiter to more than a dozen Jupiter masses. Astronomers have yet to develop a rigorous, generally accepted definition of planet that will successfully accommodate extrasolar planets and distinguish them from bodies that are more starlike in character (e.g., brown dwarfs).