2301) Longitude
Gist
Longitude measures distance east or west of the prime meridian. Lines of longitude, also called meridians, are imaginary lines that divide the Earth. They run north to south from pole to pole, but they measure the distance east or west. Longitude is measured in degrees, minutes, and seconds.
Summary
Longitude is a geographic coordinate that specifies the east–west position of a point on the surface of the Earth, or another celestial body. It is an angular measurement, usually expressed in degrees and denoted by the Greek letter lambda. Meridians are imaginary semicircular lines running from pole to pole that connect points with the same longitude. The prime meridian defines 0° longitude; by convention the International Reference Meridian for the Earth passes near the Royal Observatory in Greenwich, south-east London on the island of Great Britain. Positive longitudes are east of the prime meridian, and negative ones are west.
Because of the Earth's rotation, there is a close connection between longitude and time measurement. Scientifically precise local time varies with longitude: a difference of 15° longitude corresponds to a one-hour difference in local time, due to the differing position in relation to the Sun. Comparing local time to an absolute measure of time allows longitude to be determined. Depending on the era, the absolute time might be obtained from a celestial event visible from both locations, such as a lunar eclipse, or from a time signal transmitted by telegraph or radio. The principle is straightforward, but in practice finding a reliable method of determining longitude took centuries and required the effort of some of the greatest scientific minds.
A location's north–south position along a meridian is given by its latitude, which is approximately the angle between the equatorial plane and the normal from the ground at that location.
Longitude is generally given using the geodetic normal or the gravity direction. The astronomical longitude can differ slightly from the ordinary longitude because of vertical deflection, small variations in Earth's gravitational field.
Details
Lines of longitude, also called meridians, are imaginary lines that divide the Earth. They run north to south from pole to pole, but they measure distance east or west.
Unlike the equator (which is halfway between the Earth’s north and south poles), the prime meridian is an arbitrary line. In 1884, representatives at the International Meridian Conference in Washington, D.C., met to define the meridian that would represent 0 degrees longitude. For its location, the conference chose a line that ran through the telescope at the Royal Observatory in Greenwich, England. At the time, many nautical charts and time zones already used Greenwich as the starting point, so keeping this location made sense. But, if you go to Greenwich with your GPS receiver, you’ll need to walk 102 meters (334 feet) east of the prime meridian markers before your GPS shows 0 degrees longitude. In the 19th century, scientists did not take into account local variations in gravity or the slightly squished shape of the Earth when they determined the location of the prime meridian. Satellite technology, however, allows scientists to more precisely plot meridians so that they are straight lines running north and south, unaffected by local gravity changes. In the 1980s, the International Reference Meridian (IRM) was established as the precise location of 0 degrees longitude. Unlike the prime meridian, the IRM is not a fixed location, but will continue to move as the Earth’s surface shifts.
The prime meridian, which runs through Greenwich, England, has a longitude of 0 degrees. It divides the Earth into the eastern and western hemispheres. The antimeridian is on the opposite side of the Earth, at 180 degrees longitude. Though the antimeridian is the basis for the international date line, actual date and time zone boundaries are dependent on local laws. The international date line zigzags around borders near the antimeridian.
Like latitude, longitude is measured in degrees, minutes, and seconds. Although latitude lines are always equally spaced, longitude lines are furthest from each other at the equator and meet at the poles. At the equator, longitude lines are the same distance apart as latitude lines — one degree covers about 111 kilometers (69 miles). But, by 60 degrees north or south, that distance is down to 56 kilometers (35 miles). By 90 degrees north or south (at the poles), it reaches zero.
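The narrowing described above is simply the cosine of the latitude. A minimal sketch, assuming a spherical Earth and the roughly 111-kilometer equatorial figure quoted above:

```python
import math

def longitude_degree_km(latitude_deg, equatorial_degree_km=111.32):
    """Approximate east-west length of one degree of longitude
    at a given latitude, assuming a spherical Earth."""
    return equatorial_degree_km * math.cos(math.radians(latitude_deg))

# At the equator: ~111 km; at 60 degrees N/S: ~56 km; at the poles: 0 km.
```

The real Earth is slightly flattened, so precise geodetic calculations use an ellipsoid model instead, but the cosine rule captures the behavior described here.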
Navigators and mariners have been able to measure latitude with basic tools for thousands of years. Longitude, however, required more advanced tools and calculations. Starting in the 16th century, European governments began offering huge rewards if anyone could solve “the longitude problem.” Several methods were tried, but the best and simplest way to measure longitude from a ship was with an accurate clock.
A navigator would compare the time at local noon (when the sun is at its highest point in the sky) to an onboard clock that was set to Greenwich Mean Time (the time at the prime meridian). Each hour of difference between local noon and the time in Greenwich equals 15 degrees of longitude. Why? Because the Earth rotates 360 degrees in 24 hours, or 15 degrees per hour. If the sun’s position tells the navigator it’s local noon, and the clock says back in Greenwich, England, it’s 2 p.m., the two-hour difference means the ship’s longitude is 30 degrees west.
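The navigator's arithmetic can be sketched directly, using the sign convention from the Summary (east positive, west negative):

```python
def longitude_from_time(local_hour, greenwich_hour):
    """Longitude in degrees from the difference between local solar time
    and Greenwich time. Earth rotates 15 degrees per hour.
    Positive = east of Greenwich, negative = west."""
    return (local_hour - greenwich_hour) * 15.0

# Local noon while the Greenwich clock reads 2 p.m. (14:00):
# the ship is 30 degrees west of the prime meridian.
ship = longitude_from_time(12, 14)  # -30.0
```

Local solar time behind Greenwich means the Sun has not yet reached its highest point there, so the observer is west of the meridian; ahead of Greenwich means east.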
But aboard a swaying ship in varying temperatures and salty air, even the most accurate clocks of the age did a poor job of keeping time. It wasn’t until marine chronometers were invented in the 18th century that longitude could be accurately measured at sea.
Accurate clocks are still critical to determining longitude, but now they’re found in GPS satellites and stations. Each GPS satellite is equipped with one or more atomic clocks that provide incredibly precise time measurements, accurate to within 40 nanoseconds (or 40 billionths of a second). The satellites broadcast radio signals with precise timestamps. The radio signals travel at a constant speed (the speed of light), so we can easily calculate the distance between a satellite and GPS receiver if we know precisely how long it took for the signal to travel between them.
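The distance calculation is just speed of light times travel time, which also shows why the clocks must be so precise. A sketch (the 20,200 km orbital altitude is the commonly cited figure for GPS satellites):

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s

def range_from_travel_time(seconds):
    """Distance from satellite to receiver, given the signal's travel time."""
    return C * seconds

# A signal from ~20,200 km altitude takes roughly 0.067 s to reach the
# ground directly below. A 40-nanosecond timing error alone shifts the
# computed range by about 12 meters:
error_m = range_from_travel_time(40e-9)
```

This is why nanosecond-level clock accuracy translates directly into meter-level positioning accuracy.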
Additional Information
Longitude is the measurement east or west of the prime meridian. Longitude is measured by imaginary lines that run around Earth vertically (up and down) and meet at the North and South Poles. These lines are known as meridians. Each meridian measures one arc degree of longitude. The distance around Earth measures 360 degrees.
The meridian that runs through Greenwich, England, is internationally accepted as the line of 0 degrees longitude, or prime meridian. The antimeridian is halfway around the world, at 180 degrees. It is the basis for the International Date Line.
Half of the world, the Eastern Hemisphere, is measured in degrees east of the prime meridian. The other half, the Western Hemisphere, is measured in degrees west of the prime meridian.
Degrees of longitude are divided into 60 minutes. Each minute of longitude can be further divided into 60 seconds. For example, the longitude of Paris, France, is 2° 29' E (2 degrees, 29 minutes east). The longitude for Brasilia, Brazil, is 47° 55' W (47 degrees, 55 minutes west).
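The degree-minute-second notation converts to decimal degrees by dividing minutes by 60 and seconds by 3,600. A minimal sketch, using the Paris and Brasilia examples above:

```python
def dms_to_decimal(degrees, minutes=0, seconds=0, direction='E'):
    """Convert degrees/minutes/seconds to signed decimal degrees.
    East and North are positive; West and South are negative."""
    sign = -1 if direction in ('W', 'S') else 1
    return sign * (degrees + minutes / 60 + seconds / 3600)

paris    = dms_to_decimal(2, 29, direction='E')   # about +2.483
brasilia = dms_to_decimal(47, 55, direction='W')  # about -47.917
```

Signed decimal degrees are the form most mapping software and GPS devices use internally.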
A degree of longitude is about 111 kilometers (69 miles) at its widest. The widest areas of longitude are near the Equator, where Earth bulges out. Because of Earth's curvature, the actual distance of a degree, minute, or second of longitude depends on its distance from the Equator. The greater the distance, the shorter the length between meridians. All meridians meet at the North and South Poles.
Longitude is related to latitude, the measurement of distance north or south of the Equator. Lines of latitude are called parallels. Maps are often marked with parallels and meridians, creating a grid. The point in the grid where parallels and meridians intersect is called a coordinate. Coordinates can be used to locate any point on Earth.
Knowing the exact coordinates of a site (degrees, minutes, and seconds of longitude and latitude) is valuable for military, engineering, and rescue operations. Coordinates can give military leaders the location of weapons or enemy troops. Coordinates help engineers plan the best spot for a building, bridge, well, or other structure. Coordinates help airplane pilots land planes or drop aid packages in specific locations.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
2302) Fertilizer
Gist
A fertilizer is a natural or artificial substance containing the chemical elements that improve growth and productiveness of plants. Fertilizers enhance the natural fertility of the soil or replace chemical elements taken from the soil by previous crops.
Summary
A fertilizer (American English) or fertiliser (British English) is any material of natural or synthetic origin that is applied to soil or to plant tissues to supply plant nutrients. Fertilizers may be distinct from liming materials or other non-nutrient soil amendments. Many sources of fertilizer exist, both natural and industrially produced. For most modern agricultural practices, fertilization focuses on three main macronutrients: nitrogen (N), phosphorus (P), and potassium (K), with occasional addition of supplements such as rock flour for micronutrients. Farmers apply these fertilizers in a variety of ways: through dry, pelletized, or liquid application processes, using large agricultural equipment or hand-tool methods.
Historically, fertilization came from natural or organic sources: compost, animal manure, human manure, harvested minerals, crop rotations, and byproducts of human-nature industries (e.g. fish processing waste, or bloodmeal from animal slaughter). However, starting in the 19th century, after innovations in plant nutrition, an agricultural industry developed around synthetically created fertilizers. This transition was important in transforming the global food system, allowing for larger-scale industrial agriculture with large crop yields.
Nitrogen-fixing chemical processes, such as the Haber process invented at the beginning of the 20th century, and amplified by production capacity created during World War II, led to a boom in using nitrogen fertilizers. In the latter half of the 20th century, increased use of nitrogen fertilizers (800% increase between 1961 and 2019) has been a crucial component of the increased productivity of conventional food systems (more than 30% per capita) as part of the so-called "Green Revolution".
The use of artificial and industrially-applied fertilizers has caused environmental consequences such as water pollution and eutrophication due to nutritional runoff; carbon and other emissions from fertilizer production and mining; and contamination and pollution of soil. Various sustainable agriculture practices can be implemented to reduce the adverse environmental effects of fertilizer and pesticide use and environmental damage caused by industrial agriculture.
Details
Soil fertility is the quality of a soil that enables it to provide compounds in adequate amounts and proper balance to promote growth of plants when other factors (such as light, moisture, temperature, and soil structure) are favourable. Where fertility of the soil is not good, natural or manufactured materials may be added to supply the needed plant nutrients. These are called fertilizers, although the term is generally applied to largely inorganic materials other than lime or gypsum.
Essential plant nutrients
In total, plants need at least 16 elements, of which the most important are carbon, hydrogen, oxygen, nitrogen, phosphorus, sulfur, potassium, calcium, and magnesium. Plants obtain carbon from the atmosphere and hydrogen and oxygen from water; other nutrients are taken up from the soil. Although plants contain sodium, iodine, and cobalt, these are apparently not essential. This is also true of silicon and aluminum.
Overall chemical analyses indicate that the total supply of nutrients in soils is usually high in comparison with the requirements of crop plants. Much of this potential supply, however, is bound tightly in forms that are not released to crops fast enough to give satisfactory growth. Because of this, the farmer is interested in measuring the available nutrient supply as contrasted to the total nutrient supply. When the available supply of a given nutrient becomes depleted, its absence becomes a limiting factor in plant growth. Excessive quantities of some nutrients may cause a decrease in yield, however.
Determining nutrient needs
Determination of a crop’s nutrient needs is an essential aspect of fertilizer technology. The appearance of a growing crop may indicate a need for fertilizer, though in some plants the need for more or different nutrients may not be easily observable. If such a problem exists, its nature must be diagnosed, the degree of deficiency must be determined, and the amount and kind of fertilizer needed for a given yield must be found. There is no substitute for detailed examination of plants and soil conditions in the field, followed by simple fertilizer tests, quick tests of plant tissues, and analysis of soils and plants.
Sometimes plants show symptoms of poor nutrition. Chlorosis (general yellow or pale green colour), for example, indicates lack of sulfur and nitrogen. Iron deficiency produces white or pale yellow tissue. Symptoms can be misinterpreted, however. Plant disease can produce appearances resembling mineral deficiency, as can various organisms. Drought or improper cultivation or fertilizer application each may create deficiency symptoms.
After field diagnosis, the conclusions may be confirmed by experiments in a greenhouse or by making strip tests in the field. In strip tests, the fertilizer elements suspected of being deficient are added, singly or in combination, and the resulting plant growth observed. Next it is necessary to determine the extent of the deficiency.
An experiment in the field can be conducted by adding nutrients to the crop at various rates. The resulting response of yield in relation to amounts of nutrients supplied will indicate the supplying power of the unfertilized soil in terms of bushels or tons of produce. If the increase in yield is large, this practice will show that the soil has too little of a given nutrient. Such field experiments may not be practical, because they can cost too much in time and money. Soil-testing laboratories are available in most areas; they conduct chemical soil tests to estimate the availability of nutrients. Commercial soil-testing kits give results that may be very inaccurate, depending on techniques and interpretation. The most accurate system consists of laboratory analysis of the nutrient content of plant parts, such as the leaf. The results, when correlated with yield response to fertilizer application in field experiments, can give the best estimate of deficiency. Remote sensing techniques, such as infrared photography, are under further development and may ultimately become the most valuable tools for such estimates.
The economics of fertilizers
The practical goal is to determine how much nutrient material to add. Since the farmer wants to know how much profit to expect when buying fertilizer, the tests are interpreted as an estimation of increased crop production that will result from nutrient additions. The cost of nutrients must be balanced against the value of the crop or even against alternative procedures, such as investing the money in something else with a greater potential return. The law of diminishing returns is well exemplified in fertilizer technology. Past a certain point, equal inputs of chemicals produce less and less yield increase. The goal of the farmer is to use fertilizer in such a way that the most profitable application rate is employed. Ideal fertilizer application also minimizes excess and ill-timed application, which is not only wasteful for the farmer but also harmful to nearby waterways. Unfortunately, water pollution from fertilizer runoff, which has a sphere of impact that extends far beyond the farmer and the fields, is a negative externality that is not accounted for in the costs and prices of the unregulated market.
Fertilizers can aid in making profitable changes in farming. Operators can reduce costs per unit of production and increase the margin of return over total cost by increasing rates of application of fertilizer on principal cash and feed crops. They are then in a position to invest in soil conservation and other improvements that are needed when shifting acreage from surplus crops to other uses.
Synthetic fertilizers
Modern chemical fertilizers include one or more of the three elements that are most important in plant nutrition: nitrogen, phosphorus, and potassium. Of secondary importance are the elements sulfur, magnesium, and calcium.
Most nitrogen fertilizers are obtained from synthetic ammonia; this chemical compound (NH3) is used either as a gas or in a water solution, or it is converted into salts such as ammonium sulfate, ammonium nitrate, and ammonium phosphate. Packinghouse wastes, treated garbage, sewage, and manure are also common sources of nitrogen. Because its nitrogen content is high and is readily converted to ammonia in the soil, urea is one of the most concentrated nitrogenous fertilizers. An inexpensive compound, it is incorporated in mixed fertilizers as well as being applied alone to the soil or sprayed on foliage. With formaldehyde it gives methylene-urea fertilizers, which release nitrogen slowly, continuously, and uniformly; a full year's supply can be applied at one time.
Phosphorus fertilizers include calcium phosphate derived from phosphate rock or bones. The more soluble superphosphate and triple superphosphate preparations are obtained by the treatment of calcium phosphate with sulfuric and phosphoric acid, respectively. Potassium fertilizers, namely potassium chloride and potassium sulfate, are mined from potash deposits. Of commercially produced potassium compounds, almost 95 percent are used in agriculture as fertilizer.
Mixed fertilizers contain more than one of the three major nutrients—nitrogen, phosphorus, and potassium. Fertilizer grade is a conventional expression that indicates the percentage of plant nutrients in a fertilizer; thus, a 10–20–10 grade contains 10 percent nitrogen, 20 percent phosphoric oxide, and 10 percent potash. Mixed fertilizers can be formulated in hundreds of ways.
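The grade percentages translate directly into nutrient weights. A minimal sketch of that arithmetic (the bag size is an illustrative assumption):

```python
def nutrients_in_bag(grade, bag_kg):
    """Kilograms of N, P2O5 (phosphoric oxide), and K2O (potash)
    in a bag of mixed fertilizer, given its grade, e.g. '10-20-10'."""
    n, p, k = (float(x) for x in grade.split('-'))
    return {'N': bag_kg * n / 100,
            'P2O5': bag_kg * p / 100,
            'K2O': bag_kg * k / 100}

# A 50 kg bag of 10-20-10 supplies 5 kg N, 10 kg P2O5, and 5 kg K2O.
bag = nutrients_in_bag('10-20-10', 50)
```

Note that the grade reports phosphorus and potassium as oxide equivalents (P2O5 and K2O), not as elemental P and K, which is the long-standing labeling convention.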
Organic fertilizers and practices
The use of manure and compost as fertilizers is probably almost as old as agriculture. Many traditional farming systems still rely on these sustainable fertilizers, and their use is vital to the productivity of certified organic farms, in which synthetic fertilizers are not permitted.
Farm manure
Among sources of organic matter and plant nutrients, farm manure has been of major importance. Manure is understood to mean the refuse from stables and barnyards, including both excreta and straw or other bedding material. Large amounts of manure are produced by livestock; such manure has value in maintaining and improving soil because of the plant nutrients, humus, and organic substances contained in it.
Due to the potential for harbouring human pathogens, the USDA National Organic Standards mandate that raw manure must be applied no later than 90 or 120 days before harvest, depending on whether the harvested part of the crop is in contact with the ground. Composted manure that has been turned five times in 15 days and reached temperatures between 55 and 77.2 °C (131 and 171 °F) has no restrictions on application times. As manure must be managed carefully in order to derive the most benefit from it, some farmers may be unwilling to expend the necessary time and effort. Manure must be carefully stored to minimize loss of nutrients, particularly nitrogen. It must be applied to the right kind of crop at the proper time. Also, additional fertilizer may be needed, such as phosphoric oxide, in order to gain full value of the nitrogen and potash that are contained in manure.
Manure is fertilizer graded as approximately 0.5–0.25–0.5 (percentages of nitrogen, phosphoric oxide, and potash), with at least two-thirds of the nitrogen in slow-acting forms. Given that these nutrients are mostly in an unmineralized form that cannot be taken up by plants, soil microbes are needed to break down organic matter and transform nutrients into a bioavailable “mineralized” state. In comparison, synthetic fertilizers are already in mineralized form and can be taken up by plants directly. On properly tilled soils, the returns from synthetic fertilizer usually will be greater than from an equivalent amount of manure. However, manure provides many indirect benefits. It supplies humus, which improves the soil’s physical character by increasing its capacity to absorb and store water, by enhancement of aeration, and by favouring the activities of lower organisms. Manure incorporated into the topsoil will help prevent erosion from heavy rain and slow down evaporation of water from the surface. In effect, the value of manure as a mulching material may be greater than is its value as a source of essential plant nutrients.
Green manuring
In reasonably humid areas the practice of green manuring can improve yield and soil qualities. A green manure crop is grown and plowed under for its beneficial effects, although during its growth it may be grazed. These green manure crops are usually annuals, either grasses or legumes, whose roots bear nodule bacteria capable of fixing atmospheric nitrogen. Among the advantages of green manure crops are the addition of nitrogen to the soil, an increase in general fertility, a reduction of erosion, an improvement of physical condition, and a reduction of nutrient loss from leaching. Disadvantages include the chance of not obtaining satisfactory growth; the possibility that the cost of growing the manure crop may exceed the cost of applying commercial nitrogen; possible increases in disease, insect pests, and nematodes (parasitic worms); and possible exhaustion of soil moisture by the crop.
Green manure crops are usually planted in the fall and turned under in the spring before the summer crop is sown. Their value as a source of nitrogen, particularly that of the legumes, is unquestioned for certain crops such as potatoes, cotton, and corn (maize); for other crops, such as peanuts (groundnuts; themselves legumes), the practice is questionable.
Compost
Compost is used in agriculture and gardening primarily as a soil amendment rather than as fertilizer, because it has a low content of plant nutrients. It may be incorporated into the soil or mulched on the surface. Heavy rates of application are common.
Compost is basically a mass of rotted organic matter made from waste plant residues. Addition of nitrogen during decomposition is usually advisable. The result is a crumbly material that when added to soil does not compete with the crop for nitrogen. When properly prepared, it is free of obnoxious odours. Composts commonly contain about 2 percent nitrogen, 0.5 to 1 percent phosphorus, and about 2 percent potassium. The nitrogen of compost becomes available slowly and never approaches that available from inorganic sources. This slow release of nitrogen reduces leaching and extends availability over the whole growing season. Composts are essentially fertilizers with low nutrient content, which explains why large amounts are applied. The maximum benefits of composts on soil structure (better aggregation, pore spacing, and water storage) and on crop yield usually occur after several years of use.
In practical farming, the use of composted plant residues must be compared with the use of fresh residues. More beneficial soil effects usually accrue with less labour by simply turning under fresh residues; also, since one-half the organic matter is lost in composting, fresh residues applied at the same rate will cover twice the area that composted residues would cover. In areas where commercial fertilizers are expensive, labour is cheap, and implements are simple, however, composting meets the need and is a logical practice.
Sewage sludge, the solid material remaining from the treatment of sewage, is not permitted in certified organic farming, though it is used in other, nonorganic settings. After suitable processing, it is sold as fertilizer and as a soil amendment for use on lawns, in parks, and on golf courses. Use of human biosolids in agriculture is controversial, as there are concerns that even treated sewage may harbour harmful bacteria, viruses, pharmaceutical residues, and heavy metals.
Liming
Liming to reduce soil acidity is practiced extensively in humid areas where rainfall leaches calcium and magnesium from the soil, thus creating an acid condition. Calcium and magnesium are major plant nutrients supplied by liming materials. Ground limestone is widely used for this purpose; its active agent, calcium carbonate, reacts with the soil to reduce its acidity. The calcium is then available for plant use. The typical limestones, especially dolomitic, contain magnesium carbonate as well, thus also supplying magnesium to the plant.
Marl and chalk are soft impure forms of limestone and are sometimes used as liming materials, as are oyster shells. Calcium sulfate (gypsum) and calcium chloride, however, are unsuitable for liming, for, although their calcium is readily soluble, they leave behind a residue that is harmful. Organic standards by the European Union and the U.S. Food and Drug Administration restrict certain liming agents; burnt lime and hydrated lime are not permitted for certified organic farms in the U.S., for example.
Lime is applied by mixing it uniformly with the surface layer of the soil. It may be applied at any time of the year on land plowed for spring crops or winter grain or on permanent pasture. After application, plowing, disking, or harrowing will mix it with the soil. Such tillage is usually necessary, because calcium migrates slowly downward in most soils. Lime is usually applied by trucks specially equipped and owned by custom operators.
Methods of application
Fertilizers may be added to soil in solid, liquid, or gaseous forms, the choice depending on many factors. Generally, the farmer tries to obtain satisfactory yield at minimum cost in money and labour.
Manure can be applied as a liquid or a solid. When accumulated as a liquid from livestock areas, it may be stored in tanks until needed and then pumped into a distributing machine or into a sprinkler irrigation system. The method reduces labour, but the noxious odours are objectionable. The solid-manure spreader, which can also be used for compost, conveys the material to the field, shreds it, and spreads it uniformly over the land. The process can be carried out during convenient times, including winter, but rarely when the crop is growing.
Application of granulated or pelleted solid fertilizer has been aided by improved equipment design. Such devices, depending on design, can deposit fertilizer at the time of planting, side-dress a growing crop, or broadcast the material. Solid-fertilizer distributors have a wide hopper with holes in the bottom; distribution is effected by various means, such as rollers, agitators, or endless chains traversing the hopper bottom. Broadcast distributors have a tub-shaped hopper from which the material falls onto revolving disks that distribute it in a broad swath. Fertilizer attachments are available for most tractor-mounted planters and cultivators and for grain drills and some types of plows. They deposit fertilizer with the seed when planted, without damage to the seed, yet the nutrient is readily available during early growth. Placement of the fertilizer varies according to the types of crops; some crops require banding above the seed, while others are more successful when the fertilizer band is below the seed.
The use of liquid and ammonia fertilizers is growing, particularly of anhydrous ammonia, which is handled as a liquid under pressure but changes to gas when released to atmospheric pressure. Anhydrous ammonia, however, is highly corrosive, inflammable, and rather dangerous if not handled properly; thus, application equipment is specialized. Typically, the applicator is a chisel-shaped blade with a pipe mounted on its rear side to conduct the ammonia 13 to 15 cm (5 to 6 inches) below the soil surface. Pipes are fed from a pressure tank mounted above. Mixed liquid fertilizers containing nitrogen, phosphorus, and potassium may be applied directly to the soil surface or as a foliar spray by field sprayers where close-growing crops are raised. Large areas can be covered rapidly by use of aircraft, which can distribute both liquid and dry fertilizer.
Additional Information
Fertilizers have played an essential role in feeding a growing global population. It's estimated that just under half of the people alive today are dependent on synthetic fertilizers.
They can bring environmental benefits too: fertilizers can increase crop yields. By increasing crop yields we can reduce the amount of land we use for agriculture.
But they also create environmental pollution. Many countries overapply fertilizers, leading to the runoff of nutrients into water systems and ecosystems.
A problem we need to tackle is using fertilizers efficiently: reaping their benefits to feed a growing population while reducing the environmental damage they cause.
Organic and Mineral Fertilizer: Differences and Similarities.
Fertilizers are materials that are applied to soils, or directly to plants, for their ability to supply the essential nutrients needed by crops to grow and improve soil fertility. They are used to increase crop yield and/or quality, as well as to sustain soils’ ability to support future crop production.
Mineral fertilizers are produced from materials mined from naturally occurring nutrient deposits, or from the fixation of nitrogen from the atmosphere into plant-available forms. Mineral fertilizers generally contain high concentrations of a single, or two or three, plant nutrients.
Organic fertilizers are derived from plant matter, animal excreta, sewage and food waste, generally in the form of animal manure, green manure and biosolids. Organic fertilizers provide essential nutrients needed by crops, generally containing a wide variety in low concentrations. They also play an important role in improving soil health.
Organo-mineral fertilizers combine dried organic and mineral fertilizers to provide balanced nutrients along with soil health improvements, in a long-lasting form that is easy to transport and store.
Fertilizers also play a key role in reducing micronutrient deficiencies in people:
The fertilizer fortification of staple food crops with micronutrients (also known as agronomic biofortification) has alleviated deficiencies in zinc, selenium and iodine in communities around the world.
The fertilizer industry supports policies that link agriculture, nutrition and health, and the use of micronutrients where they are needed most.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
2303) Siren
Gist
A siren is a warning device which makes a long, loud noise. Most fire engines, ambulances, and police cars have sirens.
It is a device for making a loud warning noise.
Summary
A siren is a noisemaking device producing a piercing sound of definite pitch. Used as a warning signal, it was invented in the late 18th century by the Scottish natural philosopher John Robison. The name was given to it by the French engineer Charles Cagniard de La Tour, who devised an acoustical instrument of the type in 1819. A disk with evenly spaced holes around its edge is rotated at high speed, interrupting at regular intervals a jet of air directed at the holes. The resulting regular pulsations set up a sound wave in the surrounding air; the siren is thus classified as a free aerophone. The frequency of its pitch equals the number of air puffs (holes times number of revolutions) per second. The strident sound results from the high number of overtones (harmonics) present.
Details
A siren is a loud noise-making device. Civil defense sirens are mounted in fixed locations and used to warn of natural disasters or attacks. Sirens are used on emergency service vehicles such as ambulances, police cars, and fire engines. There are two general types: mechanical and electronic.
Many fire sirens (used for summoning volunteer firefighters) serve double duty as tornado or civil defense sirens, alerting an entire community of impending danger. Most fire sirens are either mounted on the roof of a fire station or on a pole next to the fire station. Fire sirens can also be mounted on or near government buildings, on tall structures such as water towers, as well as in systems where several sirens are distributed around a town for better sound coverage. Most fire sirens are single tone and mechanically driven by electric motors with a rotor attached to the shaft. Some newer sirens are electronically driven speakers.
Fire sirens are often called fire whistles, fire alarms, or fire horns. Although there is no standard signaling for fire sirens, some departments use codes to inform firefighters of the location of the fire. Civil defense sirens that double as fire sirens can often produce an alternating "hi-lo" signal (similar to emergency vehicles in many European countries) as the fire signal, or an attack signal (slow wail), typically sounded three times, so as not to confuse the public with the standard civil defense signals of alert (steady tone) and fast wail (fast wavering tone). Fire sirens are often tested once a day at noon and are accordingly also called noon sirens or noon whistles.
The first emergency vehicles relied on a bell. In the 1970s, they switched to a duotone airhorn, which was itself overtaken in the 1980s by an electronic wail.
History
Some time before 1799, the siren was invented by the Scottish natural philosopher John Robison. Robison's sirens were used as musical instruments; specifically, they powered some of the pipes in an organ. Robison's siren consisted of a stopcock that opened and closed a pneumatic tube. The stopcock was apparently driven by the rotation of a wheel.
In 1819, an improved siren was developed and named by Baron Charles Cagniard de la Tour. De la Tour's siren consisted of two perforated disks that were mounted coaxially at the outlet of a pneumatic tube. One disk was stationary, while the other disk rotated. The rotating disk periodically interrupted the flow of air through the fixed disk, producing a tone. De la Tour's siren could produce sound under water, suggesting a link with the sirens of Greek mythology; hence the name he gave to the instrument.
Instead of disks, most modern mechanical sirens use two concentric cylinders, which have slots parallel to their length. The inner cylinder rotates while the outer one remains stationary. As air under pressure flows out of the slots of the inner cylinder and then escapes through the slots of the outer cylinder, the flow is periodically interrupted, creating a tone. The earliest such sirens were developed during 1877–1880 by James Douglass and George Slight (1859–1934) of Trinity House; the final version was first installed in 1887 at the Ailsa Craig lighthouse in Scotland's Firth of Clyde. When commercial electric power became available, sirens were no longer driven by external sources of compressed air, but by electric motors, which generated the necessary flow of air via a simple centrifugal fan, which was incorporated into the siren's inner cylinder.
To direct a siren's sound and to maximize its power output, a siren is often fitted with a horn, which transforms the high-pressure sound waves in the siren to lower-pressure sound waves in the open air.
The earliest way of summoning volunteer firemen to a fire was by ringing a bell, either mounted atop the fire station or in the belfry of a local church. As electricity became available, the first fire sirens were manufactured. In 1886 French electrical engineer Gustave Trouvé developed a siren to announce the silent arrival of his electric boats. Two early fire siren manufacturers were William A. Box Iron Works, who made the "Denver" sirens as early as 1905, and the Inter-State Machine Company (later the Sterling Siren Fire Alarm Company), who made the ubiquitous Model "M" electric siren, the first dual tone siren. The popularity of fire sirens took off by the 1920s, with many manufacturers, including the Federal Electric Company and Decot Machine Works, creating their own sirens. Since the 1970s, many communities have deactivated their fire sirens as pagers became available for fire department use. Some sirens still remain as a backup to pager systems.
During the Second World War, the British civil defence used a network of sirens to alert the general population to the imminence of an air raid. A single tone denoted an "all clear". A series of tones denoted an air raid.
Types:
Pneumatic
The pneumatic siren, which is a free aerophone, consists of a rotating disk with holes in it (called a chopper, siren disk or rotor), such that the material between the holes interrupts a flow of air from fixed holes on the outside of the unit (called a stator). As the holes in the rotating disk alternately block and open the airflow, the result is alternating compressed and rarefied air, i.e. sound. Such sirens can consume large amounts of energy. To reduce energy consumption without losing sound volume, some pneumatic siren designs are boosted by forcing compressed air from a tank, refilled by a low-powered compressor, through the siren disk.
In US English usage, vehicular pneumatic sirens are sometimes referred to as mechanical or coaster sirens, to differentiate them from electronic devices. Mechanical sirens driven by an electric motor are often called "electromechanical". One example is the Q2B siren sold by Federal Signal Corporation. Because of its high current draw (100 amps when power is applied), its application is normally limited to fire apparatus, though it has seen increasing use on Type IV ambulances and rescue-squad vehicles. Its distinct tone of urgency, high sound pressure level (123 dB at 10 feet) and square sound waves account for its effectiveness.
In Germany and some other European countries, the pneumatic two-tone (hi-lo) siren consists of two sets of air horns, one high-pitched and the other low-pitched. An air compressor blows air into one set of horns, then automatically switches to the other; as this back-and-forth switching occurs, the sound alternates between the two tones. Its sound power varies, but can reach approximately 125 dB, depending on the compressor and the horns. Compared with mechanical sirens, it uses much less electricity but needs more maintenance.
In a pneumatic siren, the stator is the part which cuts off and reopens air as rotating blades of a chopper move past the port holes of the stator, generating sound. The pitch of the siren's sound is a function of the speed of the rotor and the number of holes in the stator. A siren with only one row of ports is called a single tone siren. A siren with two rows of ports is known as a dual tone siren. By placing a second stator over the main stator and attaching a solenoid to it, one can repeatedly close and open all of the stator ports thus creating a tone called a pulse. If this is done while the siren is wailing (rather than sounding a steady tone) then it is called a pulse wail. By doing this separately over each row of ports on a dual tone siren, one can alternately sound each of the two tones back and forth, creating a tone known as Hi/Lo. If this is done while the siren is wailing, it is called a Hi/Lo wail. This equipment can also do pulse or pulse wail. The ports can be opened and closed to send Morse code. A siren which can do both pulse and Morse code is known as a code siren.
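The pitch relationship described above can be sketched in a few lines of code; the rotor speed and port counts below are hypothetical examples, not figures from the text:

```python
# Fundamental pitch of a pneumatic siren: the pitch is set by how many
# port openings pass the stator per second.

def siren_pitch_hz(rotor_rpm: float, ports: int) -> float:
    """Frequency in Hz = (revolutions per second) * (ports per row)."""
    return rotor_rpm / 60.0 * ports

# A dual tone siren has two rows of ports with different counts,
# so a single rotor speed yields two simultaneous pitches.
def dual_tone_pitches(rotor_rpm: float, ports_a: int, ports_b: int):
    return (siren_pitch_hz(rotor_rpm, ports_a),
            siren_pitch_hz(rotor_rpm, ports_b))

# Hypothetical example: a rotor at 3000 rpm with 12-port and 10-port rows.
print(siren_pitch_hz(3000, 12))         # 600.0 (Hz)
print(dual_tone_pitches(3000, 12, 10))  # (600.0, 500.0)
```

Wailing then corresponds to sweeping `rotor_rpm` up and down over time, which sweeps both pitches together.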
Electronic
Electronic sirens incorporate circuits such as oscillators, modulators, and amplifiers to synthesize a selected siren tone (wail, yelp, pierce/priority/phaser, hi-lo, scan, airhorn, manual, and a few more) which is played through external speakers. It is not unusual, especially in the case of modern fire engines, to see an emergency vehicle equipped with both types of sirens. Often, police sirens also use the interval of a tritone to help draw attention. The first electronic siren that mimicked the sound of a mechanical siren was invented in 1965 by Motorola employees Ronald H. Chapman and Charles W. Stephens.
Other types
Steam whistles were also used as warning devices where a supply of steam was present, such as at a sawmill or factory. These were common before fire sirens became widely available, particularly in the former Soviet Union. Fire horns, large compressed-air horns, also were and still are used as an alternative to a fire siren. Many fire horn systems were wired to fire pull boxes located around a town, and the horn would sound a code corresponding to that box's location. For example, pulling box number 233 would trigger the fire horn to sound two blasts, a pause, three blasts, another pause, and three more blasts. In the days before telephones, this was the only way firefighters would know the location of a fire. The coded blasts were usually repeated several times, and the same technique was applied to many steam whistles. Some fire sirens are fitted with brakes and dampers, enabling them to sound out codes as well. These units tended to be unreliable and are now uncommon.
Physics of the sound
Mechanical sirens blow air through a slotted disk or rotor. The cyclic waves of air pressure are the physical form of sound. In many sirens, a centrifugal blower and rotor are integrated into a single piece of material, spun by an electric motor.
Electronic sirens are high efficiency loudspeakers, with specialized amplifiers and tone generation. They usually imitate the sounds of mechanical sirens in order to be recognizable as sirens.
To improve efficiency, sirens use relatively low frequencies, usually several hundred hertz, because lower-frequency sound waves bend around corners and pass through openings better.
Sirens are often fitted with horns, which aim the pressure waves and so use the siren's energy more efficiently. Exponential horns achieve similar efficiencies with less material.
The frequency, i.e. the cycles per second, of a mechanical siren's sound is controlled by the speed of its rotor and the number of openings. The wailing of a mechanical siren occurs as the rotor speeds up and slows down. Wailing usually identifies an attack or urgent emergency.
The characteristic timbre or musical quality of a mechanical siren arises because its output is a triangle wave when graphed as pressure over time: as the openings widen, the emitted pressure increases, and as they close, it decreases. The frequency distribution of the sound therefore has harmonics at odd (1, 3, 5...) multiples of the fundamental, and the power of the harmonics rolls off as the inverse square of their frequency. Distant sirens sound more "mellow" or "warmer" because their harsh high frequencies are absorbed along the way by intervening objects.
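This harmonic structure can be illustrated by synthesizing a triangle wave from its Fourier series; the 440 Hz fundamental below is an arbitrary choice for the sketch:

```python
import math

def triangle_wave(t: float, f0: float, n_harmonics: int = 50) -> float:
    """Fourier synthesis of a triangle wave: odd harmonics only,
    with amplitudes falling off as 1/n^2 and alternating sign."""
    total = 0.0
    for k in range(n_harmonics):
        n = 2 * k + 1                      # odd multiples: 1, 3, 5, ...
        total += ((-1) ** k) / (n * n) * math.sin(2 * math.pi * n * f0 * t)
    return 8.0 / (math.pi ** 2) * total

# At a quarter period the triangle wave reaches its peak of 1.0;
# 50 odd harmonics already get very close.
f0 = 440.0
peak = triangle_wave(1 / (4 * f0), f0)
print(round(peak, 2))  # 1.0
```

Truncating the series (fewer harmonics) is a rough stand-in for the high-frequency absorption that makes distant sirens sound mellower.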
Two tone sirens are often designed to emit a minor third, musically considered a "sad" sound. To do this, they have two rotors with different numbers of openings. The upper tone is produced by a rotor with a count of openings divisible by six. The lower tone's rotor has a count of openings divisible by five. Unlike an organ, a mechanical siren's minor third is almost always physical, not tempered. To achieve tempered ratios in a mechanical siren, the rotors must either be geared, run by different motors, or have very large numbers of openings. Electronic sirens can easily produce a tempered minor third.
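The ratio arithmetic behind the minor third can be sketched as follows; the 12- and 10-port rotor counts and the 3000 rpm shaft speed are hypothetical examples:

```python
# Just vs. equal-tempered minor third. Two rotors on one shaft with port
# counts in a 6:5 ratio give a just minor third (frequency ratio exactly
# 6/5); the equal-tempered minor third is 2**(3/12), slightly narrower.

just_minor_third = 6 / 5                  # 1.2 exactly
tempered_minor_third = 2 ** (3 / 12)      # ~1.1892

rps = 3000 / 60                           # shaft revolutions per second
hi, lo = rps * 12, rps * 10               # 600 Hz and 500 Hz
print(hi / lo)                            # 1.2 -> the just ratio
print(round(tempered_minor_third, 4))     # 1.1892
```

Because both rotors share one shaft, only integer port-count ratios are possible, which is why a single-shaft mechanical siren cannot produce the tempered interval.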
A mechanical siren that can alternate between its tones uses solenoids to move rotary shutters that cut off the air supply to one rotor, then the other. This is often used to identify a fire warning.
When testing, a frightening sound is not desirable, so electronic sirens usually emit musical tones instead; the Westminster chimes are common. Mechanical sirens sometimes self-test by "growling", i.e. operating at low speed.
In music
Sirens are also used as musical instruments. They have been prominently featured in works by avant-garde and contemporary classical composers. Examples include Edgard Varèse's compositions Amériques (1918–21, rev. 1927), Hyperprism (1924), and Ionisation (1931); Arseny Avraamov's Symphony of Factory Sirens (1922); George Antheil's Ballet Mécanique (1926); Dmitri Shostakovich's Symphony No. 2 (1927); and Henry Fillmore's "The Klaxon: March of the Automobiles" (1929), which features a klaxophone.
In popular music, sirens have been used in The Chemical Brothers' "Song to the Siren" (1992) and in a CBS News 60 Minutes segment played by percussionist Evelyn Glennie. A variation of a siren, played on a keyboard, provides the opening notes of the REO Speedwagon song "Ridin' the Storm Out". Some heavy metal bands also open their shows with air-raid-style siren intros. The opening measure of "Money City Maniacs" (1998) by Canadian band Sloan layers multiple overlapping sirens.
2304) Stainless Steel
Gist
Stainless steel is a corrosion-resistant alloy of iron, chromium and, in some cases, nickel and other metals. Completely and infinitely recyclable, stainless steel is the “green material” par excellence.
Stainless steel is the name of a family of iron-based alloys known for their corrosion and heat resistance. One of the main characteristics of stainless steel is its minimum chromium content of 10.5%, which gives it its superior resistance to corrosion in comparison to other types of steels.
Summary
Stainless steels are a family of ferrous alloys containing less than 1.2% carbon and more than 10.5% chromium; a passive surface layer of chromium and iron oxides and hydroxides protects them efficiently from corrosion.
Stainless steel is a family of alloy steels based on low-carbon steel with a minimum chromium content of about 10.5% by weight. The name originates from the fact that stainless steel does not stain, corrode or rust as easily as ordinary steel; however, 'stain-less' is not 'stain-proof' in all conditions. It is important to select the correct type and grade of stainless steel for a particular application. In many cases, manufacturing rooms, processing lines, equipment and machines will be subject to requirements from authorities, manufacturers or customers.
The addition of chromium gives the steel its unique stainless, corrosion-resistant properties. The chromium, when in contact with oxygen, forms a natural barrier of adherent chromium(III) oxide (Cr2O3), a thin ceramic layer that acts as a 'passive film' resistant to further oxidation or rusting. This phenomenon is called passivation and is seen in other metals, such as aluminum and silver; unlike in those metals, however, the passive film on stainless steel is transparent. This invisible, self-repairing and relatively inert film is only nanometers thick, so the metal stays shiny. If damaged mechanically or chemically, the film is self-healing: the layer quickly reforms provided that oxygen is present, even in very small amounts. Such a protective oxide coating is common to most corrosion-resistant materials. Similarly, anodizing is an electrolytic passivation process used to increase the thickness of the natural oxide layer on the surface of metals such as aluminum, titanium, and zinc. Passivation is not a useful treatment for iron or carbon steel because these metals exfoliate when oxidized, i.e. the iron(III) oxide (rust) flakes off, constantly exposing the underlying metal to corrosion.
The corrosion resistance and other useful properties of stainless steel can be enhanced by increasing the chromium content and by adding other alloying elements such as molybdenum, nickel and nitrogen. There are more than 60 grades of stainless steel; however, the entire group can be divided into five classes (cast stainless steels are, in general, similar to the equivalent wrought alloys). Each class is identified by, and named for, the alloying elements that determine its microstructure.
Details
Stainless steel, also known as inox, corrosion-resistant steel (CRES), and rustless steel, is an alloy of iron that is resistant to rusting and corrosion. It contains iron with chromium and other elements such as molybdenum, carbon, nickel and nitrogen depending on its specific use and cost. Stainless steel's resistance to corrosion results from the 10.5%, or more, chromium content which forms a passive film that can protect the material and self-heal in the presence of oxygen.
The alloy's properties, such as luster and resistance to corrosion, are useful in many applications. Stainless steel can be rolled into sheets, plates, bars, wire, and tubing. These can be used in cookware, cutlery, surgical instruments, major appliances, vehicles, construction material in large buildings, industrial equipment (e.g., in paper mills, chemical plants, water treatment), and storage tanks and tankers for chemicals and food products. Some grades are also suitable for forging and casting.
The biological cleanability of stainless steel is superior to both aluminium and copper, and comparable to glass. Its cleanability, strength, and corrosion resistance have prompted the use of stainless steel in pharmaceutical and food processing plants.
Different types of stainless steel are labeled with an AISI three-digit number. The ISO 15510 standard lists the chemical compositions of stainless steels of the specifications in existing ISO, ASTM, EN, JIS, and GB standards in a useful interchange table.
Properties:
Corrosion resistance
Although stainless steel can rust, the corrosion affects only the outer few layers of atoms; the steel's chromium content shields deeper layers from oxidation.
The addition of nitrogen also improves resistance to pitting corrosion and increases mechanical strength. Thus, there are numerous grades of stainless steel with varying chromium and molybdenum contents to suit the environment the alloy must endure. Corrosion resistance can be increased further by the following means:
* increasing chromium content to more than 11%
* adding nickel to at least 8%
* adding molybdenum (which also improves resistance to pitting corrosion)
Strength
The most common type of stainless steel, 304, has a tensile yield strength around 210 MPa (30,000 psi) in the annealed condition. It can be strengthened by cold working to a strength of 1,050 MPa (153,000 psi) in the full-hard condition.
The strongest commonly available stainless steels are precipitation hardening alloys such as 17-4 PH and Custom 465. These can be heat treated to have tensile yield strengths up to 1,730 MPa (251,000 psi).
Melting point
Stainless steel is a steel, and as such its melting point is near that of ordinary steel and much higher than the melting points of aluminium or copper. As with most alloys, the melting point of stainless steel is expressed as a range of temperatures rather than a single temperature. This range goes from 1,400 to 1,530 °C (2,550 to 2,790 °F; 1,670 to 1,800 K; 3,010 to 3,250 °R) depending on the specific composition of the alloy in question.
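The quoted range can be checked with the standard temperature-scale conversions (the source rounds each figure to the nearest ten degrees):

```python
# Converting the 1,400-1,530 deg C melting range to the other three
# scales quoted in the text.

def c_to_f(c: float) -> float:
    return c * 9 / 5 + 32          # Celsius -> Fahrenheit

def c_to_k(c: float) -> float:
    return c + 273.15              # Celsius -> kelvin

def c_to_r(c: float) -> float:
    return (c + 273.15) * 9 / 5    # Celsius -> Rankine

for c in (1400, 1530):
    print(c, round(c_to_f(c)), round(c_to_k(c)), round(c_to_r(c)))
# 1400 deg C -> 2552 deg F, 1673 K, 3012 deg R
# 1530 deg C -> 2786 deg F, 1803 K, 3246 deg R
```

Rounded to tens, these reproduce the 2,550-2,790 °F, 1,670-1,800 K and 3,010-3,250 °R figures in the text.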
Conductivity
Like carbon steel, stainless steels are relatively poor conductors of electricity, with significantly lower electrical conductivity than copper. In particular, the electrical contact resistance (ECR) of stainless steel is high as a result of the dense protective oxide layer, which limits its usefulness in electrical connectors. Copper alloys and nickel-coated connectors tend to exhibit lower ECR values and are preferred materials for such applications. Nevertheless, stainless steel connectors are employed where ECR is a less critical design criterion and corrosion resistance is required, for example at high temperatures and in oxidizing environments.
Magnetism
Martensitic, duplex and ferritic stainless steels are magnetic, while austenitic stainless steel is usually non-magnetic. Ferritic steel owes its magnetism to its body-centered cubic crystal structure, in which iron atoms are arranged in cubes (with one iron atom at each corner) and an additional iron atom in the center. This central iron atom is responsible for ferritic steel's magnetic properties. This arrangement also limits the amount of carbon the steel can absorb to around 0.025%. Grades with low coercive field have been developed for electro-valves used in household appliances and for injection systems in internal combustion engines. Some applications require non-magnetic materials, such as magnetic resonance imaging. Austenitic stainless steels, which are usually non-magnetic, can be made slightly magnetic through work hardening. Sometimes, if austenitic steel is bent or cut, magnetism occurs along the edge of the stainless steel because the crystal structure rearranges itself.
Wear
Galling, sometimes called cold welding, is a form of severe adhesive wear, which can occur when two metal surfaces are in relative motion to each other and under heavy pressure. Austenitic stainless steel fasteners are particularly susceptible to thread galling, though other alloys that self-generate a protective oxide surface film, such as aluminum and titanium, are also susceptible. Under high contact-force sliding, this oxide can be deformed, broken, and removed from parts of the component, exposing the bare reactive metal. When the two surfaces are of the same material, these exposed surfaces can easily fuse. Separation of the two surfaces can result in surface tearing and even complete seizure of metal components or fasteners. Galling can be mitigated by the use of dissimilar materials (bronze against stainless steel) or using different stainless steels (martensitic against austenitic). Additionally, threaded joints may be lubricated to provide a film between the two parts and prevent galling. Nitronic 60, made by selective alloying with manganese, silicon, and nitrogen, has demonstrated a reduced tendency to gall.
Density
The density of stainless steel ranges from 7.5 to 8.0 g/cm³ (0.27 to 0.29 lb/cu in) depending on the alloy.
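A quick check of the quoted unit conversion, using the standard grams-per-pound and cubic-centimetres-per-cubic-inch factors:

```python
# Converting stainless steel density from g/cm^3 to lb/cu in.
G_PER_LB = 453.592        # grams in one avoirdupois pound
CM3_PER_IN3 = 16.387064   # cubic centimetres in one cubic inch

def g_cm3_to_lb_in3(rho_g_cm3: float) -> float:
    """Density in lb/cu in from density in g/cm^3."""
    return rho_g_cm3 * CM3_PER_IN3 / G_PER_LB

print(round(g_cm3_to_lb_in3(7.5), 2))  # 0.27
print(round(g_cm3_to_lb_in3(8.0), 2))  # 0.29
```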
Additional Information
Stainless steel is any one of a family of alloy steels usually containing 10 to 30 percent chromium. In conjunction with low carbon content, chromium imparts remarkable resistance to corrosion and heat. Other elements, such as nickel, molybdenum, titanium, aluminum, niobium, copper, nitrogen, sulfur, phosphorus, or selenium, may be added to increase corrosion resistance to specific environments, enhance oxidation resistance, and impart special characteristics.
Most stainless steels are first melted in electric-arc or basic oxygen furnaces and subsequently refined in another steelmaking vessel, mainly to lower the carbon content. In the argon-oxygen decarburization process, a mixture of oxygen and argon gas is injected into the liquid steel. By varying the ratio of oxygen and argon, it is possible to remove carbon to controlled levels by oxidizing it to carbon monoxide without also oxidizing and losing expensive chromium. Thus, cheaper raw materials, such as high-carbon ferrochromium, may be used in the initial melting operation.
There are more than 100 grades of stainless steel. The majority are classified into five major groups in the family of stainless steels: austenitic, ferritic, martensitic, duplex, and precipitation-hardening. Austenitic steels, which contain 16 to 26 percent chromium and up to 35 percent nickel, usually have the highest corrosion resistance. They are not hardenable by heat treatment and are nonmagnetic. The most common type is the 18/8, or 304, grade, which contains 18 percent chromium and 8 percent nickel. Typical applications include aircraft and the dairy and food-processing industries. Standard ferritic steels contain 10.5 to 27 percent chromium and are nickel-free; because of their low carbon content (less than 0.2 percent), they are not hardenable by heat treatment and are used in less critical anticorrosion applications, such as architectural and auto trim. Martensitic steels typically contain 11.5 to 18 percent chromium and up to 1.2 percent carbon, with nickel sometimes added. They are hardenable by heat treatment, have modest corrosion resistance, and are employed in cutlery, surgical instruments, wrenches, and turbines. Duplex stainless steels are a combination of austenitic and ferritic stainless steels in equal amounts; they contain 21 to 27 percent chromium, 1.35 to 8 percent nickel, 0.05 to 3 percent copper, and 0.05 to 5 percent molybdenum. Duplex stainless steels are stronger and more resistant to corrosion than austenitic and ferritic stainless steels, which makes them useful in storage-tank construction, chemical processing, and containers for transporting chemicals. Precipitation-hardening stainless steel is characterized by its strength, which stems from the addition of aluminum, copper, and niobium to the alloy in amounts less than 0.5 percent of the alloy's total mass. It is comparable to austenitic stainless steel with respect to its corrosion resistance, and it contains 15 to 17.5 percent chromium, 3 to 5 percent nickel, and 3 to 5 percent copper. Precipitation-hardening stainless steel is used in the construction of long shafts.
2305) Earphones / Headphones
Gist
Headphones are also known as earphones or, colloquially, cans. Circumaural (around the ear) and supra-aural (over the ear) headphones use a band over the top of the head to hold the drivers in place. Another type, known as earbuds or earpieces, consists of individual units that plug into the user's ear canal.
They are electroacoustic transducers, which convert an electrical signal to a corresponding sound. Headphones let a single user listen to an audio source privately, in contrast to a loudspeaker, which emits sound into the open air for anyone nearby to hear. Headphones are also known as earphones or, colloquially, cans.
Headphones are a type of hardware output device that can be connected to a computer's line-out or speaker port, or wirelessly using Bluetooth. Small models that fit in the ear are referred to as earbuds. Using headphones, you can watch a movie or listen to audio without bothering anyone nearby.
Summary
A headphone is a small loudspeaker (earphone) held over the ear by a band or wire worn on the head. Headphones are commonly employed in situations in which levels of surrounding noise are high, as in an airplane cockpit; where a user such as a switchboard operator needs to keep the hands free; or where the listener is moving about or wants to listen without disturbing other people. A headphone may be equipped with one earphone or two and may include a miniature microphone, in which case it is called a headset. For listening to stereophonically reproduced sound, stereo headphones may be used, with separate channels of sound being fed to the two earphones.
An earphone is a small loudspeaker held or worn close to the listener’s ear or within the outer ear. Common forms include the hand-held telephone receiver; the headphone, in which one or two earphones are held in place by a band worn over the head; and the plug earphone, which is inserted in the outer opening of the ear. The conversion of electrical to acoustical signals is effected by any of the devices used in larger loudspeakers; the highest fidelity is provided by the so-called dynamic earphone, which ordinarily is made part of a headphone and equipped with a cushion to isolate the ears from other sound sources.
Details
Headphones are a pair of small loudspeaker drivers worn on or around the head over a user's ears. They are electroacoustic transducers, which convert an electrical signal to a corresponding sound. Headphones let a single user listen to an audio source privately, in contrast to a loudspeaker, which emits sound into the open air for anyone nearby to hear. Headphones are also known as earphones or, colloquially, cans. Circumaural (around the ear) and supra-aural (over the ear) headphones use a band over the top of the head to hold the drivers in place. Another type, known as earbuds or earpieces, consists of individual units that plug into the user's ear canal. A third type are bone conduction headphones, which typically wrap around the back of the head and rest in front of the ear canal, leaving the ear canal open. In the context of telecommunication, a headset is a combination of a headphone and microphone.
Headphones connect to a signal source such as an audio amplifier, radio, CD player, portable media player, mobile phone, video game console, or electronic musical instrument, either directly using a cord, or using wireless technology such as Bluetooth, DECT or FM radio. The first headphones were developed in the late 19th century for use by switchboard operators, to keep their hands free. Initially, the audio quality was mediocre and a step forward was the invention of high fidelity headphones.
Headphones exhibit a range of audio reproduction quality capabilities. Headsets designed for telephone use typically cannot reproduce sound with the high fidelity of expensive units designed for music listening by audiophiles. Corded headphones typically have either a 6.35 mm (1/4 inch) or 3.5 mm (approximately 1/8 inch) phone jack for plugging into the audio source. Some headphones are wireless, using Bluetooth connectivity to receive the audio signal by radio waves from source devices like cellphones and digital players. As a result of the Walkman effect, beginning in the 1980s, headphones started to be used in public places such as sidewalks, grocery stores, and public transit. Headphones are also used in various professional contexts: by audio engineers mixing sound for live concerts or sound recordings; by DJs, who use headphones to cue up the next song without the audience hearing; by aircraft pilots; and by call center employees. The latter two groups use headphones with an integrated microphone.
History
Headphones grew out of the need to free up a person's hands when operating a telephone. By the 1880s, soon after the invention of the telephone, telephone switchboard operators began to use head apparatuses to mount the telephone receiver. The receiver was mounted on the head by a clamp which held it next to the ear. The head mount freed the switchboard operator's hands, so that they could easily connect the wires of the telephone callers and receivers. The head-mounted telephone receiver in the singular form was called a headphone. These head-mounted phone receivers, unlike modern headphones, only had one earpiece.
By the 1890s a listening device with two earpieces was developed by the British company Electrophone. The device created a listening system through the phone lines that allowed the customer to connect into live feeds of performances at theaters and opera houses across London. Subscribers to the service could listen to the performance through a pair of massive earphones that connected below the chin and were held by a long rod.
French engineer Ernest Mercadier in 1891 patented a set of in-ear headphones. He was awarded U.S. Patent No. 454,138 for "improvements in telephone-receivers...which shall be light enough to be carried while in use on the head of the operator." The German company Siemens Brothers at this time was also selling headpieces for telephone operators which had two earpieces, although placed outside the ear. These headpieces by Siemens Brothers looked fairly similar to modern headphones. The majority of headgear used by telephone operators continued to have only one earpiece.
Modern headphones subsequently evolved out of the emerging field of wireless telegraphy, which was the beginning stage of radio broadcasting. Some early wireless telegraph developers chose to use the telephone receiver's speaker as the detector for the electrical signal of the wireless receiving circuit. By 1902 wireless telegraph innovators, such as Lee de Forest, were using two jointly head-mounted telephone receivers to hear the signal of the receiving circuit. The two head-mounted telephone receivers were called in the singular form "head telephones". By 1908 the headpiece began to be written simply as "head phones", and a year later the compound word "headphones" began to be used.
One of the earliest companies to make headphones for wireless operators was the Holtzer-Cabot Company in 1909. They were also makers of head receivers for telephone operators and normal telephone receivers for the home. Another early manufacturer of headphones was Nathaniel Baldwin. He was the first major supplier of headsets to the U.S. Navy. In 1910 he invented a prototype telephone headset due to his inability to hear sermons during Sunday service. He offered it for testing to the navy, which promptly ordered 100 of them because of their good quality. Wireless Specialty Apparatus Co., in partnership with Baldwin Radio Company, set up a manufacturing facility in Utah to fulfill orders.
These early headphones used moving iron drivers, with either single-ended or balanced armatures. The common single-ended type used voice coils wound around the poles of a permanent magnet, which were positioned close to a flexible steel diaphragm. The audio current through the coils varied the magnetic field of the magnet, exerting a varying force on the diaphragm, causing it to vibrate, creating sound waves. The requirement for high sensitivity meant that no damping was used, so the frequency response of the diaphragm had large peaks due to resonance, resulting in poor sound quality. These early models lacked padding, and were often uncomfortable to wear for long periods. Their impedance varied; headphones used in telegraph and telephone work had an impedance of 75 ohms. Those used with early wireless radio had more turns of finer wire to increase sensitivity. Impedance of 1,000 to 2,000 ohms was common, which suited both crystal sets and triode receivers. Some very sensitive headphones, such as those manufactured by Brandes around 1919, were commonly used for early radio work.
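The undamped resonance described above can be pictured with a simple second-order oscillator model. The sketch below is purely illustrative (the 1 kHz resonance and the Q values are assumed example figures, not measurements of any historical driver); it shows how low damping (high Q) produces the tall response peak that gave these early diaphragms their poor sound quality:

```python
import math

def resonance_gain(f, f0, q):
    """Magnitude response of a second-order resonator:
    |H| = 1 / sqrt((1 - r^2)^2 + (r/Q)^2), with r = f/f0.
    High Q (little damping) gives a tall, narrow peak at f0."""
    r = f / f0
    return 1.0 / math.sqrt((1 - r * r) ** 2 + (r / q) ** 2)

# Hypothetical diaphragm resonating at 1 kHz.
f0 = 1000.0
for q in (0.7, 10.0):  # well damped vs. nearly undamped
    peak = resonance_gain(f0, f0, q)
    print(f"Q = {q}: gain at resonance = {peak:.1f}x")
```

At resonance the gain equals Q, so an undamped diaphragm boosts frequencies near resonance roughly tenfold while leaving the rest of the band flat, which is the "large peaks" effect the text describes.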
In early powered radios, the headphone was part of the vacuum tube's plate circuit and carried dangerous voltages. It was normally connected directly to the positive high voltage battery terminal, and the other battery terminal was securely grounded. The use of bare electrical connections meant that users could be shocked if they touched the bare headphone connections while adjusting an uncomfortable headset.
In 1958, John C. Koss, an audiophile and jazz musician from Milwaukee, produced the first stereo headphones.
Smaller earbud type earpieces, which plugged into the user's ear canal, were first developed for hearing aids. They became widely used with transistor radios, which commercially appeared in 1954 with the introduction of the Regency TR-1. The most popular audio device in history, the transistor radio changed listening habits, allowing people to listen to radio anywhere. The earbud uses either a moving iron driver or a piezoelectric crystal to produce sound. The 3.5 mm radio and phone connector, which is the most commonly used in portable application today, has been used at least since the Sony EFM-117J transistor radio, which was released in 1964. Its popularity was reinforced with its use on the Walkman portable tape player in 1979.
Applications
Headphones may be used with stationary CD and DVD players, home theater, personal computers, or portable devices (e.g., digital audio player/MP3 player, mobile phone), as long as these devices are equipped with a headphone jack. Cordless headphones are not connected to their source by a cable. Instead, they receive the audio signal over a radio or infrared transmission link, such as FM, Bluetooth or Wi-Fi. These are battery-powered receiver systems, of which the headphone is only a component. Cordless headphones are used at events such as a silent disco or silent gig.
In the professional audio sector, headphones are used in live situations by disc jockeys with a DJ mixer, and sound engineers for monitoring signal sources. In radio studios, DJs use a pair of headphones when talking to the microphone while the speakers are turned off to eliminate acoustic feedback while monitoring their own voice. In studio recordings, musicians and singers use headphones to play or sing along to a backing track or band. In military applications, audio signals of many varieties are monitored using headphones.
Wired headphones are attached to an audio source by a cable. The most common connectors are 6.35 mm (1/4 inch) and 3.5 mm phone connectors. The larger 6.35 mm connector is more common on fixed location home or professional equipment. The 3.5 mm connector remains the most widely used connector for portable application today. Adapters are available for converting between 6.35 mm and 3.5 mm devices.
As active components, wireless headphones tend to be costlier because they require internal hardware such as a battery, a charging controller, an amplifier for the speaker drivers, and a wireless transceiver, whereas wired headphones are passive components that leave amplification to the audio source.
Some headphone cords are equipped with a serial potentiometer for volume control.
Wired headphones may be equipped with a non-detachable cable or a detachable auxiliary male-to-male plug, as well as some with two ports to allow connecting another wired headphone in a parallel circuit, which splits the audio signal to share with another participant, but can also be used to hear audio from two inputs simultaneously. An external audio splitter can retrofit this ability.
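Splitting one output between two headphones also halves the load impedance seen by the source, which is one reason volume can drop when sharing. A small sketch of the parallel-impedance arithmetic (the 32-ohm figure is a typical headphone value assumed for illustration, not taken from the text):

```python
def parallel_impedance(*impedances_ohms):
    """Combined impedance of loads wired in parallel:
    1/Z_total = 1/Z1 + 1/Z2 + ...  (simple resistive approximation)."""
    return 1.0 / sum(1.0 / z for z in impedances_ohms)

# Two typical 32-ohm headphones sharing one splitter:
print(parallel_impedance(32.0, 32.0))  # 16.0 ohms seen by the source
```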
Applications for audiometric testing
Various types of specially designed headphones or earphones are also used to evaluate the status of the auditory system in the field of audiology for establishing hearing thresholds, medically diagnosing hearing loss, identifying other hearing related disease, and monitoring hearing status in occupational hearing conservation programs. Specific models of headphones have been adopted as the standard due to the ease of calibration and ability to compare results between testing facilities.
Supra-aural style headphones are historically the most commonly used in audiology as they are the easiest to calibrate and were considered the standard for many years. Commonly used models are the Telephonics Dynamic Headphone (TDH) 39, TDH-49, and TDH-50. In-the-ear or insert style earphones are used more commonly today as they provide higher levels of interaural attenuation, introduce less variability when testing 6,000 and 8,000 Hz, and avoid testing issues resulting from collapsed ear canals. A commonly used model of insert earphone is the Etymotic Research ER-3A. Circum-aural earphones are also used to establish hearing thresholds in the extended high frequency range (8,000 Hz to 20,000 Hz). Along with Etymotic Research ER-2A insert earphones, the Sennheiser HDA300 and Koss HV/1A circum-aural earphones are the only models that have reference equivalent threshold sound pressure level values for the extended high frequency range as described by ANSI standards.
Audiometers and headphones must be calibrated together. During the calibration process, the output signal from the audiometer to the headphones is measured with a sound level meter to ensure that the signal is accurate to the reading on the audiometer for sound pressure level and frequency. Calibration is done with the earphones in an acoustic coupler that is intended to mimic the transfer function of the outer ear. Because specific headphones are used in the initial audiometer calibration process, they cannot be replaced with any other set of headphones, even from the same make and model.
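The calibration check described above can be pictured as comparing the measured sound pressure level at each test frequency against the audiometer's dial setting and a permitted tolerance. The sketch below is illustrative only; the 3 dB tolerance and the example readings are assumptions, not values quoted from any standard or from the text:

```python
def within_tolerance(dial_db, measured_db, tol_db=3.0):
    """Return True if the measured sound pressure level is within
    the permitted tolerance of the audiometer's dial setting."""
    return abs(measured_db - dial_db) <= tol_db

# Hypothetical checks: frequency (Hz) -> (dial setting, measured level) in dB SPL.
readings = {1000: (70.0, 71.2), 4000: (70.0, 74.5)}
for freq, (dial, measured) in readings.items():
    ok = within_tolerance(dial, measured)
    print(freq, "OK" if ok else "needs recalibration")
```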
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
2306) Graphite
Gist
Graphite is a crystalline form of carbon and a semimetal, as well as one of the best-known carbon allotropes. Under ideal conditions it is one of the most stable forms of carbon, and it serves as the standard state of carbon for defining the heat of formation of carbon compounds.
Graphite is used in pencils, lubricants, crucibles, foundry facings, polishes, brushes for electric motors, and cores of nuclear reactors.
Summary
Graphite is a mineral consisting of carbon. Graphite has a greasy feel and leaves a black mark, thus the name from the Greek verb graphein, “to write.”
Graphite has a layered structure that consists of rings of six carbon atoms arranged in widely spaced horizontal sheets. Graphite thus crystallizes in the hexagonal system, in contrast to diamond, another form of carbon, which crystallizes in the isometric (cubic) system. Such pairs of differing forms of the same element usually are rather similar in their physical properties, but not so in this case. Graphite is dark gray to black, opaque, and very soft (with a Mohs scale hardness of 1.5), while diamond may be colorless and transparent and is the hardest naturally occurring substance (with a Mohs scale hardness of 10). Graphite is very soft because the individual layers of carbon atoms are not as tightly bound together as the atoms within each layer. It is an excellent conductor of heat and electricity.
Graphite is formed by the metamorphosis of sediments containing carbonaceous material, by the reaction of carbon compounds with hydrothermal solutions or magmatic fluids, or possibly by the crystallization of magmatic carbon. It occurs as isolated scales, large masses, or veins in older crystalline rocks, gneiss, schist, quartzite, and marble and also in granites, pegmatites, and carbonaceous clay slates. Small isometric crystals of graphitic carbon (possibly pseudomorphs after diamond) found in meteoritic iron are called cliftonite.
Naturally occurring graphite is classified into three types: amorphous, flake, and vein. Amorphous is the most common kind and is formed by metamorphism under low pressures and temperatures. It is found in coal and shale and has the lowest carbon content, typically 70 to 90 percent, of the three types. Flake graphite appears in flat layers and is formed by metamorphism under high pressures and temperatures. It is the most commonly used type and has a carbon content between 85 and 98 percent. Vein graphite is the rarest form and is likely formed when carbon compounds react with hydrothermal solutions or magmatic fluids. Vein graphite can have a purity greater than 99 percent and is commercially mined only in Sri Lanka.
Graphite was first synthesized accidentally by Edward G. Acheson while he was performing high-temperature experiments on carborundum. He found that at about 4,150 °C (7,500 °F) the silicon in the carborundum vaporized, leaving the carbon behind in graphitic form. Acheson was granted a patent for graphite manufacture in 1896, and commercial production started in 1897. Since 1918 petroleum coke, small and imperfect graphite crystals surrounded by organic compounds, has been the major raw material in the production of 99 to 99.5 percent pure graphite.
Graphite is used in pencils, lubricants, crucibles, foundry facings, polishes, brushes for electric motors, and cores of nuclear reactors. Its high thermal and electrical conductivity make it a key part of steelmaking, where it is used as electrodes in electric arc furnaces. In the early 21st century, global demand for graphite has increased because of its use as the anode in lithium-ion batteries for electric vehicles. About 75 percent of graphite is mined in China, with significant amounts mined in Madagascar, Mozambique, and Brazil.
Details
Graphite is a crystalline allotrope (form) of the element carbon. It consists of many stacked layers of graphene, typically hundreds of layers. Graphite occurs naturally and is the most stable form of carbon under standard conditions. Synthetic and natural graphite are consumed on a large scale (1.3 million metric tons per year in 2022) in many critical industries, including refractories (50%), lithium-ion batteries (18%), foundries (10%), and lubricants (5%), among others (17%). Under extremely high pressures and extremely high temperatures it converts to diamond. It is a good conductor of both heat and electricity.
Types and varieties:
Natural graphite
Graphite occurs naturally in ores that can be classified into one of two categories either amorphous (microcrystalline) or crystalline (flake or lump/chip) which is determined by the ore morphology, crystallinity, and grain size. All naturally occurring graphite deposits are formed from the metamorphism of carbonaceous sedimentary rocks, and the ore type is due to its geologic setting. Coal that has been thermally metamorphosed is the typical source of amorphous graphite. Crystalline flake graphite is mined from carbonaceous metamorphic rocks, while lump or chip graphite is mined from veins which occur in high-grade metamorphic regions. There are serious negative environmental impacts to graphite mining.
Synthetic graphite
Synthetic graphite is graphite of high purity produced by thermal graphitization, at temperatures in excess of 2,100 °C, from hydrocarbon materials, most commonly by a process known as the Acheson process. The high temperatures are maintained for weeks, and are required not only to form the graphite from the precursor carbons but also to vaporize any impurities that may be present, including hydrogen, nitrogen, sulfur, organics, and metals. This is why synthetic graphite is highly pure (in excess of 99.9% carbon), but it typically has lower density and conductivity and higher porosity than its natural equivalent. Synthetic graphite can also be formed into very large flakes (centimetres across) while maintaining its high purity, unlike almost all sources of natural graphite. Synthetic graphite can also be formed by other methods, including chemical vapor deposition from hydrocarbons at temperatures above 2,500 K (2,230 °C), decomposition of thermally unstable carbides, or crystallization from metal melts supersaturated with carbon.
Biographite
Biographite is a proposed commercial product for reducing the carbon footprint of lithium iron phosphate (LFP) batteries. It is produced from forestry waste and similar byproducts by a company in New Zealand using a novel process called thermo-catalytic graphitisation; the project is supported by grants from interested parties, including a forestry company in Finland and a battery maker in Hong Kong.
Natural graphite:
Occurrence
Graphite occurs in metamorphic rocks as a result of the reduction of sedimentary carbon compounds during metamorphism. It also occurs in igneous rocks and in meteorites. Minerals associated with graphite include quartz, calcite, micas and tourmaline. The principal export sources of mined graphite are, in order of tonnage: China, Mexico, Canada, Brazil, and Madagascar. Significant unexploited graphite resources also exist in Colombia's Cordillera Central in the form of graphite-bearing schists.
In meteorites, graphite occurs with troilite and silicate minerals. Small graphitic crystals in meteoritic iron are called cliftonite. Some microscopic grains have distinctive isotopic compositions, indicating that they were formed before the Solar System. They are one of about 12 known types of minerals that predate the Solar System and have also been detected in molecular clouds. These minerals were formed in the ejecta when supernovae exploded or low to intermediate-sized stars expelled their outer envelopes late in their lives. Graphite may be the second or third oldest mineral in the Universe.
Structure
Graphite consists of sheets of trigonal planar carbon. The individual layers are called graphene. In each layer, each carbon atom is bonded to three other atoms, forming a continuous layer of sp²-bonded carbon hexagons, like a honeycomb lattice, with a bond length of 0.142 nm; the distance between planes is 0.335 nm. Bonding between layers consists of relatively weak van der Waals bonds, which allows the graphene-like layers to be easily separated and to glide past each other. Electrical conductivity perpendicular to the layers is consequently about 1000 times lower.
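The bond length and interlayer spacing quoted above are enough to recover graphite's theoretical density. A hexagonal unit cell has in-plane parameter a = 0.142 nm × √3 and height c spanning two layers, and contains four carbon atoms; the sketch below works through the arithmetic (the four-atom cell and ABA stacking are standard crystallographic facts, not stated in the text):

```python
import math

BOND_NM = 0.142          # C-C bond length within a graphene layer (nm)
SPACING_NM = 0.335       # distance between layers (nm)
ATOMIC_MASS_C = 12.011   # g/mol
AVOGADRO = 6.02214076e23

# Hexagonal unit cell: a = bond * sqrt(3); c spans two layers (ABA stacking);
# the cell contains 4 carbon atoms.
a_cm = BOND_NM * math.sqrt(3) * 1e-7
c_cm = 2 * SPACING_NM * 1e-7
cell_volume = a_cm ** 2 * math.sin(math.radians(60)) * c_cm
density = 4 * ATOMIC_MASS_C / AVOGADRO / cell_volume
print(f"theoretical density = {density:.2f} g/cm^3")
```

The result comes out close to graphite's measured density of about 2.26 g/cm³, confirming that the two quoted distances capture the structure.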
There are two allotropic forms called alpha (hexagonal) and beta (rhombohedral), differing in terms of the stacking of the graphene layers: stacking in alpha graphite is ABA, as opposed to ABC stacking in the energetically less stable beta graphite. Rhombohedral graphite cannot occur in pure form. Natural graphite, or commercial natural graphite, contains 5 to 15% rhombohedral graphite and this may be due to intensive milling. The alpha form can be converted to the beta form through shear forces, and the beta form reverts to the alpha form when it is heated to 1300 °C for four hours.
Thermodynamics
The equilibrium pressure and temperature conditions for a transition between graphite and diamond are well established theoretically and experimentally. The equilibrium pressure varies linearly with temperature, from 1.7 GPa at 0 K to 12 GPa at 5000 K (the diamond/graphite/liquid triple point). However, the phases have a wide region about this line where they can coexist. At normal temperature and pressure, 20 °C (293 K) and 1 standard atmosphere (0.10 MPa), the stable phase of carbon is graphite, but diamond is metastable and its rate of conversion to graphite is negligible. However, at temperatures above about 4500 K, diamond rapidly converts to graphite. Rapid conversion of graphite to diamond requires pressures well above the equilibrium line: at 2000 K, a pressure of 35 GPa is needed.
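Since the equilibrium pressure changes linearly between the two endpoints given above, the boundary at any temperature can be interpolated directly. A minimal sketch using only the 1.7 GPa (0 K) and 12 GPa (5000 K) endpoints from the text:

```python
def equilibrium_pressure_gpa(temp_k):
    """Linear interpolation of the graphite/diamond equilibrium line
    between 1.7 GPa at 0 K and 12 GPa at 5000 K (the triple point)."""
    return 1.7 + (12.0 - 1.7) * temp_k / 5000.0

p_eq = equilibrium_pressure_gpa(2000)
print(f"equilibrium pressure at 2000 K: {p_eq:.2f} GPa")  # ~5.8 GPa
# Rapid conversion at 2000 K needs ~35 GPa, far above the equilibrium line,
# consistent with the wide coexistence region described in the text.
```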
Other properties
The acoustic and thermal properties of graphite are highly anisotropic, since phonons propagate quickly along the tightly bound planes, but are slower to travel from one plane to another. Graphite's high thermal stability and electrical and thermal conductivity facilitate its widespread use as electrodes and refractories in high temperature material processing applications. However, in oxygen-containing atmospheres graphite readily oxidizes to form carbon dioxide at temperatures of 700 °C and above.
Graphite is an electrical conductor, hence useful in such applications as arc lamp electrodes. It can conduct electricity due to the vast electron delocalization within the carbon layers (a phenomenon called aromaticity). These valence electrons are free to move, so are able to conduct electricity. However, the electricity is primarily conducted within the plane of the layers. The conductive properties of powdered graphite allow its use as a pressure sensor in carbon microphones.
Graphite and graphite powder are valued in industrial applications for their self-lubricating and dry lubricating properties. However, the use of graphite is limited by its tendency to facilitate pitting corrosion in some stainless steel, and to promote galvanic corrosion between dissimilar metals (due to its electrical conductivity). It is also corrosive to aluminium in the presence of moisture. For this reason, the US Air Force banned its use as a lubricant in aluminium aircraft, and discouraged its use in aluminium-containing automatic weapons. Even graphite pencil marks on aluminium parts may facilitate corrosion. Another high-temperature lubricant, hexagonal boron nitride, has the same molecular structure as graphite. It is sometimes called white graphite, due to its similar properties.
When a large number of crystallographic defects bind these planes together, graphite loses its lubrication properties and becomes what is known as pyrolytic graphite. It is also highly anisotropic, and diamagnetic, thus it will float in mid-air above a strong magnet. (If it is made in a fluidized bed at 1000–1300 °C then it is isotropic turbostratic, and is used in blood-contacting devices like mechanical heart valves and is called pyrolytic carbon, and is not diamagnetic. Pyrolytic graphite and pyrolytic carbon are often confused but are very different materials.)
Natural and crystalline graphites are not often used in pure form as structural materials, due to their shear-planes, brittleness, and inconsistent mechanical properties.
Additional Information
Graphite is a mineral composed of stacked sheets of carbon atoms with a hexagonal crystal structure. It is the most stable form of pure carbon under standard conditions. Graphite is very soft, has a low specific gravity, is relatively non-reactive, and has high electrical and thermal conductivity.
Graphite occurs naturally in igneous and metamorphic rocks, where high temperatures and pressures compress carbon into graphite. Graphite can also be created synthetically by heating materials with high carbon content (e.g. petroleum coke or coal-tar pitch). The carbon-rich material is heated to 2500 to 3000 degrees Celsius, which is hot enough to "purify" the material of contaminants, allowing the carbon to form its hexagonal sheets.
Graphite is extremely soft and breaks into thin flexible flakes that easily slide over one another, resulting in a greasy feel. Due to this, graphite is a good "dry" lubricant and can be used in applications where wet lubricants (like lubricating oil) cannot.
Carbon has several other allotropes, or forms, that occur naturally, each with its own crystal structure. One form is graphene, which is a single layer of carbon atoms in a hexagonal pattern. Another well-known allotrope of carbon is diamond. Although also composed of pure carbon, diamond is almost entirely different in its physical properties.
Uses
Graphite is used in a number of applications that require high temperatures and need a material that will not melt or disintegrate. Graphite is used to make the crucibles for the steel industry. Graphite is also used as a neutron moderator in certain nuclear reactors, like the Soviet RBMK, due to its ability to slow down fast-moving neutrons.
Other common uses of graphite include:
* Pencil lead
* Lubricant
* Electrodes in batteries
* Brake linings for heavy vehicles.
2307) Ulna
Gist
The ulna is one of the two long bones of the forearm that, in conjunction with the radius, make up the antebrachium. The bone spans from the elbow to the wrist on the medial side of the forearm when in anatomical position. In comparison to the radius, the ulna is longer and thinner.
Summary
The ulna is the inner of the two bones of the forearm when viewed with the palm facing forward. (The other, shorter bone of the forearm is the radius.) The upper end of the ulna presents a large C-shaped notch—the semilunar, or trochlear, notch—which articulates with the trochlea of the humerus (upper arm bone) to form the elbow joint. The projection that forms the upper border of this notch is called the olecranon process; it articulates behind the humerus in the olecranon fossa and may be felt as the point of the elbow. The projection that forms the lower border of the trochlear notch, the coronoid process, enters the coronoid fossa of the humerus when the elbow is flexed. On the outer side is the radial notch, which articulates with the head of the radius. The upper end of the bone is elsewhere roughened for muscle attachment. The shaft is triangular in cross section; an interosseous ridge extends its length and provides attachment for the interosseous membrane connecting the ulna and the radius. The lower end of the bone presents a small cylindrical head that articulates with the radius at the side and the wrist bones below. Also at the lower end is a styloid process, medially, that articulates with a disk between it and the cuneiform (os triquetrum) wrist bone.
The ulna is present in all land vertebrates. In amphibians and some reptiles the radius and ulna do not articulate. The elbow joint evolved first among birds and mammals. The radius tends to be slender in birds; but the ulna is more often reduced in mammals, especially in those adapted for running and, in the case of bats, flying.
Details:
The ulna or ulnar bone (pl.: ulnae or ulnas) is a long bone in the forearm stretching from the elbow to the wrist. It is on the same side of the forearm as the little finger, running parallel to the radius, the forearm's other long bone. Longer and thinner than the radius, the ulna is considered to be the smaller long bone of the lower arm. The corresponding bone in the lower leg is the fibula.
Structure
The ulna is a long bone found in the forearm that stretches from the elbow to the wrist, and when in standard anatomical position, is found on the medial side of the forearm. It is broader close to the elbow, and narrows as it approaches the wrist.
Close to the elbow, the ulna has a bony process, the olecranon process, a hook-like structure that fits into the olecranon fossa of the humerus. This prevents hyperextension and forms a hinge joint with the trochlea of the humerus. There is also a radial notch for the head of the radius, and the ulnar tuberosity to which muscles attach.
Close to the wrist, the ulna has a styloid process.
Near the elbow
Near the elbow, the ulna has two curved processes, the olecranon and the coronoid process; and two concave, articular cavities, the semilunar and radial notches.
The olecranon is a large, thick, curved eminence, situated at the upper and back part of the ulna. It is bent forward at the summit so as to present a prominent lip which is received into the olecranon fossa of the humerus in extension of the forearm. Its base is contracted where it joins the body and the narrowest part of the upper end of the ulna. Its posterior surface, directed backward, is triangular, smooth, subcutaneous, and covered by a bursa. Its superior surface is of quadrilateral form, marked behind by a rough impression for the insertion of the triceps brachii; and in front, near the margin, by a slight transverse groove for the attachment of part of the posterior ligament of the elbow joint. Its anterior surface is smooth, concave, and forms the upper part of the semilunar notch. Its borders present continuations of the groove on the margin of the superior surface; they serve for the attachment of ligaments: the back part of the ulnar collateral ligament medially, and the posterior ligament laterally. From the medial border a part of the flexor carpi ulnaris arises; while to the lateral border the anconeus is attached.
The coronoid process is a triangular eminence projecting forward from the upper and front part of the ulna. Its base is continuous with the body of the bone, and of considerable strength. Its apex is pointed, slightly curved upward, and in flexion of the forearm is received into the coronoid fossa of the humerus. Its upper surface is smooth, concave, and forms the lower part of the semilunar notch. Its antero-inferior surface is concave, and marked by a rough impression for the insertion of the brachialis. At the junction of this surface with the front of the body is a rough eminence, the tuberosity of the ulna, which gives insertion to a part of the brachialis; to the lateral border of this tuberosity the oblique cord is attached. Its lateral surface presents a narrow, oblong, articular depression, the radial notch. Its medial surface, by its prominent, free margin, serves for the attachment of part of the ulnar collateral ligament. At the front part of this surface is a small rounded eminence for the origin of one head of the flexor digitorum superficialis; behind the eminence is a depression for part of the origin of the flexor digitorum profundus; descending from the eminence is a ridge which gives origin to one head of the pronator teres. Frequently, the flexor pollicis longus arises from the lower part of the coronoid process by a rounded bundle of muscular fibers.
The semilunar notch is a large depression, formed by the olecranon and the coronoid process, and serving as articulation with the trochlea of the humerus. About the middle of either side of this notch is an indentation, which contracts it somewhat, and indicates the junction of the olecranon and the coronoid process. The notch is concave from above downward, and divided into a medial and a lateral portion by a smooth ridge running from the summit of the olecranon to the tip of the coronoid process. The medial portion is the larger, and is slightly concave transversely; the lateral is convex above, slightly concave below.
The radial notch is a narrow, oblong, articular depression on the lateral side of the coronoid process; it receives the circumferential articular surface of the head of the radius. It is concave from before backward, and its prominent extremities serve for the attachment of the annular ligament.
Body
The body of the ulna at its upper part is prismatic in form, and curved so as to be convex behind and lateralward; its central part is straight; its lower part is rounded, smooth, and bent a little lateralward. It tapers gradually from above downward, and has three borders and three surfaces.
Borders
* The volar border (margo volaris; anterior border) begins above at the prominent medial angle of the coronoid process, and ends below in front of the styloid process. Its upper part, well-defined, and its middle portion, smooth and rounded, give origin to the flexor digitorum profundus; its lower fourth serves for the origin of the pronator quadratus. This border separates the volar from the medial surface.
* The dorsal border (margo dorsalis; posterior border) begins above at the apex of the triangular subcutaneous surface at the back part of the olecranon, and ends below at the back of the styloid process; it is well-marked in the upper three-fourths, and gives attachment to an aponeurosis which affords a common origin to the flexor carpi ulnaris, the extensor carpi ulnaris, and the flexor digitorum profundus; its lower fourth is smooth and rounded. This border separates the medial from the dorsal surface.
* The interosseous crest (crista interossea; external or interosseous border) begins above by the union of two lines, which converge from the extremities of the radial notch and enclose between them a triangular space for the origin of part of the supinator; it ends below at the head of the ulna. Its upper part is sharp, its lower fourth smooth and rounded. This crest gives attachment to the interosseous membrane, and separates the volar from the dorsal surface.
Surfaces
* The volar surface (facies volaris; anterior surface), much broader above than below, is concave in its upper three-fourths, and gives origin to the flexor digitorum profundus; its lower fourth, also concave, is covered by the pronator quadratus. The lower fourth is separated from the remaining portion by a ridge, directed obliquely downward and medialward, which marks the extent of origin of the pronator quadratus. At the junction of the upper with the middle third of the bone is the nutrient canal, directed obliquely upward.
* The dorsal surface (facies dorsalis; posterior surface) directed backward and lateralward, is broad and concave above; convex and somewhat narrower in the middle; narrow, smooth, and rounded below. On its upper part is an oblique ridge, which runs from the dorsal end of the radial notch, downward to the dorsal border; the triangular surface above this ridge receives the insertion of the anconeus, while the upper part of the ridge affords attachment to the supinator. Below this the surface is subdivided by a longitudinal ridge, sometimes called the perpendicular line, into two parts: the medial part is smooth, and covered by the extensor carpi ulnaris; the lateral portion, wider and rougher, gives origin from above downward to the supinator, the abductor pollicis longus, the extensor pollicis longus, and the extensor indicis proprius.
* The medial surface (facies medialis; internal surface) is broad and concave above, narrow and convex below. Its upper three-fourths give origin to the flexor digitorum profundus; its lower fourth is subcutaneous.
Near the wrist
Near the wrist, the ulna ends in two eminences: the lateral and larger is a rounded, articular eminence, termed the head of the ulna; the medial, narrower and more projecting, is a non-articular eminence, the ulnar styloid process.
* The head of the ulna presents an articular surface, part of which, of an oval or semilunar form, is directed downward, and articulates with the upper surface of the triangular articular disk which separates it from the wrist-joint; the remaining portion, directed lateralward, is narrow, convex, and received into the ulnar notch of the radius.
* The styloid process projects from the medial and back part of the bone; it descends a little lower than the head, and its rounded end affords attachment to the ulnar collateral ligament of the wrist-joint.
* The head is separated from the styloid process by a depression for the attachment of the apex of the triangular articular disk, and behind, by a shallow groove for the tendon of the extensor carpi ulnaris.
Microanatomy
The ulna is a long bone. The long, narrow medullary cavity of the ulna is enclosed in a strong wall of cortical tissue which is thickest along the interosseous border and dorsal surface. At the extremities the compact layer thins. The compact layer is continued onto the back of the olecranon as a plate of close spongy bone with lamellae parallel. From the inner surface of this plate and the compact layer below it trabeculae arch forward toward the olecranon and coronoid and cross other trabeculae, passing backward over the medullary cavity from the upper part of the shaft below the coronoid. Below the coronoid process there is a small area of compact bone from which trabeculae curve upward to end obliquely to the surface of the semilunar notch which is coated with a thin layer of compact bone. The trabeculae at the lower end have a more longitudinal direction.
Development
The ulna is ossified from three centers: one each for the body, the wrist end, and the elbow end (near the top of the olecranon). Ossification begins near the middle of the body of the ulna, about the eighth week of fetal life, and soon extends through the greater part of the bone.
At birth, the ends are cartilaginous. About the fourth year, a center appears in the middle of the head, and soon extends into the ulnar styloid process. About the tenth year, a center appears in the olecranon near its extremity, the chief part of this process being formed by an upward extension of the body. The upper epiphysis joins the body about the sixteenth, the lower about the twentieth year.
Function:
Joints
The ulna forms part of the wrist and elbow joints. Specifically, the ulna joins (articulates) with:
* the trochlea of the humerus, at the elbow, as a hinge joint with the semilunar trochlear notch of the ulna.
* the radius, near the elbow, as a pivot joint; this allows the radius to cross over the ulna in pronation.
* the distal radius, where it fits into the ulnar notch.
* the radius along its length via the interosseous membrane that forms a syndesmosis joint.
Additional Information
The ulna is a long thin bone with a small distal head that bears the styloid process, and an expanded proximal end. The proximal end terminates in the olecranon process and bears the semilunar notch on its upper surface. In man, the head of the ulna does not articulate with any of the bones of the carpus. In the rat, the ulna may articulate via its styloid process with the triquetrum and possibly the pisiform bone. The radius and ulna are connected more or less throughout their length by an interosseous ligament, which contributes to the origins of some muscles of the forearm.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
2308) Battery
Gist
A battery is a device that converts chemical energy contained within its active materials directly into electric energy by means of an electrochemical oxidation-reduction (redox) reaction. This type of reaction involves the transfer of electrons from one material to another via an electric circuit.
Batteries consist of two electrical terminals called the cathode and the anode, separated by a chemical material called an electrolyte. To accept and release energy, a battery is coupled to an external circuit.
A chemical reaction between the metals and the electrolyte frees more electrons in one metal than it does in the other. The metal that frees more electrons develops a negative charge (it becomes the anode), and the other metal develops a positive charge (the cathode).
Summary
A battery, in electricity and electrochemistry, is any of a class of devices that convert chemical energy directly into electrical energy. Although the term battery, in strict usage, designates an assembly of two or more galvanic cells capable of such energy conversion, it is commonly applied to a single cell of this kind.
Every battery (or cell) has a cathode, or positive plate, and an anode, or negative plate. These electrodes must be separated by and are often immersed in an electrolyte that permits the passage of ions between the electrodes. The electrode materials and the electrolyte are chosen and arranged so that sufficient electromotive force (measured in volts) and electric current (measured in amperes) can be developed between the terminals of a battery to operate lights, machines, or other devices. Since an electrode contains only a limited number of units of chemical energy convertible to electrical energy, it follows that a battery of a given size has only a certain capacity to operate devices and will eventually become exhausted. The active parts of a battery are usually encased in a box with a cover system (or jacket) that keeps air outside and the electrolyte solvent inside and that provides a structure for the assembly.
Commercially available batteries are designed and built with market factors in mind. The quality of materials and the complexity of electrode and container design are reflected in the market price sought for any specific product. As new materials are discovered or the properties of traditional ones improved, however, the typical performance of even older battery systems sometimes increases by large percentages.
Batteries are divided into two general groups: (1) primary batteries and (2) secondary, or storage, batteries. Primary batteries are designed to be used until the voltage is too low to operate a given device and are then discarded. Secondary batteries have many special design features, as well as particular materials for the electrodes, that permit them to be reconstituted (recharged). After partial or complete discharge, they can be recharged by the application of direct current (DC) voltage. While the original state is usually not restored completely, the loss per recharging cycle in commercial batteries is only a small fraction of 1 percent even under varied conditions.
Details
An electric battery is a source of electric power consisting of one or more electrochemical cells with external connections for powering electrical devices. When a battery is supplying power, its positive terminal is the cathode and its negative terminal is the anode. The terminal marked negative is the source of electrons that will flow through an external electric circuit to the positive terminal. When a battery is connected to an external electric load, a redox reaction converts high-energy reactants to lower-energy products, and the free-energy difference is delivered to the external circuit as electrical energy. Historically the term "battery" specifically referred to a device composed of multiple cells; however, the usage has evolved to include devices composed of a single cell.
Primary (single-use or "disposable") batteries are used once and discarded, as the electrode materials are irreversibly changed during discharge; a common example is the alkaline battery used for flashlights and a multitude of portable electronic devices. Secondary (rechargeable) batteries can be discharged and recharged multiple times using an applied electric current; the original composition of the electrodes can be restored by reverse current. Examples include the lead–acid batteries used in vehicles and lithium-ion batteries used for portable electronics such as laptops and mobile phones.
Batteries come in many shapes and sizes, from miniature cells used to power hearing aids and wristwatches to, at the largest extreme, huge battery banks the size of rooms that provide standby or emergency power for telephone exchanges and computer data centers. Batteries have much lower specific energy (energy per unit mass) than common fuels such as gasoline. In automobiles, this is somewhat offset by the higher efficiency of electric motors in converting electrical energy to mechanical work, compared to combustion engines.
History:
Invention
Benjamin Franklin first used the term "battery" in 1749 when he was doing experiments with electricity using a set of linked Leyden jar capacitors. Franklin grouped a number of the jars into what he described as a "battery", using the military term for weapons functioning together. By multiplying the number of holding vessels, a stronger charge could be stored, and more power would be available on discharge.
Italian physicist Alessandro Volta built and described the first electrochemical battery, the voltaic pile, in 1800. This was a stack of copper and zinc plates, separated by brine-soaked paper disks, that could produce a steady current for a considerable length of time. Volta did not understand that the voltage was due to chemical reactions. He thought that his cells were an inexhaustible source of energy, and that the associated corrosion effects at the electrodes were a mere nuisance, rather than an unavoidable consequence of their operation, as Michael Faraday showed in 1834.
Although early batteries were of great value for experimental purposes, in practice their voltages fluctuated and they could not provide a large current for a sustained period. The Daniell cell, invented in 1836 by British chemist John Frederic Daniell, was the first practical source of electricity, becoming an industry standard and seeing widespread adoption as a power source for electrical telegraph networks. It consisted of a copper pot filled with a copper sulfate solution, in which was immersed an unglazed earthenware container filled with sulfuric acid and a zinc electrode.
These wet cells used liquid electrolytes, which were prone to leakage and spillage if not handled correctly. Many used glass jars to hold their components, which made them fragile and potentially dangerous. These characteristics made wet cells unsuitable for portable appliances. Near the end of the nineteenth century, the invention of dry cell batteries, which replaced the liquid electrolyte with a paste, made portable electrical devices practical.
Batteries in vacuum tube devices historically used a wet cell for the "A" battery (to provide power to the filament) and a dry cell for the "B" battery (to provide the plate voltage).
Future
Between 2010 and 2018, annual battery demand grew by 30%, reaching a total of 180 GWh in 2018. Conservatively, the growth rate is expected to be maintained at an estimated 25%, culminating in demand reaching 2600 GWh in 2030. In addition, cost reductions are expected to further increase the demand to as much as 3562 GWh.
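The 2030 figure is consistent with simple compound growth from the 2018 baseline; a minimal sketch, assuming a flat 25% annual growth rate (a simplification of the projection cited above):

```python
# Compound-growth sketch: 180 GWh in 2018 growing ~25% per year until 2030.
base_gwh = 180.0
rate = 0.25
years = 2030 - 2018  # 12 years
projected = base_gwh * (1 + rate) ** years
print(round(projected))  # roughly 2600 GWh
```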
Important reasons for this high rate of growth of the electric battery industry include the electrification of transport, and large-scale deployment in electricity grids, supported by anthropogenic climate change-driven moves away from fossil-fuel combusted energy sources to cleaner, renewable sources, and more stringent emission regimes.
Distributed electric batteries, such as those in battery electric vehicles (vehicle-to-grid) and in home energy storage, become active participants in smart power supply grids when paired with smart metering and connected for demand response. New methods of reuse, such as echelon use of partly-used batteries, add to the overall utility of electric batteries, reduce energy storage costs, and also reduce pollution and emission impacts through longer service lives. In echelon use, vehicle batteries that have degraded to less than 80% of their original capacity, usually after 5–8 years of service, are repurposed as backup supply or for renewable energy storage systems.
Grid-scale energy storage envisages the large-scale use of batteries to collect and store energy from the grid or a power plant and then discharge that energy at a later time to provide electricity or other grid services when needed. Grid-scale energy storage systems (either turnkey or distributed) are important components of smart power supply grids.
Chemistry and principles
Batteries convert chemical energy directly to electrical energy. In many cases, the electrical energy released is the difference in the cohesive or bond energies of the metals, oxides, or molecules undergoing the electrochemical reaction. For instance, energy can be stored in Zn or Li, which are high-energy metals because they are not stabilized by d-electron bonding, unlike transition metals. Batteries are designed so that the energetically favorable redox reaction can occur only when electrons move through the external part of the circuit.
A battery consists of some number of voltaic cells. Each cell consists of two half-cells connected in series by a conductive electrolyte containing metal cations. One half-cell includes electrolyte and the negative electrode, the electrode to which anions (negatively charged ions) migrate; the other half-cell includes electrolyte and the positive electrode, to which cations (positively charged ions) migrate. Cations are reduced (electrons are added) at the cathode, while metal atoms are oxidized (electrons are removed) at the anode. Some cells use different electrolytes for each half-cell; then a separator is used to prevent mixing of the electrolytes while allowing ions to flow between half-cells to complete the electrical circuit.
The voltage developed across a cell's terminals depends on the energy release of the chemical reactions of its electrodes and electrolyte. Alkaline and zinc–carbon cells have different chemistries, but approximately the same emf of 1.5 volts; likewise NiCd and NiMH cells have different chemistries, but approximately the same emf of 1.2 volts. The high electrochemical potential changes in the reactions of lithium compounds give lithium cells emfs of 3 volts or more.
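A cell's emf can be estimated from the standard electrode potentials of its two half-reactions; a sketch for the copper–zinc (Daniell) chemistry discussed earlier, where the standard potentials are common literature values assumed here:

```python
# Cell emf = E(cathode) - E(anode), using standard electrode potentials in volts.
E_STANDARD = {"Cu2+/Cu": 0.34, "Zn2+/Zn": -0.76}  # literature values at 25 C

def cell_emf(cathode, anode):
    """Standard emf of a galvanic cell from its half-cell potentials."""
    return E_STANDARD[cathode] - E_STANDARD[anode]

print(round(cell_emf("Cu2+/Cu", "Zn2+/Zn"), 2))  # 1.1 -> the Daniell cell's ~1.1 V
```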
Almost any liquid or moist object that has enough ions to be electrically conductive can serve as the electrolyte for a cell. As a novelty or science demonstration, it is possible to insert two electrodes made of different metals into a lemon, potato, etc. and generate small amounts of electricity.
A voltaic pile can be made from two coins (such as a nickel and a penny) and a piece of paper towel dipped in salt water. Such a pile generates a very low voltage but, when many are stacked in series, they can replace normal batteries for a short time.
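Cells connected in series add their emfs, which is why a stack of weak coin-and-paper cells can briefly stand in for a normal battery; a minimal sketch, where the ~0.5 V per-cell figure is an illustrative assumption rather than a measured value:

```python
# Series stack: total voltage is the sum of the individual cell emfs.
def stack_voltage(cell_emf_volts, n_cells):
    """Voltage of n identical cells connected in series."""
    return cell_emf_volts * n_cells

# A hypothetical coin-and-paper pile producing about 0.5 V per cell:
print(stack_voltage(0.5, 3))  # 1.5 -> comparable to one alkaline cell
```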
Types:
Primary and secondary batteries
Batteries are classified into primary and secondary forms:
* Primary batteries are designed to be used until exhausted of energy then discarded. Their chemical reactions are generally not reversible, so they cannot be recharged. When the supply of reactants in the battery is exhausted, the battery stops producing current and is useless.
* Secondary batteries can be recharged; that is, they can have their chemical reactions reversed by applying electric current to the cell. This regenerates the original chemical reactants, so they can be used, recharged, and used again multiple times.
Some types of primary batteries used, for example, for telegraph circuits, were restored to operation by replacing the electrodes. Secondary batteries are not indefinitely rechargeable due to dissipation of the active materials, loss of electrolyte and internal corrosion.
Primary batteries, or primary cells, can produce current immediately on assembly. These are most commonly used in portable devices that have low current drain, are used only intermittently, or are used well away from an alternative power source, such as in alarm and communication circuits where other electric power is only intermittently available. Disposable primary cells cannot be reliably recharged, since the chemical reactions are not easily reversible and active materials may not return to their original forms. Battery manufacturers recommend against attempting to recharge primary cells. In general, these have higher energy densities than rechargeable batteries, but disposable batteries do not fare well under high-drain applications with loads under 75 ohms. Common types of disposable batteries include zinc–carbon batteries and alkaline batteries.
Secondary batteries, also known as secondary cells, or rechargeable batteries, must be charged before first use; they are usually assembled with active materials in the discharged state. Rechargeable batteries are (re)charged by applying electric current, which reverses the chemical reactions that occur during discharge/use. Devices to supply the appropriate current are called chargers. The oldest form of rechargeable battery is the lead–acid battery, which is widely used in automotive and boating applications. This technology contains liquid electrolyte in an unsealed container, requiring that the battery be kept upright and the area be well ventilated to ensure safe dispersal of the hydrogen gas it produces during overcharging. The lead–acid battery is relatively heavy for the amount of electrical energy it can supply. Its low manufacturing cost and its high surge current levels make it common where its capacity (over approximately 10 Ah) is more important than weight and handling issues. A common application is the modern car battery, which can, in general, deliver a peak current of 450 amperes.
Composition
Many types of electrochemical cells have been produced, with varying chemical processes and designs, including galvanic cells, electrolytic cells, fuel cells, flow cells and voltaic piles.
A wet cell battery has a liquid electrolyte. Other names are flooded cell, since the liquid covers all internal parts, or vented cell, since gases produced during operation can escape to the air. Wet cells were a precursor to dry cells and are commonly used as a learning tool for electrochemistry. They can be built with common laboratory supplies, such as beakers, for demonstrations of how electrochemical cells work. A particular type of wet cell known as a concentration cell is important in understanding corrosion. Wet cells may be primary cells (non-rechargeable) or secondary cells (rechargeable). Originally, all practical primary batteries such as the Daniell cell were built as open-top glass jar wet cells. Other primary wet cells are the Leclanché cell, Grove cell, Bunsen cell, chromic acid cell, Clark cell, and Weston cell. The Leclanché cell chemistry was adapted to the first dry cells. Wet cells are still used in automobile batteries and in industry for standby power for switchgear, telecommunication or large uninterruptible power supplies, but in many places batteries with gel cells have been used instead. These applications commonly use lead–acid or nickel–cadmium cells. Molten salt batteries are primary or secondary batteries that use a molten salt as electrolyte. They operate at high temperatures and must be well insulated to retain heat.
A dry cell uses a paste electrolyte, with only enough moisture to allow current to flow. Unlike a wet cell, a dry cell can operate in any orientation without spilling, as it contains no free liquid, making it suitable for portable equipment. By comparison, the first wet cells were typically fragile glass containers with lead rods hanging from the open top and needed careful handling to avoid spillage. Lead–acid batteries did not achieve the safety and portability of the dry cell until the development of the gel battery. A common dry cell is the zinc–carbon battery, sometimes called the dry Leclanché cell, with a nominal voltage of 1.5 volts, the same as the alkaline battery (since both use the same zinc–manganese dioxide combination). A standard dry cell comprises a zinc anode, usually in the form of a cylindrical pot, with a carbon cathode in the form of a central rod. The electrolyte is ammonium chloride in the form of a paste next to the zinc anode. The remaining space between the electrolyte and carbon cathode is taken up by a second paste consisting of ammonium chloride and manganese dioxide, the latter acting as a depolariser. In some designs, the ammonium chloride is replaced by zinc chloride.
A reserve battery can be stored unassembled (unactivated and supplying no power) for a long period (perhaps years). When the battery is needed, then it is assembled (e.g., by adding electrolyte); once assembled, the battery is charged and ready to work. For example, a battery for an electronic artillery fuze might be activated by the impact of firing a gun. The acceleration breaks a capsule of electrolyte that activates the battery and powers the fuze's circuits. Reserve batteries are usually designed for a short service life (seconds or minutes) after long storage (years). A water-activated battery for oceanographic instruments or military applications becomes activated on immersion in water.
On 28 February 2017, the University of Texas at Austin issued a press release about a new type of solid-state battery, developed by a team led by lithium-ion battery inventor John Goodenough, "that could lead to safer, faster-charging, longer-lasting rechargeable batteries for handheld mobile devices, electric cars and stationary energy storage". The solid-state battery is also said to have "three times the energy density", increasing its useful life in electric vehicles, for example, as well as a much longer cycle life. It should also be more ecologically sound, since the technology uses less expensive, earth-friendly materials such as sodium extracted from seawater.
Sony has developed a biological battery that generates electricity from sugar in a way that is similar to the processes observed in living organisms. The battery generates electricity through the use of enzymes that break down carbohydrates.
The sealed valve-regulated lead–acid battery (VRLA battery) is popular in the automotive industry as a replacement for the lead–acid wet cell. The VRLA battery uses an immobilized sulfuric acid electrolyte, reducing the chance of leakage and extending shelf life. The two types are:
* Gel batteries (or "gel cell") use a semi-solid electrolyte.
* Absorbed Glass Mat (AGM) batteries absorb the electrolyte in a special fiberglass matting.
Other portable rechargeable batteries include several sealed "dry cell" types, that are useful in applications such as mobile phones and laptop computers. Cells of this type (in order of increasing power density and cost) include nickel–cadmium (NiCd), nickel–zinc (NiZn), nickel–metal hydride (NiMH), and lithium-ion (Li-ion) cells. Li-ion has by far the highest share of the dry cell rechargeable market. NiMH has replaced NiCd in most applications due to its higher capacity, but NiCd remains in use in power tools, two-way radios, and medical equipment.
In the 2000s, developments include batteries with embedded electronics such as USBCELL, which allows charging an AA battery through a USB connector, nanoball batteries that allow for a discharge rate about 100x greater than current batteries, and smart battery packs with state-of-charge monitors and battery protection circuits that prevent damage on over-discharge. Low self-discharge (LSD) chemistry allows secondary cells to be shipped pre-charged and to retain most of their charge until use.
Lithium–sulfur batteries were used on the longest and highest solar-powered flight.
Consumer and industrial grades
Batteries of all types are manufactured in consumer and industrial grades. Costlier industrial-grade batteries may use chemistries that provide higher power-to-size ratio, have lower self-discharge and hence longer life when not in use, more resistance to leakage and, for example, ability to handle the high temperature and humidity associated with medical autoclave sterilization.
Additional Information:
Battery
A battery is a device that stores energy and then discharges it by converting chemical energy into electricity. Typical batteries most often produce electricity by chemical means through the use of one or more electrochemical cells. Many different materials can and have been used in batteries, but the common battery types are alkaline, lithium-ion, lithium-polymer, and nickel-metal hydride. Batteries can be connected to each other in a series circuit or a parallel circuit.
There is a wide variety of batteries that are available for purchase, and these different types of batteries are used in different devices. Large batteries are used to start cars, while much smaller batteries can power hearing aids. Overall, batteries are extremely important in everyday life.
Cells
A cell is a single unit that produces electricity through some method. Generally speaking, cells generate power through a thermal, chemical or optical process.
A typical cell has two terminals (referred to as electrodes) immersed in a chemical (referred to as the electrolyte). The two electrodes are separated by a porous wall or bridge which allows electric charge to pass from one side to the other through the electrolyte. During discharge, the anode (the negative terminal) releases electrons through oxidation, while the cathode (the positive terminal) accepts electrons through reduction. This exchange of electrons allows a difference in potential, or voltage, to be developed between the two terminals, allowing electricity to flow.
There can be a vast number of cells in a battery, from a single cell in an AA battery, to more than 7,100 cells in the 85 kWh Tesla Model S battery.
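Dividing pack energy by cell count gives a rough per-cell figure; a sketch using the numbers above (this ignores pack overhead, which in practice reduces the usable energy per cell):

```python
# Rough per-cell energy for an 85 kWh pack with roughly 7,100 cells.
pack_wh = 85_000   # 85 kWh expressed in watt-hours
n_cells = 7_100
print(round(pack_wh / n_cells, 1))  # about 12.0 Wh per cell
```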
Primary cells ("dry")
In these cells a chemical action between the electrodes and electrolyte causes a permanent change, meaning they are not rechargeable. These batteries are single-use and generate more waste, since they are discarded after a relatively short service life.
Secondary cells ("wet")
In this type of cell (referred to as wet because it uses a liquid electrolyte), passing a current through the cell in the direction opposite to normal discharge causes the chemical action to run in reverse, effectively restoring the reactants; this is what makes these cells rechargeable. These batteries can be more expensive to purchase but generate less waste, as they can be used many times.
Battery Capacity
Batteries are often rated in terms of their output voltage and capacity. The capacity, given in Ah (ampere-hours), measures how much charge a battery can deliver, and therefore how long it will last at a given current draw:
* A battery with a capacity of 1 Ah will last for one hour operating at 1 A.
Batteries can also be rated by their energy capacity. This is either done in watt-hours or kilowatt-hours.
* A battery with a capacity of 1 kWh will last for one hour while outputting 1 kW of electricity.
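Both ratings reduce to the same runtime arithmetic, hours = capacity / draw; a minimal sketch:

```python
# Runtime from capacity: Ah / A for charge ratings, kWh / kW for energy ratings.
def runtime_hours(capacity, draw):
    """Hours of operation for a given capacity and constant draw."""
    return capacity / draw

print(runtime_hours(1.0, 1.0))  # 1 Ah at 1 A     -> 1.0 hour
print(runtime_hours(2.5, 0.5))  # 2.5 Ah at 0.5 A -> 5.0 hours
print(runtime_hours(1.0, 0.2))  # 1 kWh at 0.2 kW -> 5.0 hours
```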
2309) Precipitation (chemistry)
Gist
In an aqueous solution, precipitation is the "sedimentation of a solid material (a precipitate) from a liquid solution". The solid formed is called the precipitate. In case of an inorganic chemical reaction leading to precipitation, the chemical reagent causing the solid to form is called the precipitant.
Summary
Chemical precipitation is the most common technology used in removing dissolved (ionic) metals from solutions, such as process wastewaters containing toxic metals. The ionic metals are converted to an insoluble form (particle) by the chemical reaction between the soluble metal compounds and the precipitating reagent. The particles formed by this reaction are removed from solution by settling and/or filtration. The unit operations typically required in this technology include neutralization, precipitation, coagulation/flocculation, solids/liquid separation, and dewatering.
The effectiveness of a chemical precipitation process is dependent on several factors, including the type and concentration of ionic metals present in solution, the precipitant used, the reaction conditions (especially the pH of the solution), and the presence of other constituents that may inhibit the precipitation reaction.
The most widely used chemical precipitation process is hydroxide precipitation (also referred to as precipitation by pH), in which metal hydroxides are formed by using calcium hydroxide (lime) or sodium hydroxide (caustic) as the precipitant. Each dissolved metal has a distinct pH value at which the optimum hydroxide precipitation occurs—from 7.5 for chromium to 11.0 for cadmium. Metal hydroxides are amphoteric, which means that they are increasingly soluble at both low and high pH values. Therefore, the optimum pH for precipitation of one metal may cause another metal to solubilize or start to go back into solution.
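The pH trade-off described above can be sketched as a simple lookup; only the chromium and cadmium optima come from the text, and real treatment systems tabulate many more metals:

```python
# Optimum hydroxide-precipitation pH per dissolved metal (values from the text).
OPTIMUM_PH = {"chromium": 7.5, "cadmium": 11.0}

def ph_spread(metals):
    """pH spread for a mix of metals; a large spread means no single
    operating pH precipitates every metal optimally."""
    values = [OPTIMUM_PH[m] for m in metals]
    return max(values) - min(values)

print(ph_spread(["chromium", "cadmium"]))  # 3.5 -> staged treatment needed
```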
Details
The clear liquid remaining above the precipitated or the centrifuged solid phase is also called the supernate or supernatant.
The notion of precipitation can also be extended to other domains of chemistry (organic chemistry and biochemistry) and even be applied to the solid phases (e.g. metallurgy and alloys) when solid impurities segregate from a solid phase.
Supersaturation
The precipitation of a compound may occur when its concentration exceeds its solubility. This can be caused by temperature changes, solvent evaporation, or the mixing of solvents. Precipitation occurs more rapidly from a strongly supersaturated solution.
The formation of a precipitate can be caused by a chemical reaction. When a barium chloride solution reacts with sulfuric acid, a white precipitate of barium sulfate is formed. When a potassium iodide solution reacts with a lead(II) nitrate solution, a yellow precipitate of lead(II) iodide is formed.
Inorganic chemistry
Precipitate formation is useful in the detection of the type of cation in a salt. To do this, an alkali first reacts with the unknown salt to produce a precipitate that is the hydroxide of the unknown salt. To identify the cation, the color of the precipitate and its solubility in excess alkali are noted. Similar processes are often used in sequence – for example, a barium nitrate solution will react with sulfate ions to form a solid barium sulfate precipitate, indicating that it is likely that sulfate ions are present.
A common example of precipitation from aqueous solution is that of silver chloride. When silver nitrate (AgNO3) is added to a solution of potassium chloride (KCl) the precipitation of a white solid (AgCl) is observed.
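The same solubility-product reasoning predicts whether mixing two solutions will precipitate at all: a solid forms only when the ion product Q exceeds Ksp. A minimal sketch for the AgCl case (Ksp ≈ 1.8 × 10^-10 at 25 °C):

```python
def will_precipitate(conc_ag: float, conc_cl: float,
                     ksp_agcl: float = 1.8e-10) -> bool:
    """AgCl precipitates when the ion product Q = [Ag+][Cl-]
    exceeds the solubility product Ksp."""
    q = conc_ag * conc_cl
    return q > ksp_agcl

# Mixing equal volumes of 0.1 M AgNO3 and 0.1 M KCl halves each concentration
print(will_precipitate(0.05, 0.05))  # Q = 2.5e-3 >> Ksp -> True
```

At these concentrations Q exceeds Ksp by about seven orders of magnitude, which is why the white solid appears immediately on mixing.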
Colloidal suspensions
Without sufficient attraction forces (e.g., Van der Waals force) to aggregate the solid particles together and to remove them from solution by gravity (settling), they remain in suspension and form colloids. Sedimentation can be accelerated by high speed centrifugation. The compact mass thus obtained is sometimes referred to as a 'pellet'.
Digestion and precipitates ageing
Digestion, or precipitate ageing, happens when a freshly formed precipitate is left, usually at a higher temperature, in the solution from which it precipitates. It results in purer and larger recrystallized particles. The physico-chemical process underlying digestion is called Ostwald ripening.
Organic chemistry
While precipitation reactions can be used for making pigments, removing ions from solution in water treatment, and in classical qualitative inorganic analysis, precipitation is also commonly used to isolate the products of an organic reaction during workup and purification operations. Ideally, the product of the reaction is insoluble in the solvent used for the reaction. Thus, it precipitates as it is formed, preferably forming pure crystals. An example of this would be the synthesis of porphyrins in refluxing propionic acid. By cooling the reaction mixture to room temperature, crystals of the porphyrin precipitate, and are collected by filtration on a Büchner funnel.
Precipitation may also occur when an antisolvent (a solvent in which the product is insoluble) is added, drastically reducing the solubility of the desired product. Thereafter, the precipitate may be easily separated by decanting, filtration, or by centrifugation. An example would be the synthesis of Cr(III) tetraphenylporphyrin chloride: water is added to the dimethylformamide (DMF) solution in which the reaction occurred, and the product precipitates. Precipitation is useful in purifying many other products: e.g., crude bmim-Cl is taken up in acetonitrile, and dropped into ethyl acetate, where it precipitates.
Biochemistry
Protein purification and separation can be performed by precipitation by changing the nature of the solvent or the value of its relative permittivity (e.g., by replacing water with ethanol), or by increasing the ionic strength of the solution. As proteins have complex tertiary and quaternary structures due to their specific folding and various weak intermolecular interactions (e.g., hydrogen bonds), these superstructures can be modified and the proteins denatured and precipitated. Another important application of an antisolvent is in ethanol precipitation of DNA.
Metallurgy and alloys
In solid phases, precipitation occurs if the concentration of one solid is above the solubility limit in the host solid, due to e.g. rapid quenching or ion implantation, and the temperature is high enough that diffusion can lead to segregation into precipitates. Precipitation in solids is routinely used to synthesize nanoclusters.
In metallurgy, precipitation from a solid solution is also a way to strengthen alloys.
Precipitation of ceramic phases in metallic alloys, such as zirconium hydrides in the zircaloy cladding of nuclear fuel pins, can also render metallic alloys brittle and lead to their mechanical failure. Correctly mastering the precise temperature and pressure conditions when cooling down spent nuclear fuel is therefore essential to avoid damaging its cladding and to preserve the integrity of the spent fuel elements over the long term in dry storage casks and under geological disposal conditions.
Industrial processes
Hydroxide precipitation is probably the most widely used industrial precipitation process in which metal hydroxides are formed by adding calcium hydroxide (slaked lime) or sodium hydroxide (caustic soda) as precipitant.
History
Powders derived from different precipitation processes have also historically been known as 'flowers'.
Additional Information
Precipitation is the formation of a solid in a solution during a chemical reaction. When the reaction occurs, the solid formed is called the precipitate, and the liquid remaining above the solid is called the supernate.
Uses of precipitation reactions
Precipitation reactions can be used for making pigments, removing salts from water in water treatment, and for qualitative chemical analysis.
This effect is useful in many industrial and scientific applications whereby a chemical reaction may produce a solid that can be collected from the solution by various methods (e.g. filtration, decanting, centrifugation). Precipitation from a solid solution is also a useful way to strengthen alloys; this process is known as precipitation hardening.
Mechanism
Precipitation can occur when an insoluble substance is formed in the solution due to a chemical reaction or when the solution has been supersaturated by a compound. The formation of a precipitate is a sign of a chemical change. In most situations, the solid forms ("falls") out of solution and sinks to the bottom of the container (though it will float if it is less dense than the solvent, or form a suspension).
The solid may reach the bottom of a container by means of settling, sedimentation, or centrifugation.
An important stage of the precipitation process is the onset of nucleation. The creation of a hypothetical solid particle includes the formation of an interface, which requires some energy based on the relative surface energy of the solid and the solution. If this energy is not available, and no suitable nucleation surface is present, the solution remains supersaturated and no precipitate forms.
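This energy argument is classical nucleation theory: a cluster must exceed a critical radius r* = 2γv/(kT ln S) before further growth lowers the free energy. A sketch with assumed, order-of-magnitude values for the interfacial energy γ and molecular volume v:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def critical_radius(gamma: float, v_mol: float, s: float,
                    t: float = 298.15) -> float:
    """Classical nucleation theory: r* = 2*gamma*v / (kT * ln S).
    gamma: solid-solution interfacial energy (J/m^2),
    v_mol: volume per molecule in the solid (m^3),
    s: supersaturation ratio C/C_sat (must exceed 1)."""
    if s <= 1:
        raise ValueError("no nucleation driving force unless S > 1")
    return 2 * gamma * v_mol / (K_B * t * math.log(s))

# Assumed illustrative values: gamma ~ 0.1 J/m^2, v ~ 3e-29 m^3
for s in (1.5, 5, 50):
    print(f"S = {s}: r* ~ {critical_radius(0.1, 3e-29, s):.2e} m")
```

Higher supersaturation shrinks the critical nucleus from a few nanometers toward sub-nanometer sizes, which is why precipitation occurs more rapidly from strongly supersaturated solutions.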
Cation Sensitivity
Precipitate formation is useful in the detection of the type of cation in a salt. To do this, an alkali first reacts with the unknown salt to produce a precipitate which is the hydroxide of the unknown salt. To identify the cation, the color of the precipitate and its solubility in excess alkali are noted. Similar processes are often used to separate chemically similar elements, such as the alkaline earth metals.
Digestion
Digestion, or precipitate ageing, happens when a freshly formed precipitate is left, usually at a higher temperature, in the solution from which it precipitated. It results in purer and larger particles. The physico-chemical process underlying digestion is called Ostwald ripening.
Coprecipitation
Coprecipitation is the carrying down by a precipitate of substances normally soluble under the conditions employed. It is an important issue in chemical analysis, where it is often undesirable, but in some cases it can be exploited. In gravimetric analysis, it is a problem because undesired impurities often coprecipitate with the analyte, resulting in excess mass. On the other hand, in the analysis of trace elements, as is often the case in radiochemistry, coprecipitation is often the only way of separating an element.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Online
2310) Crystallization
Gist
Crystallization, or crystallisation, is the process of atoms or molecules arranging into a well-defined, rigid crystal lattice in order to minimize their energetic state. The smallest entity of a crystal lattice is called a unit cell, which can accept atoms or molecules to grow a macroscopic crystal.
Summary
Crystallization is not only a natural process but also a chemical separation method in which solid crystals form from a homogeneous solution. The solution must be supersaturated for crystallization to occur: it must contain a higher concentration of dissolved solute (molecules or ions) than would be present at equilibrium in a saturated solution. Supersaturation can be achieved by several methods commonly used in industry, such as chemical reaction, cooling of the solution, changing the pH, and addition of a second solvent to decrease the solute solubility. Other techniques, such as solvent evaporation, may also be employed.
Nucleation and crystal growth are the two major steps in the crystallization process. In nucleation, homogeneously dispersed solute molecules gather into clusters a few nanometers (nm) across, raising the solute concentration in a small region. Clusters that are stable under the current operating conditions constitute the nuclei; unstable clusters redissolve into the solution. To develop stable nuclei, clusters must reach a critical size, which depends on working factors such as irregularities, temperature, and supersaturation. It is in the nucleation stage that the periodic and definite arrangement of atoms defines the structure of the crystal. "Crystal structure" specifies the internal arrangement of the atoms, not the macroscopic size and shape of the crystal.
Crystal growth is the subsequent enlargement of nuclei that have reached the critical cluster size. As long as supersaturation persists, nucleation and growth continue to occur simultaneously. Supersaturation is essential to the crystallization process, so the nucleation rate and crystal growth rate are determined by the degree of supersaturation available in the solution. Whether nucleation or growth predominates, which depends on the solution conditions, determines the sizes and shapes of the resulting crystals. Once the supersaturation is exhausted, the solid–liquid system reaches equilibrium and crystallization is complete, unless conditions are modified to supersaturate the solution again.
Details
Crystallization is the process by which solids form, where the atoms or molecules are highly organized into a structure known as a crystal. Some ways by which crystals form are precipitating from a solution, freezing, or more rarely deposition directly from a gas. Attributes of the resulting crystal depend largely on factors such as temperature, air pressure, cooling rate, and in the case of liquid crystals, time of fluid evaporation.
Crystallization occurs in two major steps. The first is nucleation, the appearance of a crystalline phase from either a supercooled liquid or a supersaturated solvent. The second step is known as crystal growth, which is the increase in the size of particles and leads to a crystal state. An important feature of this step is that loose particles form layers at the crystal's surface and lodge themselves into open inconsistencies such as pores, cracks, etc.
The majority of minerals and organic molecules crystallize easily, and the resulting crystals are generally of good quality, i.e. without visible defects. However, larger biochemical particles, like proteins, are often difficult to crystallize. The ease with which molecules will crystallize strongly depends on the intensity of either atomic forces (in the case of mineral substances), intermolecular forces (organic and biochemical substances) or intramolecular forces (biochemical substances).
Crystallization is also a chemical solid–liquid separation technique, in which mass transfer of a solute from the liquid solution to a pure solid crystalline phase occurs. In chemical engineering, crystallization occurs in a crystallizer. Crystallization is therefore related to precipitation, although the result is not amorphous or disordered, but a crystal.
Process
The crystallization process consists of two major events, nucleation and crystal growth which are driven by thermodynamic properties as well as chemical properties. Nucleation is the step where the solute molecules or atoms dispersed in the solvent start to gather into clusters, on the microscopic scale (elevating solute concentration in a small region), that become stable under the current operating conditions. These stable clusters constitute the nuclei. Therefore, the clusters need to reach a critical size in order to become stable nuclei. Such critical size is dictated by many different factors (temperature, supersaturation, etc.). It is at the stage of nucleation that the atoms or molecules arrange in a defined and periodic manner that defines the crystal structure – note that "crystal structure" is a special term that refers to the relative arrangement of the atoms or molecules, not the macroscopic properties of the crystal (size and shape), although those are a result of the internal crystal structure.
The crystal growth is the subsequent size increase of the nuclei that succeed in achieving the critical cluster size. Crystal growth is a dynamic process occurring in equilibrium where solute molecules or atoms precipitate out of solution, and dissolve back into solution. Supersaturation is one of the driving forces of crystallization, as the solubility of a species is an equilibrium process quantified by Ksp. Depending upon the conditions, either nucleation or growth may be predominant over the other, dictating crystal size.
Many compounds have the ability to crystallize with some having different crystal structures, a phenomenon called polymorphism. Certain polymorphs may be metastable, meaning that although it is not in thermodynamic equilibrium, it is kinetically stable and requires some input of energy to initiate a transformation to the equilibrium phase. Each polymorph is in fact a different thermodynamic solid state and crystal polymorphs of the same compound exhibit different physical properties, such as dissolution rate, shape (angles between facets and facet growth rates), melting point, etc. For this reason, polymorphism is of major importance in industrial manufacture of crystalline products. Additionally, crystal phases can sometimes be interconverted by varying factors such as temperature, such as in the transformation of anatase to rutile phases of titanium dioxide.
In nature
Snowflakes are a very well-known example, where subtle differences in crystal growth conditions result in different geometries.
There are many examples of natural processes that involve crystallization.
Geological time scale process examples include:
* Natural (mineral) crystal formation;
* Stalactite/stalagmite and ring formation;
Human time scale process examples include:
* Snowflake formation;
* Honey crystallization (nearly all types of honey crystallize).
Methods
Crystal formation can be divided into two types, where the first type of crystals are composed of a cation and anion, also known as a salt, such as sodium acetate. The second type of crystals are composed of uncharged species, for example menthol.
Crystal formation can be achieved by various methods, such as: cooling, evaporation, addition of a second solvent to reduce the solubility of the solute (technique known as antisolvent or drown-out), solvent layering, sublimation, changing the cation or anion, as well as other methods.
The formation of a supersaturated solution does not guarantee crystal formation, and often a seed crystal or scratching the glass is required to form nucleation sites.
A typical laboratory technique for crystal formation is to dissolve the solid in a solution in which it is partially soluble, usually at high temperatures to obtain supersaturation. The hot mixture is then filtered to remove any insoluble impurities. The filtrate is allowed to slowly cool. Crystals that form are then filtered and washed with a solvent in which they are not soluble, but is miscible with the mother liquor. The process is then repeated to increase the purity in a technique known as recrystallization.
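The ideal yield of such a recrystallization can be estimated from the hot and cold solubilities alone. A minimal sketch, assuming ideal behavior (no losses to washing or occlusion) and hypothetical solubility figures:

```python
def recrystallization_yield(mass_g: float, sol_hot: float,
                            sol_cold: float, solvent_ml: float) -> float:
    """Ideal mass (g) recovered on cooling: what dissolved at the hot
    solubility minus what stays dissolved at the cold solubility.
    Solubilities are given in g per 100 mL of solvent."""
    dissolved = min(mass_g, sol_hot * solvent_ml / 100.0)
    stays_dissolved = sol_cold * solvent_ml / 100.0
    return max(0.0, dissolved - stays_dissolved)

# Hypothetical example: 10 g of solute, solubility 25 g/100 mL hot
# vs 2 g/100 mL cold, in 50 mL of solvent
print(recrystallization_yield(10.0, 25.0, 2.0, 50.0))  # -> 9.0 g recovered
```

The mass left in the cold mother liquor is the unavoidable loss of the technique; using the minimum amount of hot solvent keeps that loss small.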
For biological macromolecules, whose intact three-dimensional structure depends on solvent channels remaining present, microbatch crystallization under oil and vapor diffusion have been the common methods.
Additional Information
Crystallization is the process of forming crystals by a substance passing from a gas or liquid to the solid state (sublimation or fusion) or coming out of solution (precipitation or evaporation). In the fusion method a solid is melted by heating, and crystals form as the melt cools and solidifies. Ice crystals and monoclinic sulfur are formed in this way. Crystallization is an important laboratory and industrial technique for purifying and separating compounds.
Crystals grow by precipitation out of a supersaturated solution or a cooling melt. The atoms or ions coalesce into tiny "seeds" around which further particles build up the lattice layers. If alum powder is dissolved in a little hot water with a drop of sulfuric acid and placed in a jar, alum crystals will grow as the solution cools. Slower cooling gives larger crystals. Cooling molten sulfur causes the formation of sulfur crystals.
2311) Lithotripsy
Gist
Lithotripsy is a procedure that uses shock waves to break up stones in the kidney and parts of the ureter (tube that carries urine from your kidneys to your bladder). After the procedure, the tiny pieces of stones pass out of your body in your urine.
Summary
Lithotripsy is a type of medical procedure. It uses shock waves or a laser to break down stones in the kidney, gallbladder, or ureters. The main types are extracorporeal shock wave lithotripsy (ESWL) and laser lithotripsy.
The remaining particles of small stones will exit the body when the person urinates.
It is common to develop stones in the kidneys, gallbladder, or ureters.
Sometimes, stones are small enough to leave the body during urination without a person noticing. However, large kidney or ureter stones can cause pain and block the flow of urine.
If stones do not pass, they can damage the kidneys and urinary tract. When medications do not help, a lithotripsy procedure can break the stones down into smaller pieces so that they can come out in the urine.
The two main types of lithotripsy are extracorporeal shock wave lithotripsy (ESWL) and laser lithotripsy. Laser lithotripsy is sometimes known as flexible ureteroscopy and laser lithotripsy (FURSL) because doctors use a tool called a ureteroscope.
Both procedures can help eliminate bothersome stones quickly and effectively. The type of treatment a doctor recommends will depend on a range of factors, such as the type of stones the person has and their overall health.
ESWL
ESWL uses shock waves to break down stones.
During this procedure, a doctor will use a machine called a lithotripter to aim sound waves directly at the stones through the body.
The sound waves break down the stones into small pieces. They are designed to affect the stone, but they can also harm other tissues in the body if the doctor does not carefully administer and monitor them.
The procedure takes about 1 hour and usually happens in a hospital. In most cases, a person can go home the same day. After the treatment, they should pass the stone particles over several days or weeks through urination.
It is important to note that there can be complications with this treatment. One complication can be bleeding due to damage to the kidney.
FURSL
This procedure involves using an endoscope to treat stones in the ureter. An endoscope is a flexible tube with a light and camera attached that helps a doctor see inside an organ or body cavity.
The doctor can see the stones using the ureteroscope and use a laser to break them down. The procedure takes about 30 minutes, and most people can go home the same day.
However, the procedure may take up to 2 hours depending on the number of stones the doctor needs to remove and their hardness.
The broken stone fragments should pass easily through urine in the days and weeks following the procedure.
Success rates
The success of any one method will depend on stone composition, density, size, and location.
One systematic review found that FURSL had a success rate of 93.7% for stones around 2.5 centimeters in size. The study reported that 10.1% of people experienced some complications, however.
How to prepare
Before the lithotripsy procedure, a doctor will run tests to determine the number of stones a person has, as well as their size and location.
It is likely that the doctor will use a non-contrast CT scan to diagnose kidney stones because this test is highly sensitive and specific.
They will also use a standard abdominal X-ray, known as a kidney, ureter, bladder (KUB) X-ray, to find calcium-containing stones.
A person should let the doctor know if they are taking any medications in advance. Before the procedure, they may need to stop taking certain medications, including blood thinners and over-the-counter pain relievers such as aspirin and ibuprofen. This is because these can interfere with the ability of the blood to clot.
Blood clotting is essential to stop any bleeding that might occur during or after the procedure.
Lithotripsy usually takes place under general anesthesia, which means that the person will be asleep and will not feel any pain. Typically, people will need to fast for 8–12 hours before receiving anesthesia.
Anyone who is undergoing lithotripsy should also plan to have someone drive them home, as anesthesia can cause drowsiness and nausea for several hours after the procedure.
Details
Lithotripsy is a procedure involving the physical destruction of hardened masses like kidney stones, bezoars or gallstones, which may be done non-invasively. The term is derived from the Greek words meaning "breaking (or pulverizing) stones".
Uses
Lithotripsy is a non-invasive procedure used to break up hardened masses like kidney stones, bezoars or gallstones.
Contraindications
Commonly cited absolute contraindications to shock wave lithotripsy (SWL) include pregnancy, coagulopathy or use of platelet aggregation inhibitors, aortic aneurysms, severe untreated hypertension, and untreated urinary tract infections.
Techniques
* Extracorporeal shock wave therapy (lithotripsy)
* Intracorporeal (endoscopic lithotripsy):
** Laser lithotripsy: effective for larger stones (>2 cm) with good stone-free and complication rates.
** Electrohydraulic lithotripsy
** Mechanical lithotripsy
** Ultrasonic lithotripsy: safer for small stones (<10 mm)
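Purely as an illustration of the size thresholds quoted in this section (not clinical guidance: real modality selection also weighs stone composition, density, location, and patient factors), the size-based triage could be sketched as:

```python
def suggest_technique(stone_mm: float) -> str:
    """Toy mapping from stone size to lithotripsy technique, using only
    the thresholds quoted in the text: ultrasonic for small stones
    (<10 mm), ESWL for 4 mm-2 cm, laser for larger stones (>2 cm)."""
    if stone_mm < 10:
        return "ultrasonic lithotripsy"
    if stone_mm <= 20:
        return "ESWL"
    return "laser lithotripsy"

print(suggest_technique(6))   # -> ultrasonic lithotripsy
print(suggest_technique(25))  # -> laser lithotripsy
```

The overlap between the quoted ranges (ESWL 4 mm–2 cm, ultrasonic <10 mm) is real: for mid-sized stones several modalities are viable, and the choice falls to the other factors above.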
History
Surgery was the only method to remove stones too large to pass until French surgeon and urologist Jean Civiale in 1832 invented a surgical instrument (the lithotrite) to crush stones inside the urinary bladder without having to open the abdomen. To remove a calculus, Civiale inserted his instrument through the urethra and bored holes in the stone. Afterwards, he crushed it with the same instrument and aspirated the resulting fragments or let them flow normally with urine.
Lithotripsy replaced using lithotrites as the most common treatment beginning in the mid 1980s. In extracorporeal shock wave lithotripsy (ESWL), external shockwaves are focused at the stone to pulverize it. Ureteroscopic methods use a rigid or flexible scope to reach the stone and direct mechanical or light energy at it. Endoscopy can use lasers as well as other modes of energy delivery: ultrasound or electrohydraulics.
ESWL was first used on kidney stones in 1980. It is also applied to gallstones and pancreatic stones. External shockwaves are focused and pulverize the stone which is located by imaging. The first shockwave lithotriptor approved for human use was the Dornier HM3 (human model 3) derived from a device used for testing aerospace parts. Second generation devices used piezoelectricity or electromagnetism generators. American Urological Association guidelines consider ESWL a potential primary treatment for stones between 4 mm and 2 cm.
Electrohydraulic lithotripsy is an industrial technique for fragmenting rocks by using electrodes to create shockwaves. It was applied to bile duct stones in 1975. It can damage tissue and is mostly used in biliary tract specialty centers. Pneumatic mechanical devices have been used with endoscopes, commonly for large and hard stones.
Laser lithotripsy was introduced in the 1980s. Pulsed dye lasers emit 504 nm (cyan-colored) light that is delivered to the stone by optical fibers through a scope. Holmium:YAG lasers were developed more recently and produce smaller fragments.
Additional Information
Lithotripsy is a noninvasive (the skin is not pierced) procedure used to treat kidney stones that are too large to pass through the urinary tract. Lithotripsy treats kidney stones by sending focused ultrasonic energy or shock waves directly to the stone, which is first located with fluoroscopy (a type of X-ray "movie") or ultrasound (high-frequency sound waves). The shock waves break a large stone into smaller stones that will pass through the urinary system. Lithotripsy allows persons with certain types of stones in the urinary system to avoid an invasive surgical procedure for stone removal. In order to aim the waves, your doctor must be able to see the stones under X-ray or ultrasound.
The introduction of lithotripsy in the early 1980s revolutionized the treatment of patients with kidney stone disease. Patients who once required major surgery to remove their stones could be treated with lithotripsy, and not even require an incision. As such, lithotripsy is the only non-invasive treatment for kidney stones, meaning no incision or internal telescopic device is required.
Lithotripsy involves the administration of a series of shock waves to the targeted stone. The shock waves, which are generated by a machine called a lithotripter, are focused by x-ray onto the kidney stone. The shock waves travel into the body, through skin and tissue, reaching the stone where they break it into small fragments. For several weeks following treatment, those small fragments are passed out of the body in the urine.
In the two-plus decades since lithotripsy was first performed in the United States, we have learned a great deal about how different patients respond to this technology. It turns out that we can identify some patients who will be unlikely to experience a successful outcome following lithotripsy, whereas we may predict that other patients will be more likely to clear their stones. Although many of these parameters are beyond anyone's control, such as the stone size and location in the kidney, there are other maneuvers that can be done during lithotripsy treatment that may positively influence the outcome of the procedure. At the Brady Urological Institute, our surgeons have researched techniques to make lithotripsy safer and more effective, and we incorporate our own findings as well as those of other leading groups to provide a truly state of the art treatment.
2312) Amplifier
Gist
An amplifier is an electronic device that increases the voltage, current, or power of a signal. Amplifiers are used in wireless communications and broadcasting, and in audio equipment of all kinds. They can be categorized as either weak-signal amplifiers or power amplifiers.
An amplifier is an electronic device which amplifies the input power of a signal. The main types are: voltage amplifiers, which increase the input voltage; current amplifiers, which increase the input current; and power amplifiers, which increase the input power.
Summary
An amplifier is an electronic device that increases the voltage, current, or power of a signal. Amplifiers are used in wireless communications and broadcasting, and in audio equipment of all kinds. They can be categorized as either weak-signal amplifiers or power amplifiers.
Types of amplifiers
* Weak-signal amplifiers are used primarily in wireless receivers. They are also employed in acoustic pickups, audio tape players, and compact disc players. A weak-signal amplifier is designed to deal with exceedingly small input signals, in some cases measuring only a few nanovolts (units of 10^-9 volt). Such amplifiers must generate minimal internal noise while increasing the signal voltage by a large factor. The most effective device for this application is the field-effect transistor. The specification that denotes the effectiveness of a weak-signal amplifier is sensitivity, defined as the number of microvolts (units of 10^-6 volt) of signal input that produce a certain ratio of signal output to noise output (usually 10 to 1).
* Power amplifiers are used in wireless transmitters, broadcast transmitters, and hi-fi audio equipment. The most frequently used device for power amplification is the bipolar transistor. However, vacuum tubes, once considered obsolete, are becoming increasingly popular, especially among musicians. Many professional musicians believe that the vacuum tube (known as a "valve" in England) provides superior fidelity.
Two important considerations in power amplification are power output and efficiency. Power output is measured in watts or kilowatts. Efficiency is the ratio of signal power output to total power input (wattage demanded of the power supply or battery). This value is always less than 1. It is typically expressed as a percentage. In audio applications, power amplifiers are 30 to 50 percent efficient. In wireless communications and broadcasting transmitters, efficiency ranges from about 50 to 70 percent. In hi-fi audio power amplifiers, distortion is also an important factor. This is a measure of the extent to which the output waveform is a faithful replication of the input waveform. The lower the distortion, in general, the better the fidelity of the output sound.
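The efficiency and gain figures discussed here are simple ratios; a short sketch of the arithmetic (the example wattages are assumed):

```python
import math

def efficiency_pct(p_signal_out_w: float, p_supply_w: float) -> float:
    """Efficiency = signal power delivered / total power drawn from
    the supply, expressed as a percentage (always below 100)."""
    return 100.0 * p_signal_out_w / p_supply_w

def power_gain_db(p_out_w: float, p_in_w: float) -> float:
    """Power gain in decibels: 10 * log10(Pout / Pin)."""
    return 10.0 * math.log10(p_out_w / p_in_w)

# A 40 W audio stage drawing 100 W from its supply is 40 % efficient,
# squarely in the 30-50 % range quoted for audio power amplifiers
print(efficiency_pct(40.0, 100.0))  # -> 40.0
print(power_gain_db(40.0, 0.004))   # 4 mW in, 40 W out -> 40.0 dB
```

The decibel form is convenient because gains of cascaded amplifier stages simply add.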
Details
An amplifier, electronic amplifier or (informally) amp is an electronic device that can increase the magnitude of a signal (a time-varying voltage or current). It is a two-port electronic circuit that uses electric power from a power supply to increase the amplitude (magnitude of the voltage or current) of a signal applied to its input terminals, producing a proportionally greater amplitude signal at its output. The amount of amplification provided by an amplifier is measured by its gain: the ratio of output voltage, current, or power to input. An amplifier is defined as a circuit that has a power gain greater than one.
An amplifier can be either a separate piece of equipment or an electrical circuit contained within another device. Amplification is fundamental to modern electronics, and amplifiers are widely used in almost all electronic equipment. Amplifiers can be categorized in different ways. One is by the frequency of the electronic signal being amplified. For example, audio amplifiers amplify signals in the audio (sound) range of less than 20 kHz, RF amplifiers amplify frequencies in the radio frequency range between 20 kHz and 300 GHz, and servo amplifiers and instrumentation amplifiers may work with very low frequencies down to direct current. Amplifiers can also be categorized by their physical placement in the signal chain; a preamplifier may precede other signal processing stages, for example, while a power amplifier is usually used after other amplifier stages to provide enough output power for the final use of the signal. The first practical electrical device which could amplify was the triode vacuum tube, invented in 1906 by Lee De Forest, which led to the first amplifiers around 1912. Today most amplifiers use transistors.
History:
Vacuum tubes
The first practical amplifying device was the triode vacuum tube, invented in 1906 by Lee De Forest, which led to the first amplifiers around 1912. Vacuum tubes were used in almost all amplifiers until the 1960s–1970s, when transistors replaced them. Today, most amplifiers use transistors, but vacuum tubes continue to be used in some applications.
The development of audio communication technology in form of the telephone, first patented in 1876, created the need to increase the amplitude of electrical signals to extend the transmission of signals over increasingly long distances. In telegraphy, this problem had been solved with intermediate devices at stations that replenished the dissipated energy by operating a signal recorder and transmitter back-to-back, forming a relay, so that a local energy source at each intermediate station powered the next leg of transmission. For duplex transmission, i.e. sending and receiving in both directions, bi-directional relay repeaters were developed starting with the work of C. F. Varley for telegraphic transmission. Duplex transmission was essential for telephony and the problem was not satisfactorily solved until 1904, when H. E. Shreeve of the American Telephone and Telegraph Company improved existing attempts at constructing a telephone repeater consisting of back-to-back carbon-granule transmitter and electrodynamic receiver pairs. The Shreeve repeater was first tested on a line between Boston and Amesbury, MA, and more refined devices remained in service for some time. After the turn of the century it was found that negative resistance mercury lamps could amplify, and were also tried in repeaters, with little success.
The development of thermionic valves, which began around 1902, provided an entirely electronic method of amplifying signals. The first practical version of such devices was the Audion triode, invented in 1906 by Lee De Forest, which led to the first amplifiers around 1912. Since the only previous device widely used to strengthen a signal was the relay used in telegraph systems, the amplifying vacuum tube was first called an electron relay. The terms amplifier and amplification, derived from the Latin amplificare (to enlarge or expand), were first used for this new capability around 1915, when triodes became widespread.
The amplifying vacuum tube revolutionized electrical technology. It made possible long-distance telephone lines, public address systems, radio broadcasting, talking motion pictures, practical audio recording, radar, television, and the first computers. For 50 years virtually all consumer electronic devices used vacuum tubes. Early tube amplifiers often had positive feedback (regeneration), which could increase gain but also make the amplifier unstable and prone to oscillation. Much of the mathematical theory of amplifiers was developed at Bell Telephone Laboratories during the 1920s to 1940s. Distortion levels in early amplifiers were high, usually around 5%, until 1934, when Harold Black developed negative feedback; this allowed the distortion levels to be greatly reduced, at the cost of lower gain. Other advances in the theory of amplification were made by Harry Nyquist and Hendrik Wade Bode.
The vacuum tube was virtually the only amplifying device, other than specialized power devices such as the magnetic amplifier and amplidyne, for 40 years. Power control circuitry used magnetic amplifiers until the latter half of the twentieth century when power semiconductor devices became more economical, with higher operating speeds. The old Shreeve electroacoustic carbon repeaters were used in adjustable amplifiers in telephone subscriber sets for the hearing impaired until the transistor provided smaller and higher quality amplifiers in the 1950s.
Transistors
The first working transistor was a point-contact transistor invented by John Bardeen and Walter Brattain in 1947 at Bell Labs, where William Shockley later invented the bipolar junction transistor (BJT) in 1948. They were followed by the invention of the metal–oxide–semiconductor field-effect transistor (MOSFET) by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959. Thanks to MOSFET scaling, the ability to shrink the device to increasingly small sizes, the MOSFET has since become the most widely used amplifying device.
The replacement of bulky electron tubes with transistors during the 1960s and 1970s created a revolution in electronics, making possible a large class of portable electronic devices, such as the transistor radio developed in 1954. Today, use of vacuum tubes is limited to some high power applications, such as radio transmitters, as well as some musical instrument and high-end audiophile amplifiers.
Beginning in the 1970s, more and more transistors were connected on a single chip thereby creating higher scales of integration (such as small-scale, medium-scale and large-scale integration) in integrated circuits. Many amplifiers commercially available today are based on integrated circuits.
For special purposes, other active elements have been used. For example, in the early days of the satellite communication, parametric amplifiers were used. The core circuit was a diode whose capacitance was changed by an RF signal created locally. Under certain conditions, this RF signal provided energy that was modulated by the extremely weak satellite signal received at the earth station.
Advances in digital electronics since the late 20th century provided new alternatives to the conventional linear-gain amplifiers by using digital switching to vary the pulse-shape of fixed amplitude signals, resulting in devices such as the Class-D amplifier.
Properties
Amplifier properties are given by parameters that include:
* Gain, the ratio between the magnitude of output and input signals
* Bandwidth, the width of the useful frequency range
* Efficiency, the ratio between the power of the output and total power consumption
* Linearity, the extent to which the proportion between input and output amplitude is the same for high amplitude and low amplitude input
* Noise, a measure of undesired noise mixed into the output
* Output dynamic range, the ratio of the largest and the smallest useful output levels
* Slew rate, the maximum rate of change of the output
* Rise time, settling time, ringing and overshoot that characterize the step response
* Stability, the ability to avoid self-oscillation
Amplifiers are described according to the properties of their inputs, their outputs, and how they relate. All amplifiers have gain, a multiplication factor that relates the magnitude of some property of the output signal to a property of the input signal. The gain may be specified as the ratio of output voltage to input voltage (voltage gain), output power to input power (power gain), or some combination of current, voltage, and power. In many cases the property of the output that varies is dependent on the same property of the input, making the gain unitless (though often expressed in decibels (dB)).
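Since gain is so often quoted in decibels, a short sketch of the conversion may help (the values used are illustrative):

```python
import math

# Gain expressed in decibels (dB). Voltage (or current) gain uses 20*log10,
# power gain uses 10*log10, because power scales with the square of voltage.

def voltage_gain_db(v_out: float, v_in: float) -> float:
    return 20.0 * math.log10(v_out / v_in)

def power_gain_db(p_out: float, p_in: float) -> float:
    return 10.0 * math.log10(p_out / p_in)

# A stage multiplying voltage by 10 gives 20 dB of voltage gain;
# a stage multiplying power by 100 also gives 20 dB of power gain.
print(voltage_gain_db(10.0, 1.0))  # 20.0
print(power_gain_db(100.0, 1.0))   # 20.0
```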
Most amplifiers are designed to be linear. That is, they provide constant gain for any normal input level and output signal. If an amplifier's gain is not linear, the output signal can become distorted. There are, however, cases where variable gain is useful. Certain signal processing applications use exponential gain amplifiers.
Amplifiers are usually designed to function well in a specific application, for example: radio and television transmitters and receivers, high-fidelity ("hi-fi") stereo equipment, microcomputers and other digital equipment, and guitar and other instrument amplifiers. Every amplifier includes at least one active device, such as a vacuum tube or transistor.
Negative feedback
Negative feedback is a technique used in most modern amplifiers to increase bandwidth, reduce distortion, and control gain. In a negative feedback amplifier, part of the output is fed back and added to the input in the opposite phase, subtracting from the input. The main effect is to reduce the overall gain of the system. However, any unwanted signals introduced by the amplifier, such as distortion, are also fed back. Since they are not part of the original input, they are added to the input in opposite phase, subtracting them from the input. In this way, negative feedback also reduces nonlinearity, distortion and other errors introduced by the amplifier. Large amounts of negative feedback can reduce errors to the point that the response of the amplifier itself becomes almost irrelevant as long as it has a large gain, and the output performance of the system (the "closed loop performance") is defined entirely by the components in the feedback loop. This technique is used particularly with operational amplifiers (op-amps).
Non-feedback amplifiers can achieve only about 1% distortion for audio-frequency signals. With negative feedback, distortion can typically be reduced to 0.001%. Noise, even crossover distortion, can be practically eliminated. Negative feedback also compensates for changing temperatures, and degrading or nonlinear components in the gain stage, but any change or nonlinearity in the components in the feedback loop will affect the output. Indeed, the ability of the feedback loop to define the output is used to make active filter circuits.
Another advantage of negative feedback is that it extends the bandwidth of the amplifier. The concept of feedback is used in operational amplifiers to precisely define gain, bandwidth, and other parameters entirely based on the components in the feedback loop.
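The classic relation behind these effects is the closed-loop gain A_cl = A / (1 + A·B), where A is the open-loop gain and B the feedback fraction. A small numerical sketch (illustrative values) shows how, once A is large, the closed-loop gain is set almost entirely by the feedback network:

```python
# Closed-loop gain of a negative-feedback amplifier: A_cl = A / (1 + A*B).
# With large open-loop gain A, A_cl approaches 1/B, defined by the feedback loop.

def closed_loop_gain(open_loop_gain: float, feedback_fraction: float) -> float:
    return open_loop_gain / (1.0 + open_loop_gain * feedback_fraction)

# Feedback network returning 1/10 of the output (B = 0.1, so 1/B = 10):
for a in (1e3, 1e5, 1e7):
    print(round(closed_loop_gain(a, 0.1), 4))
# The result converges on 10 regardless of A's exact value, which is why
# variations in the amplifier itself barely affect the closed-loop behaviour.
```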
Negative feedback can be applied at each stage of an amplifier to stabilize the operating point of active devices against minor changes in power-supply voltage or device characteristics.
Some feedback, positive or negative, is unavoidable and often undesirable—introduced, for example, by parasitic elements, such as inherent capacitance between input and output of devices such as transistors, and capacitive coupling of external wiring. Excessive frequency-dependent positive feedback can produce parasitic oscillation and turn an amplifier into an oscillator.
Additional Information
An amplifier, in electronics, is a device that responds to a small input signal (voltage, current, or power) and delivers a larger output signal that contains the essential waveform features of the input signal. Amplifiers of various types are widely used in such electronic equipment as radio and television receivers, high-fidelity audio equipment, and computers. Amplifying action can be provided by electromechanical devices (e.g., transformers and generators) and vacuum tubes, but most electronic systems now employ solid-state microcircuits as amplifiers. Such an integrated circuit consists of many thousands of transistors and related devices on a single tiny silicon chip.
A single amplifier is usually insufficient to raise the output to the desired level. In such cases the output of the first amplifier is fed into a second, whose output is fed to a third, and so on, until the output level is satisfactory. The result is cascade, or multistage amplification. Long-distance telephone, radio, television, electronic control and measuring instruments, radar, and countless other devices all depend on this basic process of amplification. The overall amplification of a multistage amplifier is the product of the gains of the individual stages.
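The product rule for multistage gain can be sketched as follows (the stage gains are hypothetical):

```python
import math

# Overall gain of a cascade is the product of its stage gains.
# In decibels the per-stage gains simply add, which is why dB is
# convenient for analysing amplifier chains.

def cascade_gain(stage_gains):
    total = 1.0
    for g in stage_gains:
        total *= g
    return total

stages = [10.0, 10.0, 5.0]  # three hypothetical voltage-gain stages
print(cascade_gain(stages))                              # 500.0
print(round(20 * math.log10(cascade_gain(stages)), 2))   # 53.98 dB
# Same result from summing per-stage dB values: 20 + 20 + ~13.98
```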
There are various schemes for the coupling of cascading electronic amplifiers, depending upon the nature of the signal involved in the amplification process. Solid-state microcircuits have generally proved more advantageous than vacuum-tube circuits for the direct coupling of successive amplifier stages. Transformers can be used for coupling, but they are bulky and expensive.
An electronic amplifier can be designed to produce a magnified output signal identical in every respect to the input signal. This is linear operation. If the output is altered in shape after passing through the amplifier, amplitude distortion exists. If the amplifier does not amplify equally at all frequencies, the result is called frequency distortion, or discrimination (as in emphasizing bass or treble sounds in music recordings).
When the power required from the output of the amplifier is so large as to preclude the use of electronic devices, dynamoelectric and magnetic amplifiers find wide application.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
2313) Hearing Range
Gist
The "normal" hearing frequency range is between 20 Hz and 20,000 Hz. This range of hearing is influenced by age, occupation and gender. As we age, sensitivity to high frequencies declines, and the upper limit of hearing typically falls to around 12 kHz.
The human range is commonly given as 20 to 20,000 Hz, although there is considerable variation between individuals, especially at high frequencies, and a gradual loss of sensitivity to higher frequencies with age is considered normal. Sensitivity also varies with frequency, as shown by equal-loudness contours.
The human ear is most sensitive to and most easily detects frequencies of 1,000 to 4,000 hertz, but at least for normal young ears the entire audible range of sounds extends from about 20 to 20,000 hertz. Sound waves of still higher frequency are referred to as ultrasonic, although they can be heard by other mammals.
Summary
Hearing is the process by which the ear transforms sound vibrations in the external environment into nerve impulses that are conveyed to the brain, where they are interpreted as sounds. Sounds are produced when vibrating objects, such as the plucked string of a guitar, produce pressure pulses of vibrating air molecules, better known as sound waves. The ear can distinguish different subjective aspects of a sound, such as its loudness and pitch, by detecting and analyzing different physical characteristics of the waves. Pitch is the perception of the frequency of sound waves—i.e., the number of wavelengths that pass a fixed point in a unit of time. Frequency is usually measured in cycles per second, or hertz. The human ear is most sensitive to and most easily detects frequencies of 1,000 to 4,000 hertz, but at least for normal young ears the entire audible range of sounds extends from about 20 to 20,000 hertz. Sound waves of still higher frequency are referred to as ultrasonic, although they can be heard by other mammals. Loudness is the perception of the intensity of sound—i.e., the pressure exerted by sound waves on the tympanic membrane. The greater their amplitude or strength, the greater the pressure or intensity, and consequently the loudness, of the sound. The intensity of sound is measured and reported in decibels (dB), a unit that expresses the relative magnitude of a sound on a logarithmic scale. Stated in another way, the decibel is a unit for comparing the intensity of any given sound with a standard sound that is just perceptible to the normal human ear at a frequency in the range to which the ear is most sensitive. On the decibel scale, the range of human hearing extends from 0 dB, which represents a level that is all but inaudible, to about 130 dB, the level at which sound becomes painful.
Details
Hearing range describes the frequency range that can be heard by humans or other animals, though it can also refer to the range of levels. The human range is commonly given as 20 to 20,000 Hz, although there is considerable variation between individuals, especially at high frequencies, and a gradual loss of sensitivity to higher frequencies with age is considered normal. Sensitivity also varies with frequency, as shown by equal-loudness contours. Routine investigation for hearing loss usually involves an audiogram which shows threshold levels relative to a normal.
Several animal species can hear frequencies well beyond the human hearing range. Some dolphins and bats, for example, can hear frequencies over 100 kHz. Elephants can hear sounds at 16 Hz–12 kHz, while some whales can hear infrasonic sounds as low as 7 Hz.
Physiology
The hairs in hair cells, stereocilia, range in height from 1 µm, for auditory detection of very high frequencies, to 50 µm or more in some vestibular systems.
Measurement
A basic measure of hearing is afforded by an audiogram, a graph of the absolute threshold of hearing (minimum discernible sound level) at various frequencies throughout an organism's nominal hearing range.
Behavioural hearing tests or physiological tests can be used to find the hearing thresholds of humans and other animals. For humans, the test involves tones being presented at specific frequencies (pitch) and intensities (loudness). When the subject hears the sound, they indicate this by raising a hand or pressing a button. The lowest intensity they can hear is recorded. The test varies for children; their response to the sound can be indicated by a turn of the head or by using a toy. The child learns what to do upon hearing the sound, such as placing a toy man in a boat. A similar technique can be used when testing animals, where food is used as a reward for responding to the sound. The information on different mammals' hearing was obtained primarily by behavioural hearing tests.
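A heavily simplified sketch of the descending-level idea behind such a test follows. This is not a clinical audiometry protocol; the response function is a stand-in for the subject raising a hand or pressing a button:

```python
# Simplified descending-threshold sketch: present a tone, lower the level
# in fixed steps while the subject still responds, and record the lowest
# level that was heard as the threshold for that frequency.

def find_threshold(hears_tone, start_db=60, step_db=10):
    """hears_tone(level_db) -> bool models the subject's response."""
    level = start_db
    last_heard = None
    while level >= 0 and hears_tone(level):
        last_heard = level
        level -= step_db
    return last_heard

# Simulated subject who can hear this test tone down to 20 dB HL:
print(find_threshold(lambda db: db >= 20))  # 20
```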
Physiological tests do not need the patient to respond consciously.
Humans
In humans, sound waves funnel into the ear via the external ear canal and reach the eardrum (tympanic membrane). The compression and rarefaction of these waves set this thin membrane in motion, causing sympathetic vibration through the middle ear bones (the ossicles: malleus, incus, and stapes), the basilar fluid in the cochlea, and the hairs within it, called stereocilia. These hairs line the cochlea from base to apex, and the part stimulated and the intensity of stimulation gives an indication of the nature of the sound. Information gathered from the hair cells is sent via the auditory nerve for processing in the brain.
The commonly stated range of human hearing is 20 to 20,000 Hz. Under ideal laboratory conditions, humans can hear sound as low as 12 Hz and as high as 28 kHz, though the threshold increases sharply at 15 kHz in adults, corresponding to the last auditory channel of the cochlea. The human auditory system is most sensitive to frequencies between 2,000 and 5,000 Hz. Individual hearing range varies according to the general condition of a human's ears and nervous system. The range shrinks during life, usually beginning at around the age of eight, with the upper frequency limit being reduced. Women tend to lose their hearing somewhat less often than men, a difference attributed largely to social and environmental factors; men, for example, spend more time in noisy places, in connection not only with work but also with hobbies and other activities. Women experience a sharper hearing loss after menopause. In women, hearing loss is worse at low and, in part, middle frequencies, while men are more likely to suffer hearing loss at high frequencies.
Audiograms of human hearing are produced using an audiometer, which presents different frequencies to the subject, usually over calibrated headphones, at specified levels. The levels are weighted with frequency relative to a standard graph known as the minimum audibility curve, which is intended to represent "normal" hearing. The threshold of hearing is set at around 0 phon on the equal-loudness contours (i.e. 20 micropascals, approximately the quietest sound a young healthy human can detect), and is standardised in an ANSI standard referenced to 1 kHz. Standards using different reference levels give rise to differences in audiograms. The ASA-1951 standard, for example, used a level of 16.5 dB SPL (sound pressure level) at 1 kHz, whereas the later ANSI-1969/ISO-1963 standard uses 6.5 dB SPL, with a 10 dB correction applied for older people.
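The dB SPL figures above are defined relative to the 20-micropascal reference pressure; a minimal sketch of the conversion:

```python
import math

# Sound pressure level in dB SPL, relative to the 20 micropascal reference
# (approximately the quietest sound a young healthy human can detect near 1 kHz).

P_REF = 20e-6  # reference pressure in pascals

def db_spl(pressure_pa: float) -> float:
    return 20.0 * math.log10(pressure_pa / P_REF)

print(db_spl(20e-6))        # 0.0   -> the threshold of hearing
print(db_spl(2e-3))         # 40.0  -> a pressure 100x the reference
print(round(db_spl(20.0)))  # 120   -> near the level at which sound becomes painful
```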
Other primates
Several primates, especially small ones, can hear frequencies far into the ultrasonic range. Measured with a 60 dB SPL signal, the hearing range for the Senegal bushbaby is 92 Hz–65 kHz, and 67 Hz–58 kHz for the ring-tailed lemur. Of 19 primates tested, the Japanese macaque had the widest range, 28 Hz–34.5 kHz, compared with 31 Hz–17.6 kHz for humans.
Cats
Cats have excellent hearing and can detect an extremely broad range of frequencies. They can hear higher-pitched sounds than humans or most dogs, detecting frequencies from 55 Hz up to 79 kHz. Cats do not use this ability to hear ultrasound for communication but it is probably important in hunting, since many species of rodents make ultrasonic calls. Cat hearing is also extremely sensitive and is among the best of any mammal, being most acute in the range of 500 Hz to 32 kHz. This sensitivity is further enhanced by the cat's large movable outer ears (their pinnae), which both amplify sounds and help a cat sense the direction from which a noise is coming.
Dogs
The hearing ability of a dog is dependent on breed and age, though the range of hearing is usually around 67 Hz to 45 kHz. As with humans, some dog breeds' hearing ranges narrow with age, such as the German shepherd and miniature poodle. When dogs hear a sound, they will move their ears towards it in order to maximize reception. In order to achieve this, the ears of a dog are controlled by at least 18 muscles, which allow the ears to tilt and rotate. The ear's shape also allows the sound to be heard more accurately. Many breeds often have upright and curved ears, which direct and amplify sounds.
As dogs hear higher frequency sounds than humans, they have a different acoustic perception of the world. Sounds that seem loud to humans often emit high-frequency tones that can scare away dogs. Whistles which emit ultrasonic sound, called dog whistles, are used in dog training, as a dog will respond much better to such levels. In the wild, dogs use their hearing capabilities to hunt and locate food. Domestic breeds are often used to guard property due to their increased hearing ability. So-called "Nelson" dog whistles generate sounds at frequencies higher than those audible to humans but well within the range of a dog's hearing.
Bats
Bats have evolved very sensitive hearing to cope with their nocturnal activity. Their hearing range varies by species: the lower limit can be as low as 1 kHz for some species, while for others the upper limit reaches 200 kHz. Bats that can detect 200 kHz cannot hear very well below 10 kHz. In any case, the most sensitive range of bat hearing is narrower: about 15 kHz to 90 kHz.
Bats navigate around objects and locate their prey using echolocation. A bat will produce a very loud, short sound and assess the echo when it bounces back. Bats hunt flying insects; these insects return a faint echo of the bat's call. The type of insect, how big it is, and its distance can be determined by the quality of the echo and the time it takes for the echo to rebound. There are two types of call: constant frequency (CF) and frequency modulated (FM), which descends in pitch. Each type reveals different information; CF is used to detect an object, and FM is used to assess its distance. The pulses of sound produced by the bat last only a few thousandths of a second; silences between the calls give time to listen for the information coming back in the form of an echo. Evidence suggests that bats use the change in pitch of sound produced via the Doppler effect to assess their flight speed in relation to objects around them. The information regarding size, shape and texture is built up to form a picture of their surroundings and the location of their prey. Using these factors a bat can successfully track changes in movement and therefore hunt down its prey.
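The distance estimate from echo timing reduces to simple arithmetic (assuming sound in air at roughly 343 m/s):

```python
# Echo ranging as bats use it: the round-trip delay of a call's echo
# gives the distance to the target.

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C (approximate)

def target_distance_m(echo_delay_s: float) -> float:
    # Sound travels to the target and back, so halve the total path.
    return SPEED_OF_SOUND * echo_delay_s / 2.0

# An echo returning 10 milliseconds after the call leaves the bat:
print(target_distance_m(0.010))  # 1.715 metres
```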
Mice
Mice have large ears in comparison to their bodies. They hear higher frequencies than humans; their frequency range is 1 kHz to 70 kHz. They do not hear the lower frequencies that humans can; they communicate using high-frequency noises, some of which are inaudible to humans. The distress call of a young mouse can be produced at 40 kHz. Mice use their ability to produce sounds outside predators' frequency ranges to alert other mice of danger without exposing themselves, though notably, cats' hearing range encompasses the mouse's entire vocal range. The squeaks that humans can hear are lower in frequency and are used by the mouse to make longer-distance calls, as low-frequency sounds travel farther than high-frequency sounds.
Birds
Hearing is birds' second most important sense and their ears are funnel-shaped to focus sound. The ears are located slightly behind and below the eyes, and they are covered with soft feathers – the auriculars – for protection. The shape of a bird's head can also affect its hearing, such as owls, whose facial discs help direct sound toward their ears.
The hearing range of birds is most sensitive between 1 kHz and 4 kHz, but their full range is roughly similar to human hearing, with higher or lower limits depending on the bird species. No kind of bird has been observed to react to ultrasonic sounds, but certain kinds of birds can hear infrasonic sounds. "Birds are especially sensitive to pitch, tone and rhythm changes and use those variations to recognize other individual birds, even in a noisy flock. Birds also use different sounds, songs and calls in different situations, and recognizing the different noises is essential to determine if a call is warning of a predator, advertising a territorial claim or offering to share food."
"Some birds, most notably oilbirds, also use echolocation, just as bats do. These birds live in caves and use their rapid chirps and clicks to navigate through dark caves where even sensitive vision may not be useful enough."
Pigeons can hear infrasound. With the average pigeon being able to hear sounds as low as 0.5 Hz, they can detect distant storms, earthquakes and even volcanoes. This also helps them to navigate.
Insects
Greater wax moths (Galleria mellonella) have the highest hearing range recorded so far: they can hear frequencies up to 300 kHz. This ability is likely to help them evade bats.
Fish
Fish have a narrow hearing range compared to most mammals. Goldfish and catfish do possess a Weberian apparatus and have a wider hearing range than the tuna.
Marine mammals:
Dolphins
As aquatic environments have very different physical properties than land environments, there are differences in how marine mammals hear compared with land mammals. The differences in auditory systems have led to extensive research on aquatic mammals, specifically on dolphins.
Researchers customarily divide marine mammals into five hearing groups based on their range of best underwater hearing: low-frequency baleen whales like blue whales (7 Hz to 35 kHz); mid-frequency toothed whales like most dolphins and sperm whales (150 Hz to 160 kHz); high-frequency toothed whales like some dolphins and porpoises (275 Hz to 160 kHz); seals (50 Hz to 86 kHz); and fur seals and sea lions (60 Hz to 39 kHz).
The auditory system of a land mammal typically works via the transfer of sound waves through the ear canals. Ear canals in seals, sea lions, and walruses are similar to those of land mammals and may function the same way. In whales and dolphins, it is not entirely clear how sound is propagated to the ear, but some studies strongly suggest that sound is channelled to the ear by tissues in the area of the lower jaw. One group of whales, the Odontocetes (toothed whales), use echolocation to determine the position of objects such as prey. The toothed whales are also unusual in that the ears are separated from the skull and placed well apart, which assists them with localizing sounds, an important element for echolocation.
Studies have found there to be two different types of cochlea in the dolphin population. Type I has been found in the Amazon river dolphin and harbour porpoises. These types of dolphin use extremely high frequency signals for echolocation. Harbour porpoises emit sounds at two bands, one at 2 kHz and one above 110 kHz. The cochlea in these dolphins is specialised to accommodate extreme high frequency sounds and is extremely narrow at the base.
Type II cochleae are found primarily in offshore and open-water species of whales, such as the bottlenose dolphin. The sounds produced by bottlenose dolphins are lower in frequency, typically ranging between 75 Hz and 150 kHz. The higher frequencies in this range are also used for echolocation, while the lower frequencies are commonly associated with social interaction, as those signals travel much farther.
Marine mammals use vocalisations in many different ways. Dolphins communicate via clicks and whistles, and whales use low-frequency moans or pulse signals. Each signal varies in terms of frequency and different signals are used to communicate different aspects. In dolphins, echolocation is used in order to detect and characterize objects and whistles are used in sociable herds as identification and communication devices.
Additional Information
The human auditory field corresponds to a specific band of frequencies and a specific range of intensities, perceived by our ear. Acoustic vibrations outside of this field are not considered as "sounds", even if they can be perceived by other animals.
The human ear perceives frequencies between 20 Hz (lowest pitch) and 20 kHz (highest pitch). All sounds below 20 Hz are classed as infrasound, although some animals (e.g. the mole-rat or the elephant) can hear them. Similarly, all sounds above 20 kHz are classed as ultrasound, but they are audible sounds to a cat or a dog (up to 40 kHz) and to a dolphin or a bat (up to 160 kHz).
The human ear has a dynamic range from 0 dB (threshold) to 120–130 dB. This holds for the middle frequency range (1–2 kHz); at lower or higher frequencies, the dynamic range is narrower.
The human auditory field (green) is limited by the threshold curve (bottom) and a curve giving the upper limit of sound perception (top). At each frequency between 20 Hz and 20 kHz, the threshold of our sensitivity is different. The best threshold (at around 2 kHz) is close to 0 dB. It is also in this middle range of frequencies that the dynamic range of sensation is greatest (120 dB). The conversation area (dark green) covers the range of sounds most commonly used in human voice perception; when hearing loss affects this area, communication is impaired.
2314) Materials Science
Gist
Materials science and engineering seeks to understand the fundamental physical origins of material behavior in order to optimize properties of existing materials through structure modification and processing, design and invent new and better materials, and understand why some materials unexpectedly fail.
Summary
Materials science is the study of the properties of solid materials and how those properties are determined by a material's composition and structure. It grew out of an amalgam of solid-state physics, metallurgy, and chemistry, since the rich variety of materials properties cannot be understood within the context of any single classical discipline. With a basic understanding of the origins of properties, materials can be selected or designed for an enormous variety of applications, ranging from structural steels to computer microchips. Materials science is therefore important to engineering activities such as electronics, aerospace, telecommunications, information processing, nuclear power, and energy conversion.
This article approaches the subject of materials science through five major fields of application: energy, ground transportation, aerospace, computers and communications, and medicine. The discussions focus on the fundamental requirements of each field of application and on the abilities of various materials to meet those requirements.
The many materials studied and applied in materials science are usually divided into four categories: metals, polymers, semiconductors, and ceramics. The sources, processing, and fabrication of these materials are explained at length in several articles: metallurgy; elastomer (natural and synthetic rubber); plastic; man-made fibre; and industrial glass and ceramics. Atomic and molecular structures are discussed in chemical elements and matter.
Details
Materials science is an interdisciplinary field of researching and discovering materials. Materials engineering is an engineering field of finding uses for materials in other fields and industries.
The intellectual origins of materials science stem from the Age of Enlightenment, when researchers began to use analytical thinking from chemistry, physics, and engineering to understand ancient, phenomenological observations in metallurgy and mineralogy. Materials science still incorporates elements of physics, chemistry, and engineering. As such, the field was long considered by academic institutions as a sub-field of these related fields. Beginning in the 1940s, materials science began to be more widely recognized as a specific and distinct field of science and engineering, and major technical universities around the world created dedicated schools for its study.
Materials scientists emphasize understanding how the history of a material (its processing) influences its structure, and thus the material's properties and performance. The understanding of processing–structure–properties relationships is called the materials paradigm. This paradigm is used to advance understanding in a variety of research areas, including nanotechnology, biomaterials, and metallurgy.
Materials science is also an important part of forensic engineering and failure analysis – investigating materials, products, structures or components, which fail or do not function as intended, causing personal injury or damage to property. Such investigations are key to understanding, for example, the causes of various aviation accidents and incidents.
History
The material of choice of a given era is often a defining point: phrases such as Stone Age, Bronze Age, Iron Age, and Steel Age are historic, if arbitrary, examples. Originally deriving from the manufacture of ceramics and its putative derivative metallurgy, materials science is one of the oldest forms of engineering and applied science. Modern materials science evolved directly from metallurgy, which itself evolved from the use of fire. A major breakthrough in the understanding of materials occurred in the late 19th century, when the American scientist Josiah Willard Gibbs demonstrated that the thermodynamic properties related to atomic structure in various phases are related to the physical properties of a material. Important elements of modern materials science were products of the Space Race; the understanding and engineering of the metallic alloys, and of the silica and carbon materials used in building space vehicles, enabled the exploration of space. Materials science has driven, and been driven by, the development of revolutionary technologies such as rubbers, plastics, semiconductors, and biomaterials.
Before the 1960s (and in some cases decades after), many eventual materials science departments were metallurgy or ceramics engineering departments, reflecting the 19th- and early 20th-century emphasis on metals and ceramics. The growth of materials science in the United States was catalyzed in part by the Advanced Research Projects Agency, which funded a series of university-hosted laboratories in the early 1960s "to expand the national program of basic research and training in the materials sciences." In comparison with mechanical engineering, the nascent materials science field focused on addressing materials from the macro level and on the principle that materials are designed on the basis of knowledge of behavior at the microscopic level. With expanded knowledge of the link between atomic and molecular processes and the overall properties of materials, the design of materials came to be based on specific desired properties. The materials science field has since broadened to include every class of materials, including ceramics, polymers, semiconductors, magnetic materials, biomaterials, and nanomaterials, generally classified into three distinct groups: ceramics, metals, and polymers. The most prominent change in materials science during recent decades has been the active use of computer simulations to find new materials, predict properties, and understand phenomena.
Fundamentals
A material is defined as a substance (most often a solid, but other condensed phases can be included) that is intended to be used for certain applications. There are a myriad of materials around us. New and advanced materials that are being developed include nanomaterials, biomaterials, and energy materials, to name a few.
The basis of materials science is studying the interplay between the structure of a material, the processing methods used to make it, and the resulting material properties. The complex combination of these produces the performance of a material in a specific application. Many features across many length scales impact material performance, from a material's constituent chemical elements to its microstructure and the macroscopic features arising from processing. Together with the laws of thermodynamics and kinetics, materials scientists aim to understand and improve materials.
Structure
Structure is one of the most important components of the field of materials science. The very definition of the field holds that it is concerned with the investigation of "the relationships that exist between the structures and properties of materials". Materials science examines the structure of materials from the atomic scale, all the way up to the macro scale. Characterization is the way materials scientists examine the structure of a material. This involves methods such as diffraction with X-rays, electrons or neutrons, and various forms of spectroscopy and chemical analysis such as Raman spectroscopy, energy-dispersive spectroscopy, chromatography, thermal analysis, electron microscope analysis, etc.
Structure is studied in the following levels.
Atomic structure
Atomic structure deals with the atoms of the materials, and how they are arranged to give rise to molecules, crystals, etc. Much of the electrical, magnetic and chemical properties of materials arise from this level of structure. The length scales involved are in angstroms (Å). The chemical bonding and atomic arrangement (crystallography) are fundamental to studying the properties and behavior of any material.
Bonding
To obtain a full understanding of the material structure and how it relates to its properties, the materials scientist must study how the different atoms, ions and molecules are arranged and bonded to each other. This involves the study and use of quantum chemistry or quantum physics. Solid-state physics, solid-state chemistry and physical chemistry are also involved in the study of bonding and structure.
Crystallography
Crystallography is the science that examines the arrangement of atoms in crystalline solids, and it is a useful tool for materials scientists. One of the fundamental concepts regarding the crystal structure of a material is the unit cell, the smallest unit of a crystal lattice (space lattice) that repeats to make up the macroscopic crystal structure. Most common structural materials have parallelepiped or hexagonal lattice types. In single crystals, the effects of the crystalline arrangement of atoms are often easy to see macroscopically, because the natural shapes of crystals reflect the atomic structure. Further, physical properties are often controlled by crystalline defects, so the understanding of crystal structures is an important prerequisite for understanding crystallographic defects. Examples of crystal defects include linear defects such as edge and screw dislocations, point defects such as vacancies and self-interstitials, and planar and three-dimensional defect types. Most materials do not occur as a single crystal, but in polycrystalline form, as an aggregate of small crystals or grains with different orientations. Because of this, the powder diffraction method, which uses diffraction patterns of polycrystalline samples with a large number of crystals, plays an important role in structural determination. Most materials have a crystalline structure, but some important materials do not exhibit a regular crystal structure: polymers display varying degrees of crystallinity, and many are completely non-crystalline, while glass, some ceramics, and many natural materials are amorphous, not possessing any long-range order in their atomic arrangements.
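The unit-cell idea can be made concrete with a small calculation. The sketch below (an illustrative example, not part of any standard library) computes the atomic packing factor, the fraction of unit-cell volume occupied by hard-sphere atoms, for the three cubic lattice types using the standard touching-sphere geometry of each:

```python
import math

def atomic_packing_factor(atoms_per_cell, radius_over_edge):
    """Fraction of cubic unit-cell volume occupied by hard-sphere atoms.

    radius_over_edge is the atomic radius r expressed as a fraction of the
    cube edge a (i.e., r/a); the cell volume is then 1 in units of a^3.
    """
    return atoms_per_cell * (4.0 / 3.0) * math.pi * radius_over_edge ** 3

# Touching-sphere geometry for the three cubic lattices:
apf_sc  = atomic_packing_factor(1, 0.5)                # spheres touch along the edge
apf_bcc = atomic_packing_factor(2, math.sqrt(3) / 4)   # touch along the body diagonal
apf_fcc = atomic_packing_factor(4, math.sqrt(2) / 4)   # touch along the face diagonal
# apf_sc ≈ 0.524, apf_bcc ≈ 0.680, apf_fcc ≈ 0.740
```

The face-centred cubic lattice turns out to be the densest of the three, which is one reason many ductile metals adopt it.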
The study of polymers combines elements of chemical and statistical thermodynamics to give thermodynamic and mechanical descriptions of physical properties.
Nanostructure
Materials whose atoms and molecules form constituents at the nanoscale (i.e., they form nanostructures) are called nanomaterials. Nanomaterials are the subject of intense research in the materials science community due to the unique properties that they exhibit.
Nanostructure deals with objects and structures that are in the 1 – 100 nm range. In many materials, atoms or molecules agglomerate to form objects at the nanoscale. This causes many interesting electrical, magnetic, optical, and mechanical properties.
In describing nanostructures, it is necessary to differentiate between the number of dimensions on the nanoscale.
Nanotextured surfaces have one dimension on the nanoscale, i.e., only the thickness of the surface of an object is between 0.1 and 100 nm.
Nanotubes have two dimensions on the nanoscale, i.e., the diameter of the tube is between 0.1 and 100 nm; its length could be much greater.
Finally, spherical nanoparticles have three dimensions on the nanoscale, i.e., the particle is between 0.1 and 100 nm in each spatial dimension. The terms nanoparticles and ultrafine particles (UFP) often are used synonymously although UFP can reach into the micrometre range. The term 'nanostructure' is often used, when referring to magnetic technology. Nanoscale structure in biology is often called ultrastructure.
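The dimensional classification above reduces to counting how many of an object's extents fall in the nanoscale range. The helper below is a hypothetical sketch (the function name and example extents are illustrative), using the 0.1–100 nm range quoted in the text:

```python
def nanoscale_dimension_count(extents_nm):
    """Count how many of an object's spatial extents lie in the nanoscale,
    taken here as the 0.1-100 nm range quoted in the text."""
    return sum(1 for d in extents_nm if 0.1 <= d <= 100)

# Illustrative extents in nanometres:
surface  = nanoscale_dimension_count([5])            # nanotextured surface: thickness only
nanotube = nanoscale_dimension_count([2, 2])         # diameter constrains two dimensions
particle = nanoscale_dimension_count([30, 30, 30])   # nanoparticle: all three dimensions
# surface == 1, nanotube == 2, particle == 3
```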
Microstructure
Microstructure is defined as the structure of a prepared surface or thin foil of material as revealed by a microscope above 25× magnification. It deals with objects from 100 nm to a few cm. The microstructure of a material (which can be broadly classified into metallic, polymeric, ceramic and composite) can strongly influence physical properties such as strength, toughness, ductility, hardness, corrosion resistance, high/low temperature behavior, wear resistance, and so on. Most of the traditional materials (such as metals and ceramics) are microstructured.
The manufacture of a perfect crystal of a material is physically impossible. For example, any crystalline material will contain defects such as precipitates, grain boundaries (Hall–Petch relationship), vacancies, interstitial atoms or substitutional atoms. The microstructure of materials reveals these larger defects and advances in simulation have allowed an increased understanding of how defects can be used to enhance material properties.
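The Hall–Petch relationship mentioned above links yield strength to grain size: sigma_y = sigma_0 + k / sqrt(d), so refining the grains strengthens the material. A minimal sketch, with illustrative constants for a generic metal (not measured values for any particular alloy):

```python
import math

def hall_petch_yield_strength(sigma0_mpa, k_mpa_sqrt_m, grain_diameter_m):
    """Hall-Petch relation: sigma_y = sigma_0 + k / sqrt(d).

    sigma_0 is the friction (starting) stress in MPa, k the strengthening
    coefficient in MPa*sqrt(m), and d the average grain diameter in metres.
    """
    return sigma0_mpa + k_mpa_sqrt_m / math.sqrt(grain_diameter_m)

# Illustrative constants; refining the grains raises the predicted strength:
coarse = hall_petch_yield_strength(70.0, 0.74, 100e-6)  # 100 um grains -> 144 MPa
fine   = hall_petch_yield_strength(70.0, 0.74, 1e-6)    # 1 um grains   -> 810 MPa
```

This is one example of the simulation-and-prediction approach described earlier: a defect-level relationship used to engineer a macroscopic property.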
Macrostructure
Macrostructure is the appearance of a material at the scale of millimeters to meters; it is the structure of the material as seen with the naked eye.
Additional Information
Material science is a multidisciplinary field that investigates the properties, structure, and applications of materials. It encompasses understanding how materials behave and developing new substances for diverse purposes, ranging from electronics to medicine. Scientists in this field explore the composition and structure of materials at the atomic and molecular levels to enhance their properties and performance.
Types of materials used:
*Metals
*Polymers
*Ceramics
*Composites
*Semiconductors
Purpose of studying material science:
Studying material science serves a fundamental purpose in advancing our understanding of the properties, structure, and behavior of materials. This multidisciplinary field is crucial for driving innovation by enabling the development of new materials tailored to specific applications, from high-performance alloys for aerospace engineering to biocompatible materials for medical implants. Material science plays a pivotal role in optimizing the performance of existing materials, leading to more efficient and durable products. It is essential for problem-solving, helping identify and address issues related to materials in various industries. The economic impact is significant, as material science enhances manufacturing processes, reduces costs, and fosters sustainability. The study of material science is indispensable for technological progress, economic growth, and the development of materials that shape our modern world.
Scope of material science:
The scope of material science is vast and dynamic, playing a pivotal role in shaping the advancements of the modern world. With its interdisciplinary approach, material science influences a myriad of industries, from technology and energy to healthcare and manufacturing. It drives innovation by developing materials with tailored properties, fostering progress in electronics, telecommunications, and aerospace. The energy sector benefits from material science's exploration of materials for efficient energy storage and conversion, contributing to the development of sustainable energy solutions. Manufacturing and engineering rely on material science to optimize processes, improve product performance, and introduce novel materials. The nanotechnology frontier opens up through the manipulation of materials at the nanoscale, promising revolutionary applications. Furthermore, material science contributes to environmental sustainability by developing eco-friendly materials and practices.
Conclusion:
Material science stands as a cornerstone of scientific and technological progress, playing a pivotal role in diverse industries that shape our modern existence. The field's interdisciplinary nature allows researchers to delve into the intricacies of materials at the atomic and molecular levels, leading to the development of novel substances with tailored properties. Its impact extends to manufacturing, engineering, and the ever-expanding realm of nanotechnology. The ongoing exploration of new frontiers and the application of material science principles contribute significantly to the advancement of human knowledge and the improvement of our quality of life.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Online
2315) Hearing Loss
Gist
What are the stages of hearing loss? / What does “hearing loss” or “hearing impairment” mean?
* Mild hearing loss: Hearing loss of 20 to 40 decibels.
* Moderate hearing loss: Hearing loss of 41 to 60 decibels.
* Severe hearing loss: Hearing loss of 61 to 80 decibels.
* Profound hearing loss or deafness: Hearing loss of more than 81 decibels.
How to get over hearing loss?
Treatment depends on the cause and severity of the hearing loss, as well as the patient's lifestyle and listening needs. We may recommend medication or surgery to address any underlying issues. In other cases, patients may benefit from hearing aids, assistive listening devices or cochlear implants.
Loud noise is one of the most common causes of hearing loss. Noise from lawn mowers, snow blowers, or loud music can damage the inner ear and result in permanent hearing loss. Loud noise also contributes to tinnitus.
Summary
Hearing loss is a partial or total inability to hear. Hearing loss may be present at birth or acquired at any time afterwards. Hearing loss may occur in one or both ears. In children, hearing problems can affect the ability to acquire spoken language, and in adults it can create difficulties with social interaction and at work. Hearing loss can be temporary or permanent. Hearing loss related to age usually affects both ears and is due to cochlear hair cell loss. In some people, particularly older people, hearing loss can result in loneliness.
Hearing loss may be caused by a number of factors, including: genetics, ageing, exposure to noise, some infections, birth complications, trauma to the ear, and certain medications or toxins. A common condition that results in hearing loss is chronic ear infections. Certain infections during pregnancy, such as cytomegalovirus, syphilis and rubella, may also cause hearing loss in the child. Hearing loss is diagnosed when hearing testing finds that a person is unable to hear 25 decibels in at least one ear. Testing for poor hearing is recommended for all newborns. Hearing loss can be categorized as mild (25 to 40 dB), moderate (41 to 55 dB), moderate-severe (56 to 70 dB), severe (71 to 90 dB), or profound (greater than 90 dB). There are three main types of hearing loss: conductive hearing loss, sensorineural hearing loss, and mixed hearing loss.
About half of hearing loss globally is preventable through public health measures. Such practices include immunization, proper care around pregnancy, avoiding loud noise, and avoiding certain medications. The World Health Organization recommends that young people limit exposure to loud sounds and the use of personal audio players to an hour a day in an effort to limit exposure to noise. Early identification and support are particularly important in children. For many, hearing aids, sign language, cochlear implants and subtitles are useful. Lip reading is another useful skill some develop. Access to hearing aids, however, is limited in many areas of the world.
As of 2013 hearing loss affects about 1.1 billion people to some degree. It causes disability in about 466 million people (5% of the global population), and moderate to severe disability in 124 million people. Of those with moderate to severe disability 108 million live in low and middle income countries. Of those with hearing loss, it began during childhood for 65 million. Those who use sign language and are members of Deaf culture may see themselves as having a difference rather than a disability. Many members of Deaf culture oppose attempts to cure deafness and some within this community view cochlear implants with concern as they have the potential to eliminate their culture.
Definition
* Hearing loss is defined as diminished acuity to sounds which would otherwise be heard normally. The terms hearing impaired or hard of hearing are usually reserved for people who have a relative inability to hear sound in the speech frequencies. Noise-induced hearing loss occurs when intense sound waves enter the ears and damage their sensitive tissues. The severity of hearing loss is categorized according to the increase in intensity of sound above the usual level required for the listener to detect it.
* Deafness is defined as a degree of loss such that a person is unable to understand speech, even in the presence of amplification. In profound deafness, even the highest intensity sounds produced by an audiometer (an instrument used to measure hearing by producing pure tone sounds through a range of frequencies) may not be detected. In total deafness, no sounds at all, regardless of amplification or method of production, can be heard.
* Speech perception is another aspect of hearing which involves the perceived clarity of a word rather than the intensity of sound made by the word. In humans, this is usually measured with speech discrimination tests, which measure not only the ability to detect sound, but also the ability to understand speech. There are very rare types of hearing loss that affect speech discrimination alone. One example is auditory neuropathy, a variety of hearing loss in which the outer hair cells of the cochlea are intact and functioning, but sound information is not faithfully transmitted by the auditory nerve to the brain.
Use of the terms "hearing impaired", "deaf-mute", or "deaf and dumb" to describe deaf and hard of hearing people is discouraged by many in the deaf community as well as advocacy organizations, as they are offensive to many deaf and hard of hearing people.
Hearing standards
Human hearing extends in frequency from 20 to 20,000 Hz, and in intensity from 0 dB to 120 dB HL or more. 0 dB does not represent absence of sound, but rather the softest sound an average unimpaired human ear can hear; some people can hear down to −5 or even −10 dB. Sound is generally uncomfortably loud above 90 dB and 115 dB represents the threshold of pain. The ear does not hear all frequencies equally well: hearing sensitivity peaks around 3,000 Hz. There are many qualities of human hearing besides frequency range and intensity that cannot easily be measured quantitatively. However, for many practical purposes, normal hearing is defined by a frequency versus intensity graph, or audiogram, charting sensitivity thresholds of hearing at defined frequencies. Because of the cumulative impact of age and exposure to noise and other acoustic insults, 'typical' hearing may not be normal.
Signs and symptoms
* difficulty using the telephone
* loss of sound localization
* difficulty understanding speech, especially of children and women whose voices are of a higher frequency.
* difficulty understanding speech in the presence of background noise
* sounds or speech sounding dull, muffled or attenuated
* need for increased volume on television, radio, music and other audio sources
Hearing loss is sensory, but may have accompanying symptoms:
* pain or pressure in the ears
* a blocked feeling
There may also be accompanying secondary symptoms:
* hyperacusis, heightened sensitivity with accompanying auditory pain to certain intensities and frequencies of sound, sometimes defined as "auditory recruitment"
* tinnitus, ringing, buzzing, hissing or other sounds in the ear when no external sound is present
* vertigo and disequilibrium
* tympanophonia, also known as autophonia, abnormal hearing of one's own voice and respiratory sounds, usually as a result of a patulous (a constantly open) eustachian tube or dehiscent superior semicircular canals
* disturbances of facial movement (indicating a possible tumour or stroke), including in persons with Bell's palsy
Details
Audiology is the study, assessment, prevention, and treatment of disorders of hearing and balance. Clinical audiology is concerned primarily with the assessment of the function of the human ear, which affects hearing sensitivity and balance. The characterization of specific losses in hearing or balance facilitates the diagnosis of impairments and enables the development of effective treatment or management plans.
Hearing sensitivity
The human ear has an extremely wide dynamic range. The lower limit of hearing, where sound is just detectable, is referred to as the threshold of hearing (also known as absolute threshold, or absolute sensitivity). The upper limit of hearing, where sound begins to become uncomfortably loud, is referred to as the threshold of discomfort (or uncomfortable loudness level). In quantitative terms, the difference between those two extremes corresponds to more than six orders of magnitude in sound pressure. The human ear can hear single frequencies of vibration from around 20 to 20,000 hertz (Hz), although reductions occur during the natural aging process, particularly in the upper limit of the range.
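The orders-of-magnitude figure follows directly from the decibel definitions: a level difference of L dB corresponds to a pressure ratio of 10^(L/20) and an intensity (power) ratio of 10^(L/10). A minimal sketch:

```python
def db_to_pressure_ratio(level_db):
    """Sound-pressure ratio corresponding to a level difference in decibels."""
    return 10.0 ** (level_db / 20.0)

def db_to_intensity_ratio(level_db):
    """Sound-intensity (power) ratio corresponding to a level difference in decibels."""
    return 10.0 ** (level_db / 10.0)

# A ~120 dB span between threshold and discomfort corresponds to a
# million-fold range in sound pressure and a 10^12 range in intensity:
pressure_span  = db_to_pressure_ratio(120.0)    # 1e6
intensity_span = db_to_intensity_ratio(120.0)   # 1e12
```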
The general relationship between the dynamic range of hearing and frequency has been well understood for many years. Studies measuring the minimum audible level of hearing have been made with stimuli presented to each ear separately or both ears together (usually via an earphone and loudspeaker, respectively). The results from the two methods are similar, but not identical, and show human hearing to be generally most sensitive between 500 and 10,000 Hz. The typical values obtained in a group of young healthy individuals, at individual frequencies, are used as the baseline reference level to which listeners with a suspected hearing impairment can be compared.
Pure-tone audiometry
The most widely used assessment procedure in clinical audiology is known as pure-tone audiometry. The listener’s hearing threshold level (hearing level), in decibels (dB), is plotted on a chart known as a pure-tone audiogram, with hearing level plotted on the ordinate (vertical axis) as a function of signal frequency on the abscissa (horizontal axis). The conventional clinical audiogram plots hearing level with low dB values (normal hearing) at the top of the chart and raised levels (reduced hearing) closer to the abscissa. Therefore, raised hearing levels, representing decreased hearing, are plotted lower on the pure-tone audiogram chart.
The reference baseline is called audiometric zero and represents the 0 dB hearing level line on the audiogram chart. If, for example, a listener’s hearing threshold level for a particular signal is 60 dB, then the listener’s hearing threshold is 60 dB higher than the median value obtained from a group of normal healthy young persons. However, not every healthy young adult has a hearing threshold level that falls on the 0 dB line; the range of normal hearing is generally taken to be ±20 dB of the zero line.
For clinical purposes, the hearing threshold is usually measured for single frequency tones at discrete frequencies from 500 Hz to 8,000 Hz, in octave or half-octave intervals, and reported in step sizes of 5 dB. The signals are selected and presented to the listener by using a classical measurement method known as the method of limits. A series of signals are presented in an ascending or descending run (from loud to quiet, or vice versa), and the task for the listener is to respond every time he or she detects a signal. As with any psychophysical measurement, there is a level above which the pure tone is always heard and a lower level where the tone is never heard. The threshold of hearing is taken as the lowest level at which the signal is detected at least 50 percent of the time. Because a host of extrinsic and intrinsic factors can influence the measurements (e.g., ambient noise level and duration of the test signal, respectively), clearly defined procedures have been developed for clinical testing.
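The 50-percent rule above can be sketched as a small tallying function. The run data and function name here are hypothetical, and real clinical procedures are considerably more elaborate, but the sketch captures the defining criterion:

```python
def hearing_threshold(responses):
    """Lowest level (dB HL) detected on at least 50% of presentations.

    responses maps a presentation level to the listener's yes/no answers
    tallied across ascending and descending runs.
    """
    detected = [level for level, answers in responses.items()
                if answers and sum(answers) / len(answers) >= 0.5]
    return min(detected) if detected else None

# Hypothetical tallies at 5 dB steps for one test frequency:
runs = {
    10: [False, False, False, False],
    15: [False, True, False, True],    # exactly 50% -> counts as detected
    20: [True, True, True, False],
    25: [True, True, True, True],
}
threshold = hearing_threshold(runs)    # 15 dB HL
```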
Although the measurement of hearing thresholds from each ear separately appears a relatively straightforward procedure, sound may cross from one side of the head to the other. For example, if a listener has poor hearing in the ear under test, the signal may be sufficiently intense that it may cross the skull and be detected by the opposite ear that has better hearing. In those circumstances, a masking noise is presented to the nontest ear to prevent cross-hearing of the test signal. Standard procedures have been developed for when and how to use masking. If masking is insufficient, the test signal may continue to be detected by the nontest ear; if too much masking is used, cross-masking may occur, artificially raising the hearing threshold in the test ear.
The degree of hearing loss usually varies with frequency. However, impairment usually is summarized as mild, moderate, severe, or profound, on the basis of an average of the hearing threshold levels. The ability to hear speech is related to the degree of impairment. Slight hearing impairment (26–40 dB hearing level) can cause some difficulty in hearing speech, especially in noisy situations. Moderate hearing impairment (41–60 dB hearing level) can cause difficulty in hearing speech without a hearing aid. Conversational speech is not audible in cases of severe (61–80 dB hearing level) or profound (81 dB hearing level or greater) hearing impairment.
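The grading described in this paragraph amounts to averaging the thresholds and mapping the result onto decibel bands. A minimal sketch using the bands quoted here (note that grading schemes vary; the Gist above uses slightly different cut-offs):

```python
def pure_tone_average(thresholds_db):
    """Average of the hearing threshold levels (dB HL) across frequencies."""
    return sum(thresholds_db) / len(thresholds_db)

def impairment_grade(average_db):
    """Map an average hearing level onto the bands quoted in the text."""
    if average_db <= 25:
        return "none"
    if average_db <= 40:
        return "slight"
    if average_db <= 60:
        return "moderate"
    if average_db <= 80:
        return "severe"
    return "profound"

# Hypothetical thresholds at 500, 1000, 2000, and 4000 Hz:
pta = pure_tone_average([35, 45, 55, 65])   # 50.0 dB HL
grade = impairment_grade(pta)               # "moderate"
```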
Some of those listeners, if they decide that they would like to hear speech, may need a special type of hearing aid known as a cochlear implant. The proportion of speech that is audible and usable for a listener, with or without a hearing aid, can be quantified by using a procedure known as the Speech Intelligibility Index. At relatively high presentation levels, the test signal sometimes can be perceived as a vibration, especially for low-frequency stimuli. In some instances, it may be difficult to determine if a threshold measurement is an auditory or a vibrotactile perception.
Air-conduction and bone-conduction testing
Hearing loss generally is categorized into two types: conductive and sensorineural. Conductive hearing loss occurs when a condition of the outer or middle ear prevents sound from being conducted to the cochlea in the inner ear. Sensorineural hearing loss involves a problem with either the sensory transducer cells in the cochlea or, less commonly, the neural pathway to the brain. In some instances, conductive and sensorineural hearing loss occur together, resulting in so-called mixed hearing loss. Whereas conductive hearing loss often can be corrected via surgery and is relatively common in childhood, sensorineural hearing loss usually is permanent. Therefore, it is important for the audiologist to distinguish between the two conditions.
One method of differentiating between conductive hearing loss and sensorineural hearing loss is to compare air-conduction and bone-conduction hearing threshold levels. This involves measuring hearing sensitivity by using two different types of transducer. In air-conduction testing, a pure tone is presented via an earphone (or a loudspeaker). The signal travels through the air in the outer ear to the middle ear and then to the cochlea in the inner ear. In bone-conduction testing, instead of an earphone, an electromechanical vibrator is placed on the skull. This allows for stimulation of the cochlea via mechanical vibration of the skull with almost no stimulation of the outer and middle ear.
Normal hearing individuals typically have a hearing threshold level close to 0 dB for both air and bone conduction. Individuals with a hearing disorder in any part of the auditory pathway have poor air-conduction thresholds. A poor air-conduction threshold accompanied by a normal bone-conduction threshold is the primary indication of conductive hearing loss, since abnormalities of the conduction mechanism have relatively little effect on bone-conduction measurements. In sensorineural hearing loss, the thresholds for both air conduction and bone conduction are affected, such that the air-bone gap (air conduction minus bone conduction) is close to zero. The presence of an air-bone gap signifies conductive hearing loss.
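The air-bone comparison can be sketched as a simple decision rule. The cut-off values below are common illustrative choices, not a clinical standard, and a real diagnosis uses thresholds at several frequencies:

```python
def classify_loss(air_db, bone_db, gap_criterion_db=10, normal_limit_db=20):
    """Rough type-of-loss call from one frequency's two thresholds (dB HL).

    An air-bone gap at or above gap_criterion_db suggests a conductive
    component; a raised bone-conduction threshold suggests a sensorineural
    component. Cut-offs are illustrative, not a clinical standard.
    """
    conductive = (air_db - bone_db) >= gap_criterion_db
    sensorineural = bone_db > normal_limit_db
    if conductive and sensorineural:
        return "mixed"
    if conductive:
        return "conductive"
    if sensorineural:
        return "sensorineural"
    return "normal"

case_a = classify_loss(air_db=50, bone_db=10)   # "conductive": large air-bone gap
case_b = classify_loss(air_db=50, bone_db=48)   # "sensorineural": gap near zero
case_c = classify_loss(air_db=70, bone_db=40)   # "mixed": both components present
```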
The dynamic range between the threshold of hearing and loudness discomfort level is around 100 dB in normal hearing listeners. Listeners with sensory hearing loss have raised hearing thresholds, but their loudness discomfort levels are essentially similar to those of normal hearing listeners. Listeners with a sensory hearing impairment therefore have a reduced dynamic range and experience loudness recruitment, an abnormal rate of loudness growth characterized by a disproportionate increase in loudness for a small increase in sound intensity. This has implications for the design of hearing instruments: nonlinear amplification is required, in which soft sounds receive greater amplification than loud sounds. Although a nonlinear hearing instrument can compensate by increasing amplification for soft sounds, it cannot compensate for the loss of suprathreshold abilities such as impaired frequency resolution. As a result, background noise remains a problem for many listeners.
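The nonlinear amplification described here is commonly implemented as wide-dynamic-range compression: below a kneepoint the gain is constant, and above it each decibel of input growth adds less than a decibel to the output. The parameter values in this sketch are illustrative, not a fitting prescription:

```python
def wdrc_gain(input_db_spl, linear_gain_db=30.0, kneepoint_db_spl=45.0,
              compression_ratio=3.0):
    """Level-dependent gain of a simple wide-dynamic-range compressor.

    Below the kneepoint the gain is constant; above it, each 1 dB rise in
    input adds only 1/compression_ratio dB to the output, so soft sounds
    receive more gain than loud ones. Parameters are illustrative only.
    """
    if input_db_spl <= kneepoint_db_spl:
        return linear_gain_db
    excess = input_db_spl - kneepoint_db_spl
    return linear_gain_db - excess * (1.0 - 1.0 / compression_ratio)

soft_gain = wdrc_gain(40.0)   # 30.0 dB of gain for a soft sound
loud_gain = wdrc_gain(75.0)   # 10.0 dB of gain for a loud sound
```

Squeezing a wide input range into the listener's reduced dynamic range is exactly the compensation the paragraph describes, and also why it cannot restore impaired frequency resolution.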
Pediatric assessment procedures
Between age six months and two or three years, the measurement technique of choice usually is visual reinforcement audiometry. This involves pairing a head turn response to a sound with an interesting visual reward, such as a flashing light or an animated toy animal. Once this classical conditioning has been established, operant conditioning then takes place, in which a visual reward is presented after an appropriate sound-elicited head turn. This technique is used to determine the minimum response level that will elicit a head turn. Although it is usual to attempt ear-specific measurements in children, in some cases earphones are not tolerated, requiring that the signal be presented from a loudspeaker; this is known as sound field audiometry. In general, infants tend to be more sensitive to high-frequency sounds than low-frequency sounds; whether this is related to physiological development or is associated with other factors, such as infant-directed speech (which is characterized in part by high-frequency pitches), is unclear.
Before six months, behavioral testing is of limited use in determining hearing threshold levels. However, a small amount of sound is generated in the healthy cochlea, and this otoacoustic emission can be measured with a small sensitive microphone in the ear canal. The normal response from a healthy ear forms the basis of a clinical procedure that can be used to screen hearing in a newborn. If no otoacoustic emission can be recorded, event-related potentials (brain activity produced by a sensory or cognitive response to a stimulus) can be used to estimate hearing sensitivity. This involves the measurement of electrical potentials via recording leads attached to the scalp. The method of choice in infants is the auditory brainstem response, because this can be obtained during sleep. A typical procedure is to commence at a high level and reduce this until the evoked response can no longer be detected. The presence of a response is based on the tester’s subjective interpretation of the waveform. Event-related potentials can also be used to estimate hearing sensitivity in adults who are unable or unwilling to provide reliable information via pure-tone audiometry. Newborns who do not pass initial hearing screening may undergo auditory steady state response testing, in which brain activity in the sleeping infant is measured in response to tones of differing frequency and intensity. The presence of a steady state response is determined on the basis of statistical data.
A commonly used procedure that provides information about the condition of the tympanic membrane (eardrum) and the middle ear is known as tympanometry. Tympanometry frequently is used to evaluate the eardrum in children who are prone to ear infections, in which fluid accumulates in the normally air-filled middle ear space. During the procedure, a pure tone is produced, and air pressure is changed inside the ear with a handheld instrument. The procedure is based on the principle that some sound entering the ear canal is reflected back from the eardrum; the reflected sound can be measured with a sensitive microphone. When the eardrum is stiff, air pressure in the ear canal is increased, resulting in an increased reflection of sound by the eardrum. Stiffening of the eardrum is associated with various conditions of the middle ear.
Vestibular assessment
The vestibular system of the inner ear functions in the perception of balance and motion. Sudden changes in the function of the vestibular organ can result in rotatory vertigo, which gives the illusion that the environment is spinning around. Useful information about vestibular function can be obtained by observing eye movements during certain visual and vestibular stimulation. The audiologist is particularly interested in the presence of a slow-quick oscillatory movement of the eyes known as nystagmus. This eye movement will be present spontaneously after a change in vestibular function and may continue for days or weeks until the brain has had time to compensate. Nystagmus may also be provoked by changes in body position, such as rising out of bed in the morning. The sensitivity of the right and left vestibular organs can be compared in a caloric test, in which the external ear canal is irrigated with hot or cold water to induce a response. In vestibular assessment, a force platform may be used to measure body sway, which can provide information about the use of the visual, vestibular, and proprioceptive systems for balance function and postural control.
Rehabilitative procedures generally involve head and eye exercises that aid the central compensation mechanism. In severe cases, surgery may be considered. Surgical procedures to treat vestibular disorders generally are either corrective, attempting to stabilize inner ear function, or destructive, removing the parts of inner ear structures responsible for the patient’s condition. An example of a corrective procedure is endolymphatic sac decompression, which is used to relieve pressure on the vestibular system, particularly in the case of Ménière disease. Examples of destructive procedures include labyrinthectomy (removal of the balance and hearing organs of the inner ear) and vestibular nerve section (the vestibular nerve is cut to prevent the transmission of balance information to the brain).
Additional Information
Hearing loss that comes on little by little as you age, also known as presbycusis, is common. More than half the people in the United States older than age 75 have some age-related hearing loss.
There are three types of hearing loss:
* Conductive, which involves the outer or middle ear.
* Sensorineural, which involves the inner ear.
* Mixed, which is a mix of the two.
Aging and being around loud noises both can cause hearing loss. Other factors, such as too much earwax, can lower how well ears work for a time.
You usually can't get hearing back. But there are ways to improve what you hear.
Symptoms
Symptoms of hearing loss may include:
* Muffling of speech and other sounds.
* Trouble understanding words, especially when in a crowd or a noisy place.
* Trouble hearing the letters of the alphabet that aren't vowels.
* Often asking others to speak more slowly, clearly and loudly.
* Needing to turn up the volume of the television or radio.
* Staying clear of some social settings.
* Being bothered by background noise.
* Ringing in the ears, known as tinnitus.
When to see a doctor
If you have a sudden loss of hearing, particularly in one ear, seek medical attention right away.
Talk to your health care provider if loss of hearing is causing you trouble. Age-related hearing loss happens little by little. So you may not notice it at first.
The ear has three main parts: the outer ear, middle ear and inner ear. Sound waves pass through the outer ear and cause the eardrum to vibrate. The eardrum and three small bones of the middle ear make the vibrations bigger as they travel to the inner ear. There, the vibrations pass through fluid in a snail-shaped part of the inner ear, known as the cochlea.
Attached to nerve cells in the cochlea are thousands of tiny hairs that help turn sound vibrations into electrical signals. The electrical signals are transmitted to the brain. The brain turns these signals into sound.
How hearing loss can occur
Causes of hearing loss include:
* Damage to the inner ear. Aging and loud noise can cause wear and tear on the hairs or nerve cells in the cochlea that send sound signals to the brain. Damaged or missing hairs or nerve cells don't send electrical signals well. This causes hearing loss. Higher pitched tones may seem muffled, and it may be hard to pick out words against background noise.
* Buildup of earwax. Over time, earwax can block the ear canal and keep sound waves from passing through. Earwax removal can help restore hearing.
* Ear infection or unusual bone growths or tumors. In the outer or middle ear, any of these can cause hearing loss.
* Ruptured eardrum, also known as tympanic membrane perforation. Loud blasts of noise, sudden changes in pressure, poking an eardrum with an object and infection can cause the eardrum to burst.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Online
2316) Ornament (Art)
Gist
Ornament is decoration or embellishment. It is any additional detail added to an object, interior or architectural structure which serves no other purpose than to make it more interesting, arresting or beautiful to us.
The concept of ornament emerged from the relation of human beings to their environment. The ornamentation of objects, which aims at adding qualitative features to objects alongside their quantitative states, is a practice as old as humanity.
Ornament: 1. An accessory, article, or detail used to beautify the appearance of something to which it is added or of which it is a part: architectural ornaments. 2. A system, category, or style of such objects or features; ornamentation.
Summary
An ornament, in architecture, is any element added to an otherwise merely structural form, usually for purposes of decoration or embellishment. Three basic and fairly distinct categories of ornament in architecture may be recognized: mimetic, or imitative, ornament, the forms of which have certain definite meanings or symbolic significance; applied ornament, intended to add beauty to a structure but extrinsic to it; and organic ornament, inherent in the building’s function or materials.
Mimetic ornament is by far the most common type of architectural ornament in primitive cultures, in Eastern civilizations, and generally throughout antiquity. It grows out of what seems to be a universal human reaction to technological change: the tendency to use new materials and techniques to reproduce shapes and qualities familiar from past usage, regardless of appropriateness. For example, most common building types in antiquity, such as tombs, pyramids, temples, and towers, began as imitations of primeval house and shrine forms. An obvious example is the dome, which developed as a permanent wooden or stone reproduction of a revered form originally built of pliable materials. In the mature stages of early civilizations, building types tended to evolve past primitive prototypes; their ornament, however, usually remained based on such models. Decorative motifs derived from earlier structural and symbolic forms are innumerable and universal. In developed Indian and Chinese architecture, domical and other originally structural forms occur often and lavishly as ornament. In ancient Egypt, architectural details continued throughout history to preserve faithfully the appearance of bundled papyrus shafts and similar early building forms. In Mesopotamia, brick walls long imitated the effect of primitive mud-and-reed construction. In the carved stone details of Greco-Roman orders (capitals, entablatures, moldings), the precedent of archaic construction in wood was always clearly discernible.
Architectural ornament in classical Greece exemplified the common tendency for mimetic ornament to turn into applied ornament, which lacks either symbolic meaning or reference to the structure on which it is placed. By the 5th century BC in Greece, the details of the orders had largely lost whatever conscious symbolic or structural significance they may have had; they became simply decorative elements extrinsic to the structure. The Doric frieze is a good case: its origin as an imitation of the effect of alternating beam ends and shuttered openings in archaic wood construction remained evident, but it came to be treated as a decorative sheath without reference to the actual structural forms behind. In losing their mimetic character, the details of the Greek orders acquired a new function, however; they served to articulate the building visually, organizing it into a series of coordinated visual units that could be comprehended as an integrated whole, rather than a collection of isolated units. This is the concept of applied decoration which was passed on through the Greco-Roman period. The triumphal arch of Rome, with its system of decorative columns and entablature articulating what is essentially one massive shape, is a particularly good illustration. Most of the great architecture of the Renaissance and Baroque periods depends on applied ornament; to a large extent, the difference between these styles is the difference in decoration.
Judicious and intelligent use of applied ornament remained characteristic of most Western architecture until the 19th century. During the Victorian period, architectural ornament and architectural forms proper tended to part company, to be designed quite independently of each other. Since it became obvious that ornament so conceived served no good purpose at all, a reaction was inevitable; it began to appear in force by the 1870s.
As early as the 1870s H.H. Richardson adopted the Romanesque style less for its historical associations than for the opportunities it afforded him to express the nature and texture of stone. In mature examples of his architecture from the mid-1880s, ornament in the older, applied sense has virtually disappeared, and buildings depend for their aesthetic effect mainly on the inherent qualities of their materials. The generation following Richardson saw a further development of this principle everywhere.
By the early 20th century a preoccupation with the proper function of architectural ornament was characteristic of all advanced architectural thinkers; by the mid-20th century what may be called an organic concept of architectural ornament had been formulated. In the United States Louis Sullivan was the chief contributor to the new architectural expression. Sullivan’s urban architecture was largely based on an emphasis of the dynamic lines and patterns that were produced by modern steel-frame construction, but he retained interspersed bands and patches of naturalistic ornament on parts of his buildings’ facades, applied with studied discipline. With the general reaction against Victorian principles after World War I, however, leading designers rejected even this kind of applied ornament and relied for ornamental effect on the inherent qualities of building materials alone. The International Style, in which Walter Gropius and Le Corbusier were the chief figures, dominated advanced design during the late 1920s and 1930s. During the period of dominance of the austere International Style, which lasted into the 1960s, architectural ornament of almost any kind was absent from the facades of major buildings. It was not until the 1970s, with the advent of the Post-Modernist architectural movement, that the unadorned functionalism of the International Style was moderated to permit a modest use of ornament, including classical motifs.
Details
In architecture and decorative art, ornament is decoration used to embellish parts of a building or object. Large figurative elements such as monumental sculpture and their equivalents in decorative art are excluded from the term; most ornaments do not include human figures, and if present they are small compared to the overall scale. Architectural ornament can be carved from stone, wood or precious metals, formed with plaster or clay, or painted or impressed onto a surface as applied ornament; in other applied arts the main material of the object, or a different one such as paint or vitreous enamel may be used.
A wide variety of decorative styles and motifs have been developed for architecture and the applied arts, including pottery, furniture, and metalwork. In textiles, wallpaper and other objects where the decoration may be the main justification for their existence, the terms pattern or design are more likely to be used. The vast range of motifs used in ornament draws from geometrical shapes and patterns, plants, and human and animal figures. Across Eurasia and the Mediterranean world there has been a rich and linked tradition of plant-based ornament for over three thousand years; traditional ornament from other parts of the world typically relies more on geometrical and animal motifs. The inspiration for the patterns usually lies in the nature that surrounds the people of the region. Many nomadic tribes in Central Asia used animalistic motifs long before the arrival of Islam in the region.
In a 1941 essay, the architectural historian Sir John Summerson called it "surface modulation". The earliest decoration and ornament often survives from prehistoric cultures in simple markings on pottery, where decoration in other materials (including tattoos) has been lost. Where the potter's wheel was used, the technology made some kinds of decoration very easy; weaving is another technology which also lends itself very easily to decoration or pattern, and to some extent dictates its form. Ornament has been evident in civilizations since the beginning of recorded history, ranging from Ancient Egyptian architecture to the assertive lack of ornament of 20th century Modernist architecture. Ornaments also depict a certain philosophy of the people for the world around. For example, in Central Asia among nomadic Kazakhs, the circular lines of the ornaments signalled the sequential perception of time in the wide steppes and the breadth and freedom of space.
Ornament implies that the ornamented object has a function that an unornamented equivalent might also fulfill. Where the object has no such function, but exists only to be a work of art such as a sculpture or painting, the term is less likely to be used, except for peripheral elements. In recent centuries a distinction between the fine arts and applied or decorative arts has been applied (except for architecture), with ornament mainly seen as a feature of the latter class.
History
The history of art in many cultures shows a series of wave-like trends where the level of ornament used increases over a period, before a sharp reaction returns to plainer forms, after which ornamentation gradually increases again. The pattern is especially clear in post-Roman European art, where the highly ornamented Insular art of the Book of Kells and other manuscripts influenced continental Europe, but the classically inspired Carolingian and Ottonian art largely replaced it. Ornament increased over the Romanesque and Gothic periods, but was greatly reduced in Early Renaissance styles, again under classical influence. Another period of increase, in Northern Mannerism, the Baroque and Rococo, was checked by Neoclassicism and the Romantic period, before resuming in the later 19th century Napoleon III style, Victorian decorative arts and their equivalents from other countries, to be decisively reduced by the Arts and Crafts movement and then Modernism.
The detailed study of Eurasian ornamental forms was begun by Alois Riegl in his formalist study Stilfragen: Grundlegungen zu einer Geschichte der Ornamentik (Problems of style: foundations for a history of ornament) of 1893, in which he developed his influential concept of the Kunstwollen. Riegl traced formalistic continuity and development in decorative plant forms from Ancient Egyptian art and other ancient Near Eastern civilizations through the classical world to the arabesque of Islamic art. While the concept of the Kunstwollen has few followers today, his basic analysis of the development of forms has been confirmed and refined by the wider corpus of examples known today. Jessica Rawson has more recently extended the analysis to cover Chinese art, which Riegl did not cover, tracing many elements of Chinese decoration back to the same tradition; the shared background helped make the assimilation of Chinese motifs into Persian art after the Mongol invasion harmonious and productive.
Styles of ornamentation can be studied in reference to the specific culture which developed unique forms of decoration, or modified ornament from other cultures. The Ancient Egyptian culture is arguably the first civilization to add pure decoration to their buildings. Their ornament takes the forms of the natural world in that climate, decorating the capitals of columns and walls with images of papyrus and palm trees. Assyrian culture produced ornament which shows influence from Egyptian sources and a number of original themes, including figures of plants and animals of the region.
The Ancient Greek civilization created many new forms of ornament, which were diffused across Eurasia, helped by the conquests of Alexander the Great, and the expansion of Buddhism, which took some motifs to East Asia in somewhat modified form. In the West the Ancient Roman Latinized forms of the Greek ornament lasted for around a millennium, and after a period when they were replaced by Gothic forms, powerfully revived in the Italian Renaissance and remain extremely widely used today.
Roman ornament
Ornament in the Roman empire utilized a diverse array of styles and materials, including marble, glass, obsidian, and gold. Roman ornament, specifically in the context of Pompeii, has been studied and written about by scholar Jessica Powers in her book chapter "Beyond Painting in Pompeii's Houses: Wall Ornaments and Their Patrons." Instead of studying ornamental objects in isolation, Powers argues that, if the information is provided, objects must be approached in their original context. This information might include the location where the work was found, other objects located or found nearby, or who the patron was who might have commissioned the work. Jessica Powers' chapter primarily discusses the Casa Degli Amorini Dorati in Pompeii, where 18 wall ornaments were found, the most of any Pompeiian home. Interior wall ornament in a Pompeian home would typically divide the wall into three or more sections under which there would be a dado taking up roughly one-sixth of the height of the wall. The wall sections would be divided by broad pilasters connected by a frieze which bands across the top of the wall. The ornament found at the Casa Degli Amorini Dorati in Pompeii reflected this standard style and included objects that had clearly been reused, and rare and imported objects. Several of the panels on the walls of the Casa Degli Amorini Dorati were removed during archeological work in the 1970s, revealing that the panels had been stuck on different walls before the one on which they were found. Jessica Powers argues that these panels illustrate the home owner and correlating patrons' willingness to utilize damaged or secondhand materials in their own home. Moreover, the materials used in the decorative wall panels were identified as being from the Greek East or Egypt, not from Pompeii. 
This points to the elaborate trade routes that flourished across the Roman Empire, and that home owners were interested in using materials from outside of Pompeii to embellish their homes.
In addition to homes, public buildings and temples are locations where Roman ornament styles were on display. In the Roman temple, the extravagant use of ornament served as a means of self-glorification, as scholar Owen Jones notes in his book chapter, Roman Ornament. Roman ornament techniques include surface-modeling, where ornamental styles are applied onto a surface. This was a common ornamental style with marble surfaces. One common ornamental style was the use of acanthus leaf, a motif adopted from the Greeks. The use of acanthus leaf and other naturalist motifs can be seen in Corinthian capitals, in temples, and in other public sites.
Modern ornament
Modern millwork ornaments are made of wood, plastics, composites, etc. They come in many different colours and shapes. Modern architecture, conceived of as the elimination of ornament in favor of purely functional structures, left architects the problem of how to properly adorn modern structures. There were two available routes from this perceived crisis. One was to attempt to devise an ornamental vocabulary that was new and essentially contemporary. This was the route taken by architects like Louis Sullivan and his pupil Frank Lloyd Wright, or by the unique Antoni Gaudí. Art Nouveau, popular around the turn of the 20th century, was in part a conscious effort to evolve such a "natural" vocabulary of ornament.
A more radical route abandoned the use of ornament altogether, as in some designs for objects by Christopher Dresser. At the time, such unornamented objects could have been found in many unpretending workaday items of industrial design, ceramics produced at the Arabia manufactory in Finland, for instance, or the glass insulators of electric lines.
This latter approach was described by architect Adolf Loos in his 1908 manifesto, translated into English in 1913 and polemically titled Ornament and Crime, in which he declared that lack of decoration is the sign of an advanced society. His argument was that ornament is economically inefficient and "morally degenerate", and that reducing ornament was a sign of progress. Modernists were eager to point to American architect Louis Sullivan as their godfather in the cause of aesthetic simplification, dismissing the knots of intricately patterned ornament that articulated the skin of his structures.
With the work of Le Corbusier and the Bauhaus through the 1920s and 1930s, lack of decorative detail became a hallmark of modern architecture and was equated with the moral virtues of honesty, simplicity, and purity. In 1932 Philip Johnson and Henry-Russell Hitchcock dubbed this the "International Style". What began as a matter of taste was transformed into an aesthetic mandate. Modernists declared their way as the only acceptable way to build. As the style hit its stride in the highly developed postwar work of Mies van der Rohe, the tenets of 1950s modernism became so strict that even accomplished architects like Edward Durell Stone and Eero Saarinen could be ridiculed and effectively ostracized for departing from the aesthetic rules.
At the same time, the unwritten laws against ornament began to come into serious question. "Architecture has, with some difficulty, liberated itself from ornament, but it has not liberated itself from the fear of ornament," John Summerson observed in 1941.
The very difference between ornament and structure is subtle and perhaps arbitrary. The pointed arches and flying buttresses of Gothic architecture are ornamental but structurally necessary; the colorful rhythmic bands of a Pietro Belluschi International Style skyscraper are integral, not applied, but certainly have ornamental effect. Furthermore, architectural ornament can serve the practical purpose of establishing scale, signaling entries, and aiding wayfinding, and these useful design tactics had been outlawed. And by the mid-1950s, modernist figureheads Le Corbusier and Marcel Breuer had been breaking their own rules by producing highly expressive, sculptural concrete work.
The argument against ornament peaked in 1959 over discussions of the Seagram Building, where Mies van der Rohe installed a series of structurally unnecessary vertical I-beams on the outside of the building, and by 1984, when Philip Johnson produced his AT&T Building in Manhattan with an ornamental pink granite neo-Georgian pediment, the argument was effectively over. In retrospect, critics have seen the AT&T Building as the first Postmodernist building.
2317) Drought
Gist
Drought is a prolonged dry period in the natural climate cycle that can occur anywhere in the world. It is a slow-onset disaster characterized by the lack of precipitation, resulting in a water shortage. Drought can have a serious impact on health, agriculture, economies, energy and the environment.
Summary
A drought is a period of time when an area or region experiences below-normal precipitation. The lack of adequate precipitation, either rain or snow, can cause reduced soil moisture or groundwater, diminished stream flow, crop damage, and a general water shortage. Droughts are the second-most costly weather events after hurricanes.
Unlike with sudden weather events such as hurricanes, tornadoes, and thunderstorms, it is often difficult to pinpoint when a drought has started or when it has ended. The initial effects of a drought may be difficult to identify right away, so it may take weeks or months to determine that a drought has started. The end of a drought is hard to identify for the same reason. A drought may last for weeks, months, or even years. Sometimes, drought conditions can exist for a decade or more in a region. The longer a drought lasts, the greater the harmful effects it has on people.
Droughts affect people in several ways. Access to clean drinking water is essential for all life, and sources of water may dwindle during a drought. When local sources run short, people must bring in enough water from elsewhere to survive. Water is also needed for crops to grow. When not enough precipitation falls to naturally water crops, they must be watered by irrigation. Irrigation is possible only when there is enough water in nearby rivers, lakes, or streams, or from groundwater. During a drought, these water sources are diminished and may even dry up, preventing crops from being irrigated and causing them to die off.
One person studying these problems is Alexandra Cousteau, a National Geographic Emerging Explorer whose latest initiative is Blue Legacy. She started Blue Legacy to raise awareness that we live on a water planet and must take care of it. Cousteau, the granddaughter of the famed ocean explorer Jacques Cousteau, believes that water will be a crucial issue in this century. She predicts that water problems such as drought, storms, floods, and degraded water quality will create “water refugees”: people migrating in search of water. Cousteau stresses that we must do all we can to protect Earth’s valuable freshwater resources.
Details
A drought is a period of drier-than-normal conditions. A drought can last for days, months or years. Drought often has large impacts on the ecosystems and agriculture of affected regions, and causes harm to the local economy. Annual dry seasons in the tropics significantly increase the chances of a drought developing, with subsequent increased wildfire risks. Heat waves can significantly worsen drought conditions by increasing evapotranspiration. This dries out forests and other vegetation, and increases the amount of fuel for wildfires.
Drought is a recurring feature of the climate in most parts of the world, and dendrochronological studies indicate that, with climate change, it has been becoming more extreme and less predictable since about 1900. Drought effects fall into three kinds: environmental, economic, and social. Environmental effects include the drying of wetlands, more and larger wildfires, and loss of biodiversity.
Economic impacts include disruption of water supplies for people and reduced agricultural productivity, and therefore more expensive food production. Another impact is shortages of water for irrigation or hydropower. Social and health costs include the negative effects on the health of people directly exposed to the phenomenon (excessive heat waves), high food costs, stress caused by failed harvests, water scarcity, and so on. Prolonged droughts have caused mass migrations and humanitarian crises.
Examples of regions with increased drought risk are the Amazon basin, Australia, the Sahel region, and India. In 2005, parts of the Amazon basin experienced the worst drought in 100 years. Australia could experience more severe and more frequent droughts in the future, a government-commissioned report said on July 6, 2008. The long Australian Millennial drought broke in 2010. The 2020–2022 Horn of Africa drought surpassed the severe drought of 2010–2011 in both duration and severity. More than 150 districts in India are drought vulnerable, mostly concentrated in the states of Rajasthan, Gujarat, Madhya Pradesh and adjoining Chhattisgarh, Uttar Pradesh, northern Karnataka, and adjoining Maharashtra.
Throughout history, humans have usually viewed droughts as disasters because of their impact on food availability and the rest of society. People have explained drought as a natural disaster, as something influenced by human activity, or as the work of supernatural forces.
Definition
The IPCC Sixth Assessment Report defines a drought simply as "drier than normal conditions". This means that a drought is "a moisture deficit relative to the average water availability at a given location and season".
According to the National Integrated Drought Information System, a multi-agency partnership, drought is generally defined as "a deficiency of precipitation over an extended period of time (usually a season or more), resulting in a water shortage". The National Weather Service, part of NOAA, defines drought as "a deficiency of moisture that results in adverse impacts on people, animals, or vegetation over a sizeable area".
Drought is a complex phenomenon, relating to the absence of water, that is difficult to monitor and define. By the early 1980s, over 150 definitions of "drought" had already been published. The range of definitions reflects differences in regions, needs, and disciplinary approaches.
Categories
There are three major categories of drought based on where in the water cycle the moisture deficit occurs: meteorological drought, hydrological drought, and agricultural or ecological drought. A meteorological drought occurs due to lack of precipitation. A hydrological drought is related to low runoff, streamflow, and reservoir storage. An agricultural or ecological drought causes plant stress through a combination of evaporation and low soil moisture. Some organizations add another category: socioeconomic drought, which occurs when the demand for an economic good exceeds supply as a result of a weather-related shortfall in water supply. Socioeconomic drought is similar in concept to water scarcity.
The different categories of droughts have different causes but similar effects:
1. Meteorological drought occurs when there is a prolonged time with less than average precipitation. Meteorological drought usually precedes the other kinds of drought. As a drought persists, the conditions surrounding it gradually worsen and its impact on the local population gradually increases.
2. Hydrological drought is brought about when the water reserves available in sources such as aquifers, lakes and reservoirs fall below a locally significant threshold. Hydrological drought tends to show up more slowly because it involves stored water that is used but not replenished. Like an agricultural drought, this can be triggered by more than just a loss of rainfall. For instance, around 2007 Kazakhstan was awarded a large amount of money by the World Bank to restore water that had been diverted to other nations from the Aral Sea under Soviet rule. Similar circumstances also place the country's largest lake, Balkhash, at risk of completely drying out.
3. Agricultural or ecological droughts affect crop production or ecosystems in general. This condition can also arise independently from any change in precipitation levels when either increased irrigation or soil conditions and erosion triggered by poorly planned agricultural endeavors cause a shortfall in water available to the crops.
Indices and monitoring
Several indices have been defined to quantify and monitor drought at different spatial and temporal scales. A key property of drought indices is their spatial comparability, and they must be statistically robust. Drought indices include:
* Palmer drought index (sometimes called the Palmer drought severity index (PDSI)): a regional drought index commonly used for monitoring drought events and studying areal extent and severity of drought episodes. The index uses precipitation and temperature data to study moisture supply and demand using a simple water balance model.
* Keetch-Byram Drought Index: an index that is calculated based on rainfall, air temperature, and other meteorological factors.
* Standardized precipitation index (SPI): It is computed based on precipitation, which makes it a simple and easy-to-apply indicator for monitoring and prediction of droughts in different parts of the world. The World Meteorological Organization recommends this index for identifying and monitoring meteorological droughts in different climates and time periods.
* Standardized Precipitation Evapotranspiration Index (SPEI): a multiscalar drought index based on climatic data. The SPEI accounts also for the role of the increased atmospheric evaporative demand on drought severity. Evaporative demand is particularly dominant during periods of precipitation deficit. The SPEI calculation requires long-term and high-quality precipitation and atmospheric evaporative demand datasets. These can be obtained from ground stations or gridded data based on reanalysis as well as satellite and multi-source datasets.
* Indices related to vegetation: root-zone soil moisture, vegetation condition index (VCI) and vegetation health index (VHI). The VCI and VHI are computed based on vegetation indices such as the normalized difference vegetation index (NDVI) and temperature datasets.
* Deciles index
* Standardized runoff index
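As a concrete illustration of how a precipitation-based index works, the sketch below standardizes a hypothetical monthly rainfall record. The real SPI first fits a gamma distribution to the record and maps it through the standard normal; the plain z-score here is a simplified stand-in, and the rainfall figures are invented for illustration.

```python
from statistics import mean, stdev

def standardized_anomaly(precip):
    """Z-score of each total against the whole record.
    Negative values indicate drier-than-normal periods, the same
    convention the SPI uses (SPI <= -1 is often read as drought)."""
    m, s = mean(precip), stdev(precip)
    return [(p - m) / s for p in precip]

record = [80, 95, 70, 100, 85, 90, 30]   # hypothetical monthly totals, mm
scores = standardized_anomaly(record)
# the 30 mm month yields the most negative score, flagging it as driest
```

The deciles index works in a similar spirit, ranking each period's total against the climatological record instead of standardizing it.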
High-resolution drought information helps to better assess the spatial and temporal changes and variability in drought duration, severity, and magnitude at a much finer scale. This supports the development of site-specific adaptation measures.
The application of multiple indices using different datasets helps to better manage and monitor droughts than using a single dataset. This is particularly the case in regions of the world where not enough data is available, such as Africa and South America. A single dataset can be limiting, as it may not capture the full spectrum of drought characteristics and impacts.
Careful monitoring of moisture levels can also help predict increased risk for wildfires.
Additional Information
A drought is a lack or insufficiency of rain for an extended period that causes a considerable hydrologic (water) imbalance and, consequently, water shortages, crop damage, streamflow reduction, and depletion of groundwater and soil moisture. It occurs when evaporation and transpiration (the movement of water in the soil through plants into the air) exceed precipitation for a considerable period. Drought is the most serious physical hazard to agriculture in nearly every part of the world. Efforts have been made to control it by seeding clouds to induce rainfall, but these experiments have had only limited success.
There are four basic kinds of drought:
1. Permanent drought characterizes the driest climates. The sparse vegetation is adapted to aridity, and agriculture is impossible without continuous irrigation.
2. Seasonal drought occurs in climates that have well-defined annual rainy and dry seasons. For successful agriculture, planting must be adjusted so that the crops develop during the rainy season.
3. Unpredictable drought involves an abnormal rainfall failure. It may occur almost anywhere but is most characteristic of humid and subhumid climates. Usually brief and irregular, it often affects only a relatively small area. However, ongoing large-scale droughts of this kind are possible, especially in drier regions with several subsequent years of inadequate rainfall or snowpack.
4. Invisible drought can also be recognized: in summer, when high temperatures induce high rates of evaporation and transpiration, even frequent showers may not supply enough water to restore the amount lost; the result is a borderline water deficiency that diminishes crop yields.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Online
2318) Fever
Gist
Fever is a rise in body temperature above the normal temperature, usually caused by infection. Normal body temperature is around 37°C (give or take a degree, but this can vary from person to person). There may also be minor fluctuations over the course of the day and night.
Summary
The fever triggered by a viral or bacterial infection is caused by chemicals produced by the immune system, which reset the body’s thermostat to a higher level.
Contrary to popular belief, the severity of fever isn’t necessarily related to the seriousness of the illness – for example, life-threatening meningitis might only cause a small temperature rise.
Most cases of mild fever resolve by themselves within a couple of days. A mild fever (up to 39°C) can actually help the immune system to get rid of an infection. In children between the ages of 6 months and 6 years, fever can trigger convulsions. A fever of 42.4°C or higher, particularly in the elderly, can permanently damage the brain.
Symptoms of fever
The symptoms of fever can include:
* feeling unwell
* feeling hot and sweaty
* shivering or shaking
* chattering teeth
* flushed face.
Infection is usually the cause of fever
The cause of fever is usually an infection of some kind. This could include:
* diseases caused by viruses – such as colds, flu, COVID-19 or other upper respiratory tract infections
* diseases caused by bacteria – such as tonsillitis, pneumonia or urinary tract infections
* some chronic illnesses – such as rheumatoid arthritis and ulcerative colitis can cause fevers that last for longer periods
* some tropical diseases – such as malaria, which can cause bouts of recurring fever, or typhoid fever
* heat stroke – which includes fever (without sweating) as one of its symptoms
* drugs – some people may be susceptible to fever as a side effect of particular drugs.
Self-treatment suggestions for fever
Suggestions to treat fever include:
* Take paracetamol or ibuprofen in appropriate doses to help bring your temperature down.
* Drink plenty of fluids, particularly water.
* Avoid alcohol, tea and coffee as these drinks can cause slight dehydration.
* Sponge exposed skin with tepid water. To boost the cooling effect of evaporation, you could try standing in front of a fan.
* Avoid taking cold baths or showers. Skin reacts to the cold by constricting its blood vessels, which will trap body heat. The cold may also cause shivering, which can generate more heat.
* Make sure you have plenty of rest, including bed rest.
Details
Fever, or pyrexia, is a symptom of the body's anti-infection defense mechanism: body temperature rises above the normal range because of an increase in the temperature set point in the hypothalamus. There is no single agreed-upon upper limit for normal temperature: sources use values ranging between 37.2 and 38.3 °C (99.0 and 100.9 °F) in humans.
The increase in set point triggers increased muscle contractions and causes a feeling of cold or chills. This results in greater heat production and efforts to conserve heat. When the set point temperature returns to normal, a person feels hot, becomes flushed, and may begin to sweat. Rarely a fever may trigger a febrile seizure, with this being more common in young children. Fevers do not typically go higher than 41 to 42 °C (106 to 108 °F).
A fever can be caused by many medical conditions ranging from non-serious to life-threatening. This includes viral, bacterial, and parasitic infections—such as influenza, the common cold, meningitis, urinary tract infections, appendicitis, Lassa fever, COVID-19, and malaria. Non-infectious causes include vasculitis, deep vein thrombosis, connective tissue disease, side effects of medication or vaccination, and cancer. It differs from hyperthermia, in that hyperthermia is an increase in body temperature over the temperature set point, due to either too much heat production or not enough heat loss.
Treatment to reduce fever is generally not required. Treatment of associated pain and inflammation, however, may be useful and help a person rest. Medications such as ibuprofen or paracetamol (acetaminophen) may help with this as well as lower temperature. Children younger than three months require medical attention, as might people with serious medical problems such as a compromised immune system or people with other symptoms. Hyperthermia requires treatment.
Fever is one of the most common medical signs. It is part of about 30% of healthcare visits by children and occurs in up to 75% of adults who are seriously sick. While fever evolved as a defense mechanism, treating a fever does not appear to improve or worsen outcomes. Fever is often viewed with greater concern by parents and healthcare professionals than is usually deserved, a phenomenon known as "fever phobia."
Associated symptoms
A fever is usually accompanied by sickness behavior, which consists of lethargy, depression, loss of appetite, sleepiness, hyperalgesia, dehydration, and the inability to concentrate. Sleeping with a fever can often cause intense or confusing nightmares, commonly called "fever dreams". Mild to severe delirium (which can also cause hallucinations) may also present itself during high fevers.
Diagnosis
Normal temperatures span a range rather than a single value. Central temperatures, such as rectal temperatures, are more accurate than peripheral temperatures. Fever is generally agreed to be present if the elevated temperature is caused by a raised set point and:
* Rectal temperature is at or over 37.5–38.3 °C (99.5–100.9 °F). An ear (tympanic) or forehead (temporal) temperature may also be used.
* Temperature in the mouth (oral) is at or over 37.2 °C (99.0 °F) in the morning or over 37.7 °C (99.9 °F) in the afternoon.
* Temperature under the arm (axillary) is usually about 0.6 °C (1.1 °F) below core body temperature.
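The site-dependent cutoffs above can be collected into a small lookup. This is a sketch only: the exact thresholds vary between sources (the text itself quotes a range for rectal readings), so the values chosen here are illustrative, not clinical guidance.

```python
# Illustrative thresholds drawn from the ranges quoted above; real
# clinical cutoffs vary by source, patient, and time of day.
FEVER_THRESHOLDS_C = {
    "rectal": 38.0,     # within the quoted 37.5-38.3 C range
    "oral": 37.7,       # the quoted afternoon oral cutoff
    "axillary": 37.4,   # rectal-style cutoff minus the ~0.6 C underarm offset
}

def is_fever(temp_c, site):
    """Return True when a reading at the given site meets the cutoff."""
    return temp_c >= FEVER_THRESHOLDS_C[site]
```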
In adults, the normal range of oral temperatures in healthy individuals is 35.7–37.7 °C (96.3–99.9 °F) among men and 33.2–38.1 °C (91.8–100.6 °F) among women, while when taken rectally it is 36.7–37.5 °C (98.1–99.5 °F) among men and 36.8–37.1 °C (98.2–98.8 °F) among women, and for ear measurement it is 35.5–37.5 °C (95.9–99.5 °F) among men and 35.7–37.5 °C (96.3–99.5 °F) among women.
Normal body temperatures vary depending on many factors, including age, gender, time of day, ambient temperature, activity level, and more. Normal daily temperature variation has been described as 0.5 °C (0.9 °F). A raised temperature is not always a fever. For example, the temperature rises in healthy people when they exercise, but this is not considered a fever, as the set point is normal. On the other hand, a "normal" temperature may be a fever, if it is unusually high for that person; for example, medically frail elderly people have a decreased ability to generate body heat, so a "normal" temperature of 37.3 °C (99.1 °F) may represent a clinically significant fever.
Hyperthermia
Hyperthermia is an elevation of body temperature over the temperature set point, due to either too much heat production or not enough heat loss. Hyperthermia is thus not considered fever. Hyperthermia should not be confused with hyperpyrexia (which is a very high fever).
Clinically, it is important to distinguish between fever and hyperthermia as hyperthermia may quickly lead to death and does not respond to antipyretic medications. The distinction may however be difficult to make in an emergency setting, and is often established by identifying possible causes.
Additional Information
Fever is abnormally high body temperature. Fever is a characteristic of many different diseases. For example, although most often associated with infection, fever is also observed in other pathologic states, such as cancer, coronary artery occlusion, and certain disorders of the blood. It also may result from physiological stresses, such as strenuous exercise or ovulation, or from environmentally induced heat exhaustion or heat stroke.
Under normal conditions, the temperature of deeper portions of the head and trunk does not vary by more than 1–2 °F in a day, and it does not exceed 99 °F (37.22 °C) in the mouth or 99.6 °F (37.55 °C) in the rectum. Fever can be defined as any elevation of body temperature above the normal level. Persons with fever may experience daily fluctuations of 5–9 °F above normal; peak levels tend to occur in the late afternoon. Mild or moderate states of fever (up to 105 °F [40.55 °C]) cause weakness or exhaustion but are not in themselves a serious threat to health. More serious fevers, in which body temperature rises to 108 °F (42.22 °C) or more, can result in convulsions and death.
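The Fahrenheit figures above convert to the quoted Celsius values by the standard formula C = (F - 32) * 5/9; a one-line helper makes the arithmetic explicit:

```python
def f_to_c(f):
    """Standard Fahrenheit-to-Celsius conversion."""
    return (f - 32) * 5 / 9

print(round(f_to_c(99), 2))   # 37.22, the quoted oral upper limit
print(round(f_to_c(108), 2))  # 42.22, the quoted danger threshold
```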
During fever the blood and urine volumes become reduced as a result of loss of water through increased perspiration. Body protein is rapidly broken down, leading to increased excretion of nitrogenous products in the urine. When the body temperature is rising rapidly, the affected person may feel chilly or even have a shaking chill; conversely, when the temperature is declining rapidly, the person may feel warm and have a flushed moist skin.
In treating fever, it is important to determine the underlying cause of the condition. In general, in the case of infection, low-grade fevers may be best left untreated in order to allow the body to fight off infectious microorganisms on its own. However, higher fevers may be treated with acetaminophen or ibuprofen, which exerts its effect on the temperature-regulating areas of the brain.
The mechanism of fever appears to be a defensive reaction by the body against infectious disease. When bacteria or viruses invade the body and cause tissue injury, one of the immune system’s responses is to produce pyrogens. These chemicals are carried by the blood to the brain, where they disturb the functioning of the hypothalamus, the part of the brain that regulates body temperature. The pyrogens inhibit heat-sensing neurons and excite cold-sensing ones, and the altering of these temperature sensors deceives the hypothalamus into thinking the body is cooler than it actually is. In response, the hypothalamus raises the body’s temperature above the normal range, thereby causing a fever. The above-normal temperatures are thought to help defend against microbial invasion because they stimulate the motion, activity, and multiplication of white blood cells and increase the production of antibodies. At the same time, elevated heat levels may directly kill or inhibit the growth of some bacteria and viruses that can tolerate only a narrow temperature range.
2319) Swimming
Gist
Swimming is the self-propulsion of a person through water, or other liquid, usually for recreation, sport, exercise, or survival. Locomotion is achieved through coordinated movement of the limbs and the body to achieve hydrodynamic thrust that results in directional motion.
Swimming has many other benefits including:
* being a relaxing and peaceful form of exercise.
* alleviating stress.
* improving coordination, balance and posture.
* improving flexibility.
* providing good low-impact therapy for some injuries and conditions.
* providing a pleasant way to cool down on a hot day.
A recent study showed that swimming two to three times a week reduces the risk of heart disease in older adults. In addition to raising your heart rate, swimming regularly can help reduce body fat, build physical strength and help improve or maintain bone health in post-menopausal women.
Summary
Swimming, in recreation and sports, is the propulsion of the body through water by combined arm and leg motions and the natural flotation of the body. Swimming as an exercise is popular as an all-around body developer and is particularly useful in therapy and as exercise for physically handicapped persons. It is also taught for lifesaving purposes. Moreover, swimming is practiced as a competitive sport and is one of the top audience draws at the Olympic Games.
Competitive swimming
Internationally, competitive swimming came into prominence with its inclusion in the modern Olympic Games from their inception in 1896. Olympic events were originally only for men, but women's events were added in 1912. Before the formation of FINA (the Fédération Internationale de Natation, swimming's world governing body), the Games included some unusual events. In 1900, for instance, when the Games' swimming events were held on the Seine River in France, a 200-meter obstacle race involved climbing over a pole and a line of boats and swimming under them. Such oddities disappeared after FINA took charge. Under FINA regulations, for both Olympic and other world competition, race lengths came increasingly to be measured in meters, and in 1969 world records for yard-measured races were abolished. The kinds of strokes allowed were reduced to freestyle (crawl), backstroke, breaststroke, and butterfly. All four strokes were used in individual medley races. Many countries have at one time or another dominated Olympic and world competition, including Hungary, Denmark, Australia, Germany, France, Great Britain, Canada, Japan, and the United States.
Instruction and training
The earliest instruction programs were in Great Britain in the 19th century, both for sport and for lifesaving. Those programs were copied in the rest of Europe. In the United States swimming instruction for lifesaving purposes began under the auspices of the American Red Cross in 1916. Instructional work done by the various branches of the armed forces during both World Wars I and II was very effective in promoting swimming. Courses taught by community organizations and schools, extending ultimately to very young infants, became common.
The early practice of simply swimming as much as possible at every workout was replaced by interval training and repeat training by the late 1950s. Interval training consists of a series of swims of the same distance with controlled rest periods. In slow interval training, used primarily to develop endurance, the rest period is always shorter than the time taken to swim the prescribed distance. Fast interval training, used primarily to develop speed, permits rest periods long enough to allow almost complete recovery of the heart and breathing rate.
The increased emphasis on international competition led to the growing availability of 50-meter (164-foot) pools. Other adjuncts that improved both training and performance included wave-killing gutters for pools, racing lane markers that also reduce turbulence, cameras for underwater study of strokes, large clocks visible to swimmers, and electrically operated touch and timing devices. Since 1972 all world records have been expressed in hundredths of a second. Advances in swimsuit technology reached a head at the 2008 Olympic Games in Beijing, where swimmers—wearing high-tech bodysuits that increased buoyancy and decreased water resistance—broke 25 world records. After another round of record-shattering times at the 2009 world championships, FINA banned such bodysuits, for fear that they augmented a competitor’s true ability.
Details
Swimming is the self-propulsion of a person through water, or other liquid, usually for recreation, sport, exercise, or survival. Locomotion is achieved through coordinated movement of the limbs and the body to achieve hydrodynamic thrust that results in directional motion. Humans can hold their breath underwater and undertake rudimentary locomotive swimming within weeks of birth, as a survival response. Swimming requires stamina, skills, and proper technique.
Swimming is a popular activity and competitive sport where certain techniques are deployed to move through water. It offers numerous health benefits, such as strengthened cardiovascular health, muscle strength, and increased flexibility. It is suitable for people of all ages and fitness levels.
Swimming is consistently among the top public recreational activities, and in some countries, swimming lessons are a compulsory part of the educational curriculum. As a formalized sport, swimming is featured in various local, national, and international competitions, including every modern Summer Olympics.
Swimming involves repeated motions known as strokes to propel the body forward. While the front crawl, also known as freestyle, is widely regarded as the fastest of the four main strokes, other strokes are practiced for special purposes, such as training.
Swimming comes with certain risks, mainly because of the aquatic environment where it takes place. By way of example, swimmers may find themselves incapacitated by panic and exhaustion, both potential causes of death by drowning. Other dangers may arise from exposure to infection or hostile aquatic fauna. To minimize such eventualities, most facilities employ a lifeguard to keep alert for any signs of distress.
Swimmers often wear specialized swimwear, although depending on the area's culture, some swimmers may also swim nude or wear their day attire. In addition, a variety of equipment can be used to enhance the swimming experience or performance, including but not limited to the use of swimming goggles, floatation devices, swim fins, and snorkels.
Science
Swimming relies on the nearly neutral buoyancy of the human body. On average, the body has a relative density of 0.98 compared to water, which causes the body to float. However, buoyancy varies based on body composition, lung inflation, muscle and fat content, centre of gravity and the salinity of the water. Higher levels of body fat and saltier water both lower the relative density of the body and increase its buoyancy. Because they tend to have a lower centre of gravity and higher muscle content, human males find it more difficult to float or be buoyant. See also: Hydrostatic weighing.
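By Archimedes' principle, a floating body displaces its own weight of water, so the submerged fraction equals the ratio of body density to water density. The sketch below applies this to the relative density of 0.98 quoted above; the seawater density of 1.025 is a typical assumed value, not a figure from the text.

```python
def fraction_submerged(body_density, water_density):
    """Archimedes: submerged fraction = density ratio, capped at 1.0
    (a body denser than the water sinks entirely)."""
    return min(body_density / water_density, 1.0)

print(round(fraction_submerged(0.98, 1.000), 3))  # 0.98 in fresh water
print(round(fraction_submerged(0.98, 1.025), 3))  # 0.956 in seawater
```

The lower submerged fraction in salt water is why floating feels noticeably easier in the sea than in a pool.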
Since the human body is less dense than water, water can support the body's weight during swimming. As a result, swimming is "low-impact" compared to land activities such as running. The density and viscosity of water also create resistance for objects moving through the water. Swimming strokes use this resistance to create propulsion, but this same resistance also generates drag on the body.
Hydrodynamics is important to stroke technique for swimming faster, and swimmers who want to swim faster or exhaust less try to reduce the drag of the body's motion through the water. To be more hydrodynamically effective, swimmers can either increase the power of their strokes or reduce water resistance. However, power must increase by a factor of three to achieve the same effect as reducing resistance. Efficient swimming by reducing water resistance involves a horizontal water position, rolling the body to reduce the breadth of the body in the water, and extending the arms as far as possible to reduce wave resistance.
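The cost of swimming faster can be made concrete. Under the standard drag model, resistance grows roughly with the square of speed, so the propulsive power required (force times speed) grows roughly with the cube; the helper below is a sketch under that assumption.

```python
def power_ratio(speed_ratio):
    """Factor by which propulsive power must rise to multiply speed
    by speed_ratio, assuming drag proportional to speed squared."""
    return speed_ratio ** 3

print(round(power_ratio(1.10), 2))  # 1.33: ~33% more power for 10% more speed
```

This cubic relationship is why reducing drag (body position, streamlining) pays off far more than simply pulling harder.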
Just before plunging into the pool, swimmers may perform exercises such as squatting. Squatting helps enhance a swimmer's start by warming up the thigh muscles.
Infant swimming
Human babies demonstrate an innate swimming or diving reflex from newborn until approximately ten months. Other mammals also demonstrate this phenomenon (see mammalian diving reflex). The diving response involves apnea, reflex bradycardia, and peripheral vasoconstriction; in other words, babies immersed in water spontaneously hold their breath, slow their heart rate, and reduce blood circulation to the extremities (fingers and toes). Because infants are innately able to swim, classes for babies about six months old are offered in many locations. This helps build muscle memory and makes strong swimmers from a young age.
Technique
Swimming can be undertaken using a wide range of styles, known as 'strokes', which are used for different purposes or to distinguish between classes in competitive swimming. Using a defined stroke for propulsion through the water is not necessary; untrained swimmers may use a 'doggy paddle' of arm and leg movements, similar to how four-legged animals swim.
Four main strokes are used in competition and recreational swimming: the front crawl, breaststroke, backstroke, and butterfly. Competitive swimming in Europe started around 1800, mostly using the breaststroke. In 1873, John Arthur Trudgen introduced the trudgen to Western swimming competitions. The butterfly, which began as the current breaststroke arm action combined with what is now the butterfly kick, was developed in the 1930s and was considered a variant of the breaststroke until it was accepted as a separate style in 1953. Many people consider butterfly the hardest stroke, but it is the most effective for all-around toning and muscle building. It also burns the most calories and can be the second-fastest stroke if practiced regularly.
In non-competitive swimming, other strokes such as the sidestroke are also used. Toward the end of the 19th century, overarm variants of the sidestroke changed the traditional pattern by raising one arm above the water first, then the other, and then each in turn. The sidestroke is still used in lifesaving and recreational swimming.
Other strokes exist for particular reasons, such as training, school lessons, and rescue, and it is often possible to change strokes to avoid using parts of the body, either to separate specific body parts, such as swimming with only arms or legs to exercise them harder, or for amputees or those affected by paralysis.
Additional Information
When it comes to exercise, there's one more effective than weight training and running that you're probably forgetting about (unless it's the Olympics). Yes, we're talking about that sport. The activity Michael Phelps has his name stamped on is actually the best exercise anyone can start.
Lap swimming, done in a pool with designated lanes if possible, is what we're talking about. Swimming back and forth is nothing like being on the repetitive "dreadmill": it's more fun, carries a much smaller chance of injury, and is essentially a life skill.
Plus, it’s the perfect way to cool off in the summer heat or get in an effective indoor workout during the snowy winter months.
Swimming is the easiest way to get a full-body workout
“You can get any type of cardio workout that you need in the pool and have little or no impact on your joints,” explains Ian Rose, director of aquatics at East Bank Club in Chicago.
“If you have a good technique in your swim stroke, you can safely perform all of the cardio that any goal requires without doing damage to your body,” he explains. “Other exercises come with a list of potential long-term negative effects.”
The low-impact nature of the sport is one reason many athletes actually turn to swimming — or aqua jogging — when recovering from a running or cycling injury. Because of the effectiveness of the workout, athletes actually don’t miss out on any strength or endurance work they’d be getting in other sports.
“Swimming fires up more of your body’s major muscle groups than other forms of cardio exercise,” adds Natasha Van Der Merwe, director of triathlon at Austin Aquatics and Sports Academy in Austin, Texas. “Swimming not only engages your legs, but also recruits your upper body and core, especially your lats — the muscles of your middle back — and triceps,” she explains. Certain movements like dolphin kicks, flutter kicks, and more can help strengthen your core.
And your lungs also really benefit from this sport. In fact, a 2016 study notes that swimmers tend to have stronger lungs than other athletes.
But just because the sport benefits your lungs the most doesn’t mean it comes without warnings.
Another study cautioned that competitive swimmers who train indoors in chlorinated pools do risk lung changes that mirror the lungs of people with mild asthma. You can avoid these airway changes by training in outdoor pools and mixing up your training with other activities instead of relying only on swimming.
For those times you do choose the pool over the gym (let’s be honest, the machines can be a bit intimidating), the good news is that little gear other than a swimsuit and goggles is needed for a quality swim workout.
Should you wish, you can get more gear, like fins and a kickboard. They aren’t strictly necessary, but they serve as training aids — especially as you learn proper form and technique.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Online
2320) Meteorology
Gist
Meteorology is the science of weather. It is essentially an inter-disciplinary science because the atmosphere, land and ocean constitute an integrated system. The three basic aspects of meteorology are observation, understanding and prediction of weather. There are many kinds of routine meteorological observations.
Meteorology is the study of the atmosphere, atmospheric phenomena, and atmospheric effects on our weather. The atmosphere is the gaseous layer of the physical environment that surrounds a planet.
Summary
Meteorology is a branch of the atmospheric sciences (which include atmospheric chemistry and physics) with a major focus on weather forecasting. The study of meteorology dates back millennia, though significant progress in meteorology did not begin until the 18th century. The 19th century saw modest progress in the field after weather observation networks were formed across broad regions. Earlier attempts at weather prediction depended on historical data. Significant breakthroughs in forecasting were not achieved until after the laws of physics had been elucidated and, more particularly in the latter half of the 20th century, the computer had been developed, allowing the automated solution of a great many modelling equations. An important branch of weather forecasting is marine weather forecasting as it relates to maritime and coastal safety, in which weather effects also include atmospheric interactions with large bodies of water.
Meteorological phenomena are observable weather events that are explained by the science of meteorology. Meteorological phenomena are described and quantified by the variables of Earth's atmosphere: temperature, air pressure, water vapour, mass flow, and the variations and interactions of these variables, and how they change over time. Different spatial scales are used to describe and predict weather on local, regional, and global levels.
Meteorology, climatology, atmospheric physics, and atmospheric chemistry are sub-disciplines of the atmospheric sciences. Meteorology and hydrology compose the interdisciplinary field of hydrometeorology. The interactions between Earth's atmosphere and its oceans are part of a coupled ocean-atmosphere system. Meteorology has application in many diverse fields such as the military, energy production, transport, agriculture, and construction.
The word meteorology is from the Ancient Greek metéōros (meteor) and -logia (-(o)logy), meaning "the study of things high in the air".
Details
Meteorology is the study of the atmosphere, atmospheric phenomena, and atmospheric effects on our weather. The atmosphere is the gaseous layer of the physical environment that surrounds a planet. Earth’s atmosphere is roughly 100 to 125 kilometers (62 to 78 miles) thick. Gravity keeps the atmosphere from expanding much farther.
Meteorology is a subdiscipline of the atmospheric sciences, a term that covers all studies of the atmosphere. A subdiscipline is a specialized field of study within a broader subject or discipline. Climatology and aeronomy are also subdisciplines of the atmospheric sciences. Climatology focuses on how atmospheric changes define and alter the world’s climates. Aeronomy is the study of the upper parts of the atmosphere, where unique chemical and physical processes occur. Meteorology focuses on the lower parts of the atmosphere, primarily the troposphere, where most weather takes place.
Meteorologists use scientific principles to observe, explain, and forecast our weather. They often focus on atmospheric research or operational weather forecasting. Research meteorologists work in several subdisciplines of meteorology, including climate modeling, remote sensing, air quality, atmospheric physics, and climate change. They also research the relationship between the atmosphere and Earth’s climates, oceans, and biological life.
Forecasters use that research, along with atmospheric data, to scientifically assess the current state of the atmosphere and make predictions of its future state. Atmospheric conditions both at Earth's surface and above are measured from a variety of sources: weather stations, ships, buoys, aircraft, radar, weather balloons, and satellites. This data is transmitted to centers throughout the world that produce computer analyses of global weather. The analyses are passed on to national and regional weather centers, which feed this data into computers that model the future state of the atmosphere. This transfer of information demonstrates how weather and the study of it take place in multiple, interconnected ways.
Scales of Meteorology
Weather occurs at different scales of space and time. The four meteorological scales are: microscale, mesoscale, synoptic scale, and global scale. Meteorologists often focus on a specific scale in their work.
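The four scales can be sketched as a simple classifier. The cutoffs below are illustrative round numbers drawn from the scale ranges described in the sections that follow — real scale boundaries overlap and are a matter of convention — and the function name is our own:

```python
def meteorological_scale(extent_km: float) -> str:
    """Rough classification of a weather phenomenon by horizontal extent.

    The thresholds are approximate, illustrative values; actual scale
    boundaries overlap and vary by convention.
    """
    if extent_km < 2:          # a few centimeters up to a few kilometers
        return "microscale"
    elif extent_km < 1000:     # a few kilometers up to roughly 1,000 km
        return "mesoscale"
    elif extent_km < 6000:     # several hundred to a few thousand km
        return "synoptic scale"
    else:                      # planet-spanning circulations
        return "global scale"

print(meteorological_scale(0.5))    # a dust devil
print(meteorological_scale(300))    # a thunderstorm complex
print(meteorological_scale(2000))   # a large low-pressure system
```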
Microscale Meteorology
Microscale meteorology focuses on phenomena that range in size from a few centimeters to a few kilometers, and that have short life spans (less than a day). These phenomena affect very small geographic areas, and the temperatures and terrains of those areas.
Microscale meteorologists often study the processes that occur between soil, vegetation, and surface water near ground level. They measure the transfer of heat, gas, and liquid between these surfaces. Microscale meteorology often involves the study of chemistry.
Tracking air pollutants is an example of microscale meteorology. MIRAGE-Mexico is a collaboration between meteorologists in the United States and Mexico. The program studies the chemical and physical transformations of gases and aerosols in the pollution surrounding Mexico City. MIRAGE-Mexico uses observations from ground stations, aircraft, and satellites to track pollutants.
Mesoscale Meteorology
Mesoscale phenomena range in size from a few kilometers to roughly 1,000 kilometers (620 miles). Two important phenomena are mesoscale convective complexes (MCC) and mesoscale convective systems (MCS). Both are caused by convection, an important meteorological principle.
Convection is a process of circulation. Warmer, less-dense fluid rises, and colder, denser fluid sinks. The fluid that most meteorologists study is air. (Any substance that flows is considered a fluid.) Convection results in a transfer of energy, heat, and moisture—the basic building blocks of weather.
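The density difference that drives convection follows directly from the ideal gas law, ρ = P / (R·T): at the same pressure, warmer air is less dense, so it rises. A minimal sketch, using the standard specific gas constant for dry air (about 287 J/(kg·K)):

```python
R_DRY_AIR = 287.05  # specific gas constant for dry air, J/(kg*K)

def air_density(pressure_pa: float, temp_c: float) -> float:
    """Density of dry air from the ideal gas law: rho = P / (R * T)."""
    return pressure_pa / (R_DRY_AIR * (temp_c + 273.15))

p = 101_325.0  # standard sea-level pressure, Pa
cool = air_density(p, 10.0)
warm = air_density(p, 30.0)
print(f"10 degC: {cool:.3f} kg/m^3")  # ~1.247 kg/m^3
print(f"30 degC: {warm:.3f} kg/m^3")  # ~1.164 kg/m^3 -- the warm air rises
```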
In both an MCC and an MCS, a large area of air and moisture is warmed during the middle of the day—when the sun angle is at its highest. As this warm air mass rises into the colder atmosphere, its water vapor condenses into clouds, which can then produce precipitation.
An MCC is a single system of clouds that can reach the size of the state of Ohio and produce heavy rainfall and flooding. An MCS is a smaller cluster of thunderstorms that lasts for several hours. Both react to unique transfers of energy, heat, and moisture caused by convection.
The Deep Convective Clouds and Chemistry (DC3) field campaign is a program that will study storms and thunderclouds in Colorado, Alabama, and Oklahoma. This project will consider how convection influences the formation and movement of storms, including the development of lightning. It will also study their impact on aircraft and flight patterns. The DC3 program will use data gathered from research aircraft able to fly over the tops of storms.
Synoptic Scale Meteorology
Synoptic-scale phenomena cover an area of several hundred or even thousands of kilometers. High- and low-pressure systems seen on local weather forecasts are synoptic in scale. Pressure, much like convection, is an important meteorological principle that is at the root of large-scale weather systems as diverse as hurricanes and bitter cold outbreaks.
Low-pressure systems occur where the atmospheric pressure at the surface of Earth is less than that of the surrounding environment. Wind and moisture from areas with higher pressure seek out low-pressure systems. This movement, in conjunction with the Coriolis force and friction, causes the system to rotate counter-clockwise in the Northern Hemisphere and clockwise in the Southern Hemisphere, creating a cyclone. Cyclones have a tendency for upward vertical motion. This allows moist air from the surrounding area to rise, expand, and cool; as it cools, its water vapor condenses, forming clouds. This movement of moisture and air causes the majority of our weather events.
Hurricanes are a result of low-pressure systems (cyclones) developing over tropical waters in the Western Hemisphere. The system drags up massive amounts of warm moisture from the sea, causing convection to take place, which in turn causes wind speeds to increase and pressure to fall. When these winds reach speeds over 119 kilometers per hour (74 miles per hour), the cyclone is classified as a hurricane.
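The 119 km/h threshold lends itself to a small classifier. The hurricane cutoff comes from the text above; the 63 km/h tropical-storm cutoff is the usual convention and is added here as an assumption:

```python
def classify_cyclone(wind_kmh: float) -> str:
    """Classify a tropical cyclone by maximum sustained wind speed.

    The 119 km/h hurricane threshold is from the text; the 63 km/h
    tropical-storm threshold is the commonly used convention.
    """
    if wind_kmh >= 119:
        return "hurricane"
    elif wind_kmh >= 63:
        return "tropical storm"
    else:
        return "tropical depression"

print(classify_cyclone(150))  # hurricane
print(classify_cyclone(80))   # tropical storm
print(classify_cyclone(40))   # tropical depression
```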
Hurricanes can be one of the most devastating natural disasters in the Western Hemisphere. The National Hurricane Center, in Miami, Florida, regularly issues forecasts and reports on all tropical weather systems. During hurricane season, hurricane specialists issue forecasts and warnings for every tropical storm in the western tropical Atlantic and eastern tropical Pacific. Businesses and government officials from the United States, the Caribbean, Central America, and South America rely on forecasts from the National Hurricane Center.
High-pressure systems occur where the atmospheric pressure at the surface of Earth is greater than its surrounding environment. This pressure has a tendency for downward vertical motion, allowing for dry air and clear skies.
Extremely cold temperatures are a result of high-pressure systems that develop over the Arctic and move over the Northern Hemisphere. Arctic air is very cold because it develops over ice and snow-covered ground. This cold air is so dense that it pushes against Earth’s surface with extreme pressure, preventing any moisture or heat from staying within the system.
Meteorologists have identified many semi-permanent areas of high pressure. The Azores high, for instance, is a relatively stable region of high pressure around the Azores, an archipelago in the mid-Atlantic Ocean. The Azores high is responsible for the dry summers of the Mediterranean basin, as well as summer heat waves in Western Europe.
Global Scale Meteorology
Global scale phenomena are weather patterns related to the transport of heat, wind, and moisture from the tropics to the poles. An important pattern is global atmospheric circulation, the large-scale movement of air that helps distribute thermal energy (heat) across the surface of the Earth.
Global atmospheric circulation is the fairly constant movement of winds across the globe. Winds develop as air masses move from areas of high pressure to areas of low pressure. Global atmospheric circulation is largely driven by Hadley cells. Hadley cells are tropical and equatorial convection patterns. Convection drives warm air high in the atmosphere, while cool, dense air pushes lower in a constant loop. Each loop is a Hadley cell.
Hadley cells determine the flow of trade winds, which meteorologists forecast. Businesses, especially those exporting products across oceans, pay close attention to the strength of trade winds because they help ships travel faster. Westerlies are winds that blow from the west in the midlatitudes. Closer to the Equator, trade winds blow from the northeast (north of the Equator) and the southeast (south of the Equator).
Meteorologists study long-term climate patterns that disrupt global atmospheric circulation. One such pattern, El Niño, involves ocean currents and trade winds across the Pacific Ocean. El Niño occurs roughly every five years, disrupting global atmospheric circulation and affecting local weather and economies from Australia to Peru.
El Niño is linked with changes in air pressure in the Pacific Ocean known as the Southern Oscillation. Air pressure drops over the eastern Pacific, near the coast of the Americas, while air pressure rises over the western Pacific, near the coasts of Australia and Indonesia. Trade winds weaken. Eastern Pacific nations experience extreme rainfall. Warm ocean currents reduce fish stocks, which depend on nutrient-rich upwelling of cold water to thrive. Western Pacific nations experience drought, devastating agricultural production.
Understanding the meteorological processes of El Niño helps farmers, fishers, and coastal residents prepare for the climate pattern.
History of Meteorology
The development of meteorology is deeply connected to developments in science, math, and technology. The Greek philosopher Aristotle wrote the first major study of the atmosphere around 340 B.C.E. Many of Aristotle’s ideas were incorrect, however, because he did not believe it was necessary to make scientific observations.
A growing belief in the scientific method profoundly changed the study of meteorology in the 17th and 18th centuries. Evangelista Torricelli, an Italian physicist, observed that changes in air pressure were connected to changes in weather. In 1643, Torricelli invented the barometer to accurately measure the pressure of air. The barometer is still a key instrument in understanding and forecasting weather systems. In 1714, Daniel Fahrenheit, a German physicist, developed the mercury thermometer. These instruments made it possible to accurately measure two important atmospheric variables.
There was no way to quickly transfer weather data until the invention of the telegraph by American inventor Samuel Morse in the mid-1800s. Using this new technology, meteorological offices were able to share information and produce the first modern weather maps. These maps combined and displayed more complex sets of information such as isobars (lines of equal air pressure) and isotherms (lines of equal temperature). With these large-scale weather maps, meteorologists could examine a broader geographic picture of weather and make more accurate forecasts.
In the 1920s, a group of Norwegian meteorologists developed the concepts of air masses and fronts that are the building blocks of modern weather forecasting. Using basic laws of physics, these meteorologists discovered that huge cold and warm air masses move and meet in patterns that are the root of many weather systems.
Military operations during World War I and World War II brought great advances to meteorology. The success of these operations was highly dependent on weather over vast regions of the globe. The military invested heavily in training, research, and new technologies to improve their understanding of weather. The most important of these new technologies was radar, which was developed to detect the presence, direction, and speed of aircraft and ships. Since the end of World War II, radar has been used and improved to detect the presence, direction, and speed of precipitation and wind patterns.
The technological developments of the 1950s and 1960s made it easier and faster for meteorologists to observe and predict weather systems on a massive scale. During the 1950s, computers created the first models of atmospheric conditions by running hundreds of data points through complex equations. These models were able to predict large-scale weather, such as the series of high- and low-pressure systems that circle our planet.
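The idea of stepping data points forward through physical equations can be illustrated with a toy model. The sketch below diffuses a warm temperature anomaly along a line of grid points using a forward-Euler update; real weather models solve vastly more complex coupled equations, but the basic loop — state in, equations applied, future state out — is the same:

```python
def step(temps, diffusivity=0.2):
    """Advance the temperature field one time step (fixed boundary values).

    Discrete heat equation: each interior point relaxes toward the
    average of its neighbors, dT ~ k * (left - 2*center + right).
    """
    new = temps[:]
    for i in range(1, len(temps) - 1):
        new[i] = temps[i] + diffusivity * (temps[i - 1] - 2 * temps[i] + temps[i + 1])
    return new

field = [15.0, 15.0, 30.0, 15.0, 15.0]  # a warm anomaly in the middle, degC
for _ in range(10):
    field = step(field)
print([round(t, 1) for t in field])  # the anomaly spreads out and smooths
```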
TIROS I, the first meteorological satellite, was launched in 1960 and provided the first accurate weather observations from space. The success of TIROS I prompted the creation of more sophisticated satellites. Their ability to collect and transmit data with extreme accuracy and speed has made them indispensable to meteorologists. Advanced satellites and the computers that process their data are the primary tools used in meteorology today.
Meteorology Today
Today’s meteorologists have a variety of tools that help them examine, describe, model, and predict weather systems. These technologies are being applied at different meteorological scales, improving forecast accuracy and efficiency.
Radar is an important remote sensing technology used in forecasting. A radar dish is an active sensor in that it sends out radio waves that bounce off particles in the atmosphere and return to the dish. A computer processes these pulses and determines the horizontal dimension of clouds and precipitation, and the speed and direction in which these clouds are moving.
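The range calculation behind radar is simple: an echo that returns after time t has travelled out to the target and back, so the target sits at distance c·t/2. A minimal sketch (the function name is our own):

```python
C = 299_792_458  # speed of light, m/s

def echo_range_km(delay_s: float) -> float:
    """Distance to a radar target from the round-trip delay of its echo.

    The pulse travels out and back, so the one-way range is c * t / 2.
    """
    return C * delay_s / 2 / 1000

print(round(echo_range_km(0.001), 1))  # a 1 ms round trip -> ~149.9 km
```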
A new technology, known as dual-polarization radar, transmits both horizontal and vertical radio wave pulses. With this additional pulse, dual-polarization radar is better able to estimate precipitation. It is also better able to differentiate types of precipitation—rain, snow, sleet, or hail. Dual-polarization radar will greatly improve flash-flood and winter-weather forecasts.
Tornado research is another important component of meteorology. Starting in 2009, the National Oceanic and Atmospheric Administration (NOAA) and the National Science Foundation conducted the largest tornado research project in history, known as VORTEX2. The VORTEX2 team, consisting of about 200 people and more than 80 weather instruments, traveled more than 16,000 kilometers (10,000 miles) across the Great Plains of the United States to collect data on how, when, and why tornadoes form. The team made history by collecting extremely detailed data before, during, and after a specific tornado. This tornado is the most intensely examined in history and will provide key insights into tornado dynamics.
Satellites are extremely important to our understanding of global scale weather phenomena. The National Aeronautics and Space Administration (NASA) and NOAA operate three Geostationary Operational Environmental Satellites (GOES) that provide weather observations for more than 50 percent of Earth’s surface.
GOES-15, launched in 2010, includes a solar X-ray imager that monitors the sun’s X-rays for the early detection of solar phenomena, such as solar flares. Solar flares can affect military and commercial satellite communications around the globe. A highly accurate imager produces visible and infrared images of Earth’s surface, oceans, cloud cover, and severe storm developments. Infrared imagery detects the movement and transfer of heat, improving our understanding of the global energy balance and processes such as global warming, convection, and severe weather.
Additional Information
Look up at the sky. Is it raining or sunny? Are there big, puffy clouds that look like marshmallows, or dark, angry clouds threatening sleet? No matter how the sky appears, you are looking at Earth’s lower atmosphere, the realm that is studied by the science of meteorology. Meteorology concerns itself with the science of atmospheric properties and phenomena—science that includes the atmosphere’s physics and chemistry.
Meteorologists are often thought of as people who forecast the weather. And some meteorologists certainly do that! Predicting the weather is a complicated process, which requires both sophisticated new tools and some old-fashioned techniques. Meteorologists are observers and researchers. They note the physical conditions of the atmosphere above them, and they study maps, satellite data, and radar information. They also compare various kinds of weather data from local, regional, and global sources.
Beyond weather forecasting, meteorology is concerned with long-term trends in climate and weather, and their potential impact on human populations. An important area of meteorological research these days is climate change and the effects it may cause.
Many people wonder why the study of the atmosphere is called meteorology. The name comes from the ancient Greeks. In about 340 B.C.E., the Greek philosopher Aristotle wrote a book called Meteorologica, which contained all that was known at the time about weather and climate. Aristotle got the title of his book from the Greek word “meteoron,” which meant “a thing high up” and referred to anything observed in the atmosphere. That term stuck through the centuries, so experts on the atmosphere became known as meteorologists.
2321) Aerodrome / Hangar
Gist
An aerodrome is the part of an airport that is used by aircraft. Its design, its safety-related equipment and operations, including aerodrome operators, apron management services and ground handling operations help ensure the safety of aircraft, as well as other vehicles and persons present.
Aerodromes play a pivotal role in aviation, facilitating not just the movement of aircraft but also supporting the vast infrastructure required for passenger services, cargo handling, aircraft maintenance, and emergency services.
An aerodrome is a location where aircraft operate. The international aviation community defines aerodromes as an area (including any buildings and equipment) intended to be used for the arrival, departure, and movement of aircraft. This includes the runways, aprons, hangars, and aircraft parking areas.
Summary
A hangar is a building or structure designed to hold aircraft or spacecraft. Hangars are built of metal, wood, or concrete. The word hangar comes from Middle French hanghart ("enclosure near a house"), of Germanic origin, from Frankish *haimgard ("home-enclosure", "fence around a group of houses"), from *haim ("home, village, hamlet") and gard ("yard"). The term, gard, comes from the Old Norse garðr ("enclosure, garden").
Hangars are used for protection from the weather, direct sunlight and for maintenance, repair, manufacture, assembly and storage of aircraft.
The Wright brothers stored and repaired their aircraft in a wooden hangar constructed in 1902 at Kill Devil Hills in North Carolina for their glider. After completing design and construction of the Wright Flyer in Ohio, the brothers returned to Kill Devil Hills only to find their hangar damaged. They repaired the structure and constructed a new workshop while they waited for the Flyer to be shipped.
Carl Richard Nyberg used a hangar to store his 1908 Flugan (fly) in the early 20th century, and in 1909 Louis Blériot crash-landed on a northern French farm in Les Baraques (between Sangatte and Calais) and rolled his monoplane into the farmer's cattle pen. Blériot was in a race to be the first man to cross the English Channel in a heavier-than-air aircraft, and he set up his headquarters in the unused shed. In Britain, the earliest aircraft hangars were known as aeroplane sheds, and the oldest survivors of these are at Larkhill, Wiltshire. These were built in 1910 for the Bristol School of Flying and are now Grade II* Listed buildings. British aviation pioneer Alliott Verdon Roe built one of the first aeroplane sheds in 1907 at Brooklands, Surrey, and full-size replicas of this and the 1908 Roe biplane are on display at Brooklands Museum.
As aviation became established in Britain before World War I, standard designs of hangar gradually appeared, including military types such as the Bessonneau hangar and the side-opening aeroplane shed of 1913, both of which were soon adopted by the Royal Flying Corps. Examples of the latter survive at Farnborough, Filton and Montrose airfields. During World War I, other standard designs included the RFC General Service Flight Shed and the Admiralty F-Type of 1916, the General Service Shed (featuring the characteristic Belfast-truss roof and built in various sizes) and the Handley Page aeroplane shed (1918).
Construction:
Steel construction
Sheds built for rigid airships survive at Moffett Field, California; Akron, Ohio; Weeksville, North Carolina; Lakehurst, New Jersey; Santa Cruz Air Force Base in Brazil; and Cardington, Bedfordshire. Steel rigid airship hangars are some of the largest in the world.
Hangar 1, Lakehurst, is located at Naval Air Engineering Station Lakehurst (formerly Naval Air Station Lakehurst), New Jersey. The structure was completed in 1921 and is typical of airship hangar designs of World War I. The site is best known for the Hindenburg disaster, when on May 6, 1937, the German airship Hindenburg crashed and burned while landing. Hangar No.1 at Lakehurst was used to build and store the American USS Shenandoah. The hangar also provided service and storage for the airships USS Los Angeles, Akron, Macon, as well as the Graf Zeppelin and the Hindenburg.
The largest hangars ever built include the Goodyear Airdock, measuring 1,175 ft × 325 ft × 211 ft (358 m × 99 m × 64 m), and Hangar One (Mountain View, California), measuring 1,133 ft × 308 ft × 198 ft (345 m × 94 m × 60 m). The Goodyear Airdock is in Akron, Ohio, and the structure was completed on November 25, 1929. The Airdock was used for the construction of the USS Akron and her sister ship, the USS Macon.
Hangar One at Moffett Federal Field (formerly Naval Air Station Moffett Field), is located in Mountain View, California. The structure was completed in 1931. It housed the USS Macon.
Wood construction
Six helium-filled blimps stored in one of the two hangars at the former US Marine Corps Air Station Tustin
The U.S. Navy established more airship operations during WWII. As part of this, ten "lighter-than-air" (LTA) bases across the United States were built as part of the coastal defence plan; a total of 17 hangars were built. Hangars at these bases are some of the world's largest freestanding timber structures. Bases with wooden hangars included: the Naval Air Stations at South Weymouth, Massachusetts (1 hangar); Lakehurst, New Jersey (2); Weeksville, North Carolina (1); Glynco, Georgia (2); Richmond, Florida (3); Houma, Louisiana (1); Hitchcock, Texas (1); Tustin (Santa Ana), California (2); Moffett Field, California (2) and Tillamook, Oregon (2). Of the seventeen, only seven remain, Moffett Federal Field, (former NAS Moffett Field), California (2); former Tustin, California (former NAS Santa Ana and MCAS Tustin), California (2); Tillamook Air Museum/Tillamook Airport (former NAS Tillamook), Oregon (1) and Joint Base McGuire-Dix-Lakehurst/Naval Support Activity Lakehurst (former NAS Lakehurst), New Jersey (2).
Fabric construction
A hangar for Cargolifter was built at Brand-Briesen Airfield, 1,180 ft (360 m) long, 705 ft (215 m) wide and 348 ft (106 m) high. It is a free-standing steel-dome "barrel-bowl" construction large enough to fit the Eiffel Tower on its side. The company went into insolvency and, in June 2003, the facilities were sold off; the airship hangar was converted to a 'tropical paradise'-themed indoor holiday resort called Tropical Islands, which opened in 2004.
An alternative to the fixed hangar is a portable shelter that can be used for aircraft storage and maintenance. Portable fabric structures can be built up to 215 ft (66 m) wide, 100 ft (30 m) high and any length. They are able to accommodate several aircraft and can be increased in size and even relocated when necessary.
Details
An aerodrome is a location from which aircraft flight operations take place, regardless of whether they involve air cargo, passengers, or neither, and regardless of whether it is for public or private use. Aerodromes include small general aviation airfields, large commercial airports, and military air bases.
The term airport may imply a certain stature (having satisfied certain certification criteria or regulatory requirements) that not all aerodromes may have achieved. That means that all airports are aerodromes, but not all aerodromes are airports. Usage of the term "aerodrome" (or "airfield") remains more common in Commonwealth English, and is conversely almost unknown in American English, where the term "airport" is applied almost exclusively.
A water aerodrome is an area of open water used regularly by seaplanes, floatplanes or amphibious aircraft for landing and taking off.
In formal terminology, as defined by the International Civil Aviation Organization (ICAO), an aerodrome is "a defined area on land or water (including any buildings, installations, and equipment) intended to be used either wholly or in part for the arrival, departure, and surface movement of aircraft."
Etymology
In British military usage, the Royal Flying Corps in the First World War, and the Royal Air Force in the First and Second World Wars, used the term aerodrome—it had the advantage that their French allies, on whose soil they were often based, and with whom they co-operated, used the cognate term aérodrome.
In Canada and Australia, aerodrome is a legal term of art for any area of land or water used for aircraft operation, regardless of facilities.
International Civil Aviation Organization (ICAO) documents use the term aerodrome, for example, in the Annex to the ICAO Convention about aerodromes, their physical characteristics, and their operation. However, the terms airfield or airport mostly superseded use of aerodrome after the Second World War, in colloquial language.
History
In the early days of aviation, when there were no paved runways and all landing fields were grass, a typical airfield might permit takeoffs and landings in only a couple of directions, much like today's airports, whereas an aerodrome was distinguished, by virtue of its much greater size, by its ability to handle landings and takeoffs in any direction. The ability to always take off and land directly into the wind, regardless of the wind's direction, was an important advantage in the earliest days of aviation when an airplane's performance in a crosswind takeoff or landing might be poor or even dangerous. The development of differential braking in aircraft, improved aircraft performance, utilization of paved runways, and the fact that a circular aerodrome required much more space than did the "L" or triangle shaped airfield, eventually made the early aerodromes obsolete.
The unimproved airfield remains common in military aviation. The DHC-4 Caribou served in the United States military in Vietnam (designated as the CV-2), landing on rough, unimproved airfields where the C-130 Hercules workhorse could not operate. Earlier, the Ju 52 and Fieseler Storch could do the same, one example of the latter taking off from an improvised strip near the Führerbunker whilst the area was completely surrounded by Russian troops.
Types:
Airport
In colloquial use in certain environments, the terms airport and aerodrome are often interchanged. However, in general, the term airport may imply or confer a certain stature upon the aviation facility that other aerodromes may not have achieved. In some jurisdictions, airport is a legal term of art reserved exclusively for those aerodromes certified or licensed as airports by the relevant civil aviation authority after meeting specified certification criteria or regulatory requirements.
Air base
An air base is an aerodrome with significant facilities to support aircraft and crew. The term is usually reserved for military bases, but also applies to civil seaplane bases.
Airstrip
An airstrip is a small aerodrome that consists only of a runway with perhaps fueling equipment. They are generally in remote locations, e.g. Airstrips in Tanzania. Many airstrips (now mostly abandoned) were built on the hundreds of islands in the Pacific Ocean during the Second World War. A few airstrips grew to become full-fledged airbases as the strategic or economic importance of a region increased over time.
An advanced landing ground was a temporary airstrip used by the Allies in the run-up to and during the invasion of Normandy, and these were built both in Britain, and on the continent.
Water aerodrome
A water aerodrome or seaplane base is an area of open water used regularly by seaplanes, floatplanes and amphibious aircraft for landing and taking off. It may have a terminal building on land and/or a place where the plane can come to shore and dock like a boat to load and unload (for example, Yellowknife Water Aerodrome). Some are co-located with a land based airport and are certified airports in their own right. These include Vancouver International Water Airport and Vancouver International Airport. Others, such as Vancouver Harbour Flight Centre have their own control tower, Vancouver Harbour Control Tower.
By country:
Canada
The Canadian Aeronautical Information Manual says "...for the most part, all of Canada can be an aerodrome"; however, there are also "registered aerodromes" and "certified airports". To become a registered aerodrome, the operator must maintain certain standards and keep the Minister of Transport informed of any changes. To be certified as an airport, the aerodrome, which usually supports commercial operations, must meet safety standards. Nav Canada, the private company responsible for air traffic control services in Canada, publishes the Canada Flight Supplement, a directory of all registered Canadian land aerodromes, as well as the Canada Water Aerodrome Supplement (CWAS).
Republic of Ireland
Casement Aerodrome is the main military airport used by the Irish Air Corps. The term "aerodrome" is used for airports and airfields of lesser importance in Ireland, such as those at Abbeyshrule; Bantry; Birr; Inisheer; Inishmaan; Inishmore; Newcastle, County Wicklow; and Trim.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Online
2322) Tourist Guide
Gist
A tourist guide is someone who points out the way and leads others on a trip or tour. Generally, a tourist guide will work at a specific location, city or province. In some cases, guides qualify to guide throughout an entire country.
Tour guides are important in the tourism industry because they provide information and guidance, enhance the overall tourist experience, and contribute to the success of the business by providing excellent service to tourists.
Tour guides, or tourist guides, are members of the hospitality and travel industry who show visitors around places of interest. Tour guides may lead groups or individuals through historical sites, museums, geographic destinations and on outdoor excursions.
Summary
At the National Department of Tourism we measure success not only in visitor numbers, but in the experiences we create, the new opportunities for meaningful employment and growth, and the understanding that is fostered between people from different backgrounds and different corners of the world; our tourist guides play an integral role in this.
Tourist Guides are often one of the first people to welcome tourists and the last to bid them farewell. Their role is to enhance our visitors' experience and be ambassadors for South Africa as a tourist destination.
Definition of Tourist Guide
Tourist guides act as ambassadors of the country: they are the first to meet and welcome tourists, and they are often the last to bid them farewell when they leave.
Various international organizations such as the World Federation of Tourist Guide Associations (WFTGA) define a tourist guide as a person who guides visitors in the language of their choice and interprets the cultural and natural heritage of an area, and who may possess an area-specific qualification. Such qualifications are usually issued and/or recognized by the appropriate authority.
According to the Tourism Act No. 3 of 2014, a tourist guide is any person registered as such under section 50 who, for reward, accompanies any person who travels within or visits any place within the Republic and who furnishes such person with information or comments.
Importance of Tourist Guides
Tourist guiding is a critical component of the tourism value chain. Tourist guides play an essential role in ensuring repeat tourist visitation to South Africa by creating a positive image of the country.
In South Africa, tourist guiding is a regulated profession governed by national legislation and policies. Any person who would like to become a tourist guide must undergo training as part of a formal qualification registered by the South African Qualifications Authority (SAQA). Upon being deemed competent, such a person will receive a certificate issued by the Culture, Arts, Tourism, Hospitality and Sport Sector Education and Training Authority (CATHSSETA). Such a person must then apply to the relevant Provincial Registrar to be registered in order to operate legally. This process unfolds as prescribed in the Tourism Act, 2014 and the Regulations in respect of Tourist Guides, 1994 and 2001 respectively.
Characteristics of Tourist Guides
The role and function of a guide is to organise, inform and entertain. Guides are mainly freelance and self-employed. Work is often seasonal and may involve working during unsociable hours. Work is usually obtained through direct contact with tour operators and other agencies and therefore, guides must be self-sufficient and be able to market themselves.
The manner in which tourist guides interact with and treat tourists is very important because it gives a lasting impression of the country in general. The Code of Conduct and Ethics that tourist guides sign prescribes the way in which qualified, legally registered tourist guides must conduct themselves whilst on duty. Registered tourist guides who fail to abide by the Code of Conduct and Ethics could be subjected to formal disciplinary hearings and be charged with misconduct.
Details
A tour guide (U.S.) or a tourist guide (European) is a person who provides assistance and information on cultural, historical and contemporary heritage to people on organized sightseeing tours and to individual clients at educational establishments, at religious and historical sites such as museums, and at various tourist attractions and resorts. Tour guides also take clients on outdoor guided trips, including hiking, whitewater rafting, mountaineering, alpine climbing, rock climbing, backcountry skiing and snowboarding, fishing, and biking.
History
In 18th-century Japan, a traveler could pay for a tour guide or consult guide books such as Kaibara Ekken's Keijō Shōran (The Excellent Views of Kyoto).
Description:
In Europe
The CEN (European Committee for Standardization) definition for "tourist guide" – part of the work by CEN on definitions for terminology within the tourism industry – is a "person who guides visitors in the language of their choice and interprets the cultural and natural heritage of an area, which person normally possesses an area-specific qualification usually issued and/or recognized by the appropriate authority". CEN also defines a "tour manager" as a "person who manages and supervises the itinerary on behalf of the tour operator, ensuring the programme is carried out as described in the tour operator's literature and sold to the traveller/consumer and who gives local practical information".
In Europe, tourist guides are represented by FEG, the European Federation of Tourist Guide Associations. In Europe, the tourist guiding qualification is specific to each country; in some cases the qualification is national, in some cases it is broken up into regions. In all cases, it is embedded in the educational and training ethic of that country. EN15565 is a European Standard for the Training and Qualification of Tourist Guides.
In Australia
In Australia, tour guides are qualified to a minimum of Certificate III Guiding. They belong to a couple of organisations, notably the Professional Tour Guide Association of Australia [PTGAA] and Guides of Australia [GOA].
According to the Tour Guides Australia Code of Conduct, guides must commit to:
* Providing a professional service to visitors, ensuring they are treated with respect and care and with a commitment to best-practice guiding.
* Providing objective and fair interpretations of the places visited.
* Educating visitors on the need to be respectful of our precious natural, cultural and heritage environments, minimising our footprint and impacts at all times.
* Acting in such a way as to bring credit to the country and promote it as a tourist destination.
* Regularly updating their guiding skills and knowledge through training, professional development, and networking activities.
* Continually maintaining a valid Certificate II in First Aid & CPR.
* Holding their own indemnity insurance (if self-employed).
In Japan
In Japan, tour guides are required to pass a certification exam by the Commissioner of the Japan Tourism Agency and register with the relevant prefectures. Non-licensed guides caught performing guide-interpreter activities can face a fine of up to 500,000 Yen.
In India
In India, it is mandatory to have a license approved by the Ministry of Tourism (India) to work officially as a tourist guide. The government provides the license to regional-level tour guides and also runs a Regional Level Guide Training Program (RLGTP). These programs and training sessions are conducted under the guidance of the Indian Institute of Tourism and Travel Management (IITTM) or other government-recognized institutes.
In South Africa
In South Africa, tourist guides are required to register in terms of the Tourism Act No. 3 of 2014. Training must be done through a trainer accredited by the Culture, Arts, Tourism, Hospitality and Sport Sector Education and Training Authority.
2323) Hoarseness
Gist
Hoarseness refers to difficulty making sounds when trying to speak. Vocal sounds may be weak, breathy, scratchy, or husky, and the pitch or quality of the voice may change.
Hoarseness is a condition marked by changes in the pitch or quality of the voice, which may sound weak, scratchy or husky. Hoarseness can be caused by misuse or overuse of the voice, viruses, and growths on the vocal cords like cysts, papillomas, polyps and nodules, among other things.
You can typically relieve voice loss at home by resting your voice, staying hydrated, and avoiding irritants. You can also try inhaling steam and using a humidifier to soothe your throat and voice box. A doctor may recommend speech therapy if your lost voice results from causes like vocal nodules or dysphonia.
Summary
If you are hoarse, your voice will sound breathy, raspy, or strained, or will be softer in volume or lower in pitch. Your throat might feel scratchy. Hoarseness is often a symptom of problems in the vocal folds of the larynx.
How does our voice work?
The sound of our voice is produced by vibration of the vocal folds, which are two bands of smooth muscle tissue that are positioned opposite each other in the larynx. The larynx is located between the base of the tongue and the top of the trachea, which is the passageway to the lungs.
When we're not speaking, the vocal folds are open so that we can breathe. When it's time to speak, however, the brain orchestrates a series of events. The vocal folds snap together while air from the lungs blows past, making them vibrate. The vibrations produce sound waves that travel through the throat, nose, and mouth, which act as resonating cavities to modulate the sound. The quality of our voice—its pitch, volume, and tone—is determined by the size and shape of the vocal folds and the resonating cavities. This is why people's voices sound so different.
Individual variations in our voices are the result of how much tension we put on our vocal folds. For example, relaxing the vocal folds makes a voice deeper; tensing them makes a voice higher.
If my voice is hoarse, when should I see my doctor?
You should see your doctor if your voice has been hoarse for more than three weeks, especially if you haven't had a cold or the flu. You should also see a doctor if you are coughing up blood or if you have difficulty swallowing, feel a lump in your neck, experience pain when speaking or swallowing, have difficulty breathing, or lose your voice completely for more than a few days.
How will my doctor diagnose what is wrong?
Your doctor will ask you about your health history and how long you've been hoarse. Depending on your symptoms and general health, your doctor may send you to an otolaryngologist (a doctor who specializes in diseases of the ears, nose, and throat). An otolaryngologist will usually use an endoscope (a flexible, lighted tube designed for looking at the larynx) to get a better view of the vocal folds. In some cases, your doctor might recommend special tests to evaluate voice irregularities or vocal airflow.
What are some of the disorders that cause hoarseness and how are they treated?
Hoarseness can have several possible causes and treatments, as described below:
* Laryngitis. Laryngitis is one of the most common causes of hoarseness. It can be due to temporary swelling of the vocal folds from a cold, an upper respiratory infection, or allergies. Your doctor will treat laryngitis according to its cause. If it's due to a cold or upper respiratory infection, your doctor might recommend rest, fluids, and nonprescription pain relievers. Allergies might be treated similarly, with the addition of over-the-counter allergy medicines.
* Misusing or overusing your voice. Cheering at sporting events, speaking loudly in noisy situations, talking for too long without resting your voice, singing loudly, or speaking with a voice that's too high or too low can cause temporary hoarseness. Resting, reducing voice use, and drinking lots of water should help relieve hoarseness from misuse or overuse. Sometimes people whose jobs depend on their voices—such as teachers, singers, or public speakers—develop hoarseness that won't go away. If you use your voice for a living and you regularly experience hoarseness, your doctor might suggest seeing a speech-language pathologist for voice therapy. In voice therapy, you'll be given vocal exercises and tips for avoiding hoarseness by changing the ways in which you use your voice.
* Gastroesophageal reflux (GERD). GERD—commonly called heartburn—can cause hoarseness when stomach acid rises up the throat and irritates the tissues. Usually hoarseness caused by GERD is worse in the morning and improves throughout the day. In some people, the stomach acid rises all the way up to the throat and larynx and irritates the vocal folds. This is called laryngopharyngeal reflux (LPR). LPR can happen during the day or night. Some people will have no heartburn with LPR, but they may feel as if they constantly have to cough to clear their throat and they may become hoarse. GERD and LPR are treated with dietary modifications and medications that reduce stomach acid.
* Vocal nodules, polyps, and cysts. Vocal nodules, polyps, and cysts are benign (noncancerous) growths within or along the vocal folds. Vocal nodules are sometimes called "singer's nodes" because they are a frequent problem among professional singers. They form in pairs on opposite sides of the vocal folds as the result of too much pressure or friction, much like the way a callus forms on the foot from a shoe that's too tight. A vocal polyp typically occurs only on one side of the vocal fold. A vocal cyst is a hard mass of tissue encased in a membrane sac inside the vocal fold. The most common treatments for nodules, polyps, and cysts are voice rest, voice therapy, and surgery to remove the tissue.
* Vocal fold hemorrhage. Vocal fold hemorrhage occurs when a blood vessel on the surface of the vocal fold ruptures and the tissues fill with blood. If you lose your voice suddenly during strenuous vocal use (such as yelling), you may have a vocal fold hemorrhage. Sometimes a vocal fold hemorrhage will cause hoarseness to develop quickly over a short amount of time and only affect your singing but not your speaking voice. Vocal fold hemorrhage must be treated immediately with total voice rest and a trip to the doctor.
* Vocal fold paralysis. Vocal fold paralysis is a voice disorder that occurs when one or both of the vocal folds don't open or close properly. It can be caused by injury to the head, neck or chest; lung or thyroid cancer; tumors of the skull base, neck, or chest; or infection (for example, Lyme disease). People with certain neurologic conditions such as multiple sclerosis or Parkinson's disease or who have sustained a stroke may experience vocal fold paralysis. In many cases, however, the cause is unknown. Vocal fold paralysis is treated with voice therapy and, in some cases, surgery.
* Neurological diseases and disorders. Neurological conditions that affect areas of the brain that control muscles in the throat or larynx can also cause hoarseness. Hoarseness is sometimes a symptom of Parkinson's disease or a stroke. Spasmodic dysphonia is a rare neurological disease that causes hoarseness and can also affect breathing. Treatment in these cases will depend upon the type of disease or disorder.
* Other causes. Thyroid problems and injury to the larynx can cause hoarseness. Hoarseness may sometimes be a symptom of laryngeal cancer, which is why it is so important to see your doctor if you are hoarse for more than three weeks. Hoarseness is also the most common symptom of a disease called recurrent respiratory papillomatosis (RRP), or laryngeal papillomatosis, which causes noncancerous tumors to grow in the larynx and other air passages leading from the nose and mouth into the lungs.
Details
A hoarse voice, also known as dysphonia or hoarseness, is when the voice involuntarily sounds breathy, raspy, or strained, or is softer in volume or lower in pitch. A hoarse voice can be associated with a feeling of unease or scratchiness in the throat. Hoarseness is often a symptom of problems in the vocal folds of the larynx. It may be caused by laryngitis, which in turn may be caused by an upper respiratory infection, a cold, or allergies. Cheering at sporting events, speaking loudly in noisy situations, talking for too long without resting one's voice, singing loudly, or speaking with a voice that is too high or too low can also cause temporary hoarseness. A number of other causes for losing one's voice exist, and treatment is generally by resting the voice and treating the underlying cause. If the cause is misuse or overuse of the voice, drinking plenty of water may alleviate the problems.
It appears to occur more commonly in females and the elderly. Furthermore, certain occupational groups, such as teachers and singers, are at an increased risk.
Long-term hoarseness, or hoarseness that persists for over three weeks, especially when not associated with a cold or flu, should be assessed by a medical doctor. It is also recommended to see a doctor if hoarseness is associated with coughing up blood, difficulties swallowing, a lump in the neck, pain when speaking or swallowing, difficulty breathing, or complete loss of voice for more than a few days. For a voice to be classified as "dysphonic", abnormalities must be present in one or more vocal parameters: pitch, loudness, quality, or variability. Perceptually, dysphonia can be characterised by hoarse, breathy, harsh, or rough vocal qualities, but some kind of phonation remains.
Dysphonia can be categorized into two broad main types: organic and functional, and classification is based on the underlying pathology. While the causes of dysphonia can be divided into five basic categories, all of them result in an interruption of the ability of the vocal folds to vibrate normally during exhalation, which affects the voice. The assessment and diagnosis of dysphonia is done by a multidisciplinary team, and involves the use of a variety of subjective and objective measures, which look at both the quality of the voice as well as the physical state of the larynx. Multiple treatments have been developed to address organic and functional causes of dysphonia. Dysphonia can be targeted through direct therapy, indirect therapy, medical treatments, and surgery. Functional dysphonias may be treated through direct and indirect voice therapies, whereas surgeries are recommended for chronic, organic dysphonias.
Types
Voice disorders can be divided into two broad categories: organic and functional. The distinction between these broad classes stems from their cause, whereby organic dysphonia results from some sort of physiological change in one of the subsystems of speech (for voice, usually respiration, laryngeal anatomy, and/or other parts of the vocal tract are affected). Conversely, functional dysphonia refers to hoarseness resulting from vocal use (i.e. overuse/abuse). Furthermore, according to ASHA, organic dysphonia can be subdivided into structural and neurogenic; neurogenic dysphonia is defined as impaired functioning of the vocal structure due to a neurological problem (in the central nervous system or peripheral nervous system); in contrast, structural dysphonia is defined as impaired functioning of the vocal mechanism that is caused by some sort of physical change (e.g. a lesion on the vocal folds). Notably, an additional subcategory of functional dysphonia recognized by professionals is psychogenic dysphonia, which can be defined as a type of voice disorder that has no known cause and can be presumed to be a product of some sort of psychological stressors in one's environment. It is important to note that these types are not mutually exclusive and much overlap occurs. For example, Muscle Tension Dysphonia (MTD) has been found to be a result of many different causes including the following: MTD in the presence of an organic pathology (i.e. organic type), MTD stemming from vocal use (i.e. functional type), and MTD as a result of personality and/or psychological factors (i.e. psychogenic type).
Organic dysphonia
* Laryngitis (Acute: viral, bacterial) - (Chronic: smoking, GERD, LPR)
* Neoplasm (Premalignant: dysplasia) - (Malignant: Squamous cell carcinoma)
* Trauma (Iatrogenic: surgery, intubation) - (Accidental: blunt, penetrating, thermal)
* Endocrine (Hypothyroidism, hypogonadism)
* Haematological (Amyloidosis)
* Iatrogenic (inhaled corticosteroids)
Functional dysphonia
* Psychogenic
* Vocal misuse
* Idiopathic
Causes
The most common causes of hoarseness are laryngitis (acute: 42.1%; chronic: 9.7%) and functional dysphonia (30%). Hoarseness can also be caused by laryngeal tumours (benign: 10.7 - 31%; malignant: 2.2 - 3.0%). Causes that are overall less common include neurogenic conditions (2.8 - 8.0%), psychogenic conditions (2.0 - 2.2%), and aging (2%).
A variety of different causes, which result in abnormal vibrations of the vocal folds, can cause dysphonia. These causes can range from vocal abuse and misuse to systemic diseases. Causes of dysphonia can be divided into five basic categories, although overlap may occur between categories. (Note that this list is not exhaustive):
1. Neoplastic/structural: Abnormal growths of the vocal fold tissue.
* Dysplasia
* Cysts
* Polyps
* Nodules
* Carcinoma
2. Inflammatory: Changes in the vocal fold tissue as a result of inflammation.
* Allergy
* Infections
* Reflux
* Smoking
* Trauma
* Voice abuse
3. Neuromuscular: Disturbances in any of the components of the nervous system that control laryngeal function.
* Multiple Sclerosis
* Myasthenia Gravis
* Parkinson's disease
* Spasmodic Dysphonia
* Nerve injury
4. Associated Systemic Diseases: Systemic diseases which have manifestations that affect the voice.
* Acromegaly
* Amyloidosis
* Hypothyroidism
* Sarcoidosis
5. Technical: Associated with poor muscle functioning or psychological stresses, with no corresponding physiological abnormalities of the larynx.
* Psychogenic such as dissociation disorder
* Excess demands
* Stress
* Vocal strain
* Employment
It has been suggested that certain occupational groups may be at increased risk of developing dysphonia due to the excessive or intense vocal demands of their work. Research on this topic has primarily focused on teachers and singers, although some studies have examined other groups of heavy voice users (e.g. actors, cheerleaders, aerobic instructors, etc.). At present, it is known that teachers and singers are likely to report dysphonia. Moreover, physical education teachers, teachers in noisy environments, and those who habitually use a loud speaking voice are at increased risk. The term clergyman's throat or dysphonia clericorum was previously used for painful dysphonia associated with public speaking, particularly among preachers. However, the exact prevalence rates for occupational voice users are unclear, as individual studies have varied widely in the methodologies used to obtain data (e.g. employing different operational definitions for "singer").
Additional Information
Hoarseness (dysphonia) is a common problem. You’re hoarse when your voice sounds raspy or strained, is softer than usual or sounds higher or lower than usual. Many things cause hoarseness, but it’s rarely a symptom of a serious illness. Healthcare providers who specialize in ear, nose and throat issues treat hoarseness.
Overview:
What is hoarseness?
Hoarseness (dysphonia) is when your voice sounds rough, raspy, strained or breathy. Hoarseness may affect how loud you speak or your voice’s pitch (how high or low your voice sounds). Many things cause hoarseness, but it’s rarely a sign of a serious illness.
Is hoarseness common?
Hoarseness is very common. About 1 in 3 people will have it at some point in their lives. It often affects people who smoke and those who use their voices professionally, such as teachers, singers, actors, sales representatives and call center employees.
Symptoms and Causes:
What are the symptoms of hoarseness?
The following symptoms may mean you have hoarseness:
* Your voice sounds as if you’re having a hard time talking.
* Your voice sounds raspy or breathy.
* You’re speaking more quietly or softer than usual.
* Your voice sounds higher or lower than usual.
When should I be worried about hoarseness?
Most hoarseness happens because you overuse your voice and goes away on its own. But you should talk to a healthcare provider if your voice is hoarse for three weeks or longer or if there are other concerning signs. Contact a provider right away if you notice that:
* It hurts when you speak or swallow.
* It’s hard to breathe or swallow.
* You’re coughing up blood.
* There’s a lump in your neck.
* You’ve lost your voice.
What causes hoarseness?
To understand why you get hoarse, it may help to know how your voice works. You can speak thanks to your vocal folds (vocal cords) and larynx (voice box). Your larynx sits above your trachea (windpipe) — a long tube that connects your larynx to your lungs.
Your vocal cords are two bands of tissue inside your larynx that open and close. When you speak, air from your lungs makes your vocal cords vibrate and create sound waves. Anything that affects your vocal cords and larynx can make you sound hoarse, including:
* Laryngitis. This is the most common hoarseness cause. It happens when allergies, upper respiratory infections or sinus infections make your vocal cords swell.
* Using your voice more than usual or in different ways. For example, you can become hoarse after making a long speech. Cheering or yelling can affect your voice. So can speaking in a pitch that’s higher or lower than your normal pitch.
* Age. Your vocal cords get thin and limp as you age, which can affect your voice.
* GERD (chronic acid reflux). Also known as heartburn, GERD is when your stomach acids go up into your throat. Sometimes the acids can go as high as your vocal cords, and that’s known as laryngopharyngeal reflux (LPR).
* Vocal cord hemorrhage. This happens when a blood vessel on a vocal cord ruptures, filling the muscle tissues with blood.
* Vocal nodules, cysts and polyps. Nodules, polyps and cysts are noncancerous growths on your vocal cords.
* Vocal cord paralysis. Vocal cord paralysis means that one or both of your vocal cords don’t open or close as they should.
* Recurrent respiratory papillomatosis (RRP/laryngeal papillomatosis). This condition creates benign (noncancerous) warts on and around your vocal cords.
* Spasmodic dysphonia. This chronic neurological speech disorder changes the way your voice sounds.
* Muscle tension dysphonia. This occurs when you put too much stress on your vocal cords and the muscles get tight. It can also be the result of an injury to the neck, shoulders or chest.
* Neurological diseases and disorders. If you have a stroke or Parkinson’s disease, your condition may affect the part of your brain that controls the muscles in your larynx.
* Cancer. Cancers including laryngeal cancer, lung cancer and throat cancer may make you sound hoarse.
2324) Grease
Gist
Grease provides reliable lubrication in high-load and high-speed applications, reducing friction and preventing premature wear. Construction machinery, such as excavators, bulldozers and cranes, and heavy equipment used in mining and material handling require robust lubrication solutions.
Because of its consistency, grease acts as a sealant to prevent lubricant leakage and also to prevent entrance of corrosive contaminants and foreign materials. It also acts to keep deteriorated seals effective. Grease is easier to contain than oil.
Greases or lubricants have traditionally been used to keep vehicles, vessels, machines, and their components lubricated at all times. However, no two lubricants are the same: different types of grease produce different results based on the unique properties they possess.
Summary
Grease is a thick, oily lubricant consisting of inedible lard, the rendered fat of waste animal parts, or a petroleum-derived or synthetic oil containing a thickening agent.
White grease is made from inedible hog fat and has a low content of free fatty acids. Yellow grease is made from darker parts of the hog and may include parts used to make white grease. Brown grease contains beef and mutton fats as well as hog fats. Fleshing grease is the fatty material trimmed from hides and pelts. Bone grease, hide grease, and garbage grease are named according to their origin. In some factories, food offal is used along with animal carcasses, butcher-shop scraps, and garbage from restaurants for recovery of fats.
Greases of mineral or synthetic origin consist of a thickening agent dispersed in a liquid lubricant such as petroleum oil or a synthetic fluid. The thickening agent may be soap, an inorganic gel, or an organic substance. Other additives inhibit oxidation and corrosion, prevent wear, and change viscosity. The fluid component is the more important lubricant for clearances between parts that are relatively large, but for small clearances the molecular soap layers provide the lubrication.
Synthetic grease may consist of synthetic oils containing standard soaps or may be a mixture of synthetic thickeners, or bases, in petroleum oils. Silicones are greases in which both the base and the oil are synthetic. Synthetic greases are made in water-soluble and water-resistant forms and may be used over a wide temperature range. The synthetics can be used in contact with natural or other rubbers because they do not soften these materials.
Special-purpose greases may contain two or more soap bases or special additives to gain a special characteristic.
Details
Grease is a solid or semisolid lubricant formed as a dispersion of thickening agents in a liquid lubricant. Grease generally consists of a soap emulsified with mineral or vegetable oil.
A common feature of greases is that they possess high initial viscosities, which upon the application of shear, drop to give the effect of an oil-lubricated bearing of approximately the same viscosity as the base oil used in the grease. This change in viscosity is called shear thinning. Grease is sometimes used to describe lubricating materials that are simply soft solids or high viscosity liquids, but these materials do not exhibit the shear-thinning properties characteristic of the classical grease. For example, petroleum jellies such as Vaseline are not generally classified as greases.
Greases are applied to mechanisms that can be lubricated only infrequently and where a lubricating oil would not stay in position. They also act as sealants to prevent the ingress of water and incompressible materials. Grease-lubricated bearings have greater frictional resistance than oil-lubricated ones because of the grease's high viscosity.
Properties
A true grease consists of an oil or other fluid lubricant that is mixed with a thickener, typically a soap, to form a solid or semisolid. Greases are usually shear-thinning or pseudo-plastic fluids, which means that the viscosity of the fluid is reduced under shear stress. After sufficient force to shear the grease has been applied, the viscosity drops and approaches that of the base lubricant, such as mineral oil. This sudden drop in shear force means that grease is considered a plastic fluid, and the reduction of shear force with time makes it thixotropic. A few greases are rheotropic, meaning they become more viscous when worked. Grease is often applied using a grease gun, which applies the grease to the part being lubricated under pressure, forcing the solid grease into the spaces in the part.
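The shear-thinning behaviour described above is often approximated with a power-law (Ostwald-de Waele) model, in which apparent viscosity falls as shear rate rises. A minimal sketch, using illustrative constants rather than measured grease data:

```python
def apparent_viscosity(shear_rate, k=100.0, n=0.4):
    """Ostwald-de Waele power-law model: eta = K * gamma_dot**(n - 1).

    n < 1 gives shear thinning: viscosity falls as shear rate rises.
    K (Pa.s^n) and n here are illustrative values, not measured grease data.
    """
    return k * shear_rate ** (n - 1)

# Viscosity drops as the grease is worked harder:
for rate in (0.1, 1.0, 10.0, 100.0):
    print(f"shear rate {rate:>6} 1/s -> viscosity {apparent_viscosity(rate):8.2f} Pa.s")
```

At high shear the computed viscosity approaches that of the base oil, which is the behaviour the paragraph above describes for a grease-lubricated bearing.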
Thickeners
Soaps are the most common emulsifying agents used, and the selection of the type of soap is determined by the application. Soaps include calcium stearate, sodium stearate, and lithium stearate, as well as mixtures of these components. Fatty acid derivatives other than stearates are also used, especially lithium 12-hydroxystearate. The nature of the soap influences the temperature resistance (relating to the viscosity), water resistance, and chemical stability of the resulting grease. Calcium sulphonates and polyureas are increasingly common grease thickeners not based on metallic soaps.
Powdered solids may also be used as thickeners, especially absorbent clays such as bentonite. Fatty oil-based greases have also been prepared with other thickeners, such as tar, graphite, or mica, which also increase the durability of the grease. Silicone greases are generally thickened with silica.
Engineering assessment and analysis
Lithium-based greases are the most commonly used; sodium and lithium-based greases have higher melting points (dropping points) than calcium-based greases but are not resistant to the action of water. Lithium-based grease has a dropping point at 190 to 220 °C (374 to 428 °F). However, the maximum usable temperature for lithium-based grease is 120 °C (248 °F).
The amount of grease in a sample can be determined in a laboratory by extraction with a solvent followed by, for example, gravimetric determination.
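In the gravimetric step, the grease content is simply the mass of extracted residue (weighed after the solvent is evaporated) divided by the sample mass. A minimal sketch with illustrative figures, not a standardized test method:

```python
def grease_content_pct(sample_mass_g, extracted_residue_g):
    """Mass percentage of grease recovered by solvent extraction,
    determined gravimetrically (residue mass after solvent evaporation)."""
    if sample_mass_g <= 0:
        raise ValueError("sample mass must be positive")
    return 100.0 * extracted_residue_g / sample_mass_g

# e.g. a 10.0 g sample yielding 0.85 g of extracted grease:
print(grease_content_pct(10.0, 0.85))  # 8.5
```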
Additives
Some greases are labeled "EP", which indicates "extreme pressure". Under high pressure or shock loading, normal grease can be compressed to the extent that the greased parts come into physical contact, causing friction and wear. EP greases have increased resistance to film breakdown, form sacrificial coatings on the metal surface to protect if the film does break down, or include solid lubricants such as graphite, molybdenum disulfide or hexagonal boron nitride (hBN) to provide protection even without any grease remaining.
Solid additives such as copper or ceramic powder (most often hBN) are added to some greases for static high pressure and/or high temperature applications, or where corrosion could prevent dis-assembly of components later in their service life. These compounds act as release agents. Solid additives cannot be used in bearings, however, because the tight tolerances mean the particles cause increased wear.
History
Grease from the early Egyptian or Roman eras is thought to have been prepared by combining lime with olive oil. The lime saponifies some of the triglyceride that comprises oil to give a calcium grease. In the middle of the 19th century, soaps were intentionally added as thickeners to oils. Over the centuries, all manner of materials have been employed as greases. For example, black slugs Arion ater were used as axle-grease to lubricate wooden axle-trees or carts in Sweden.
Classification and standards
Jointly developed by ASTM International, the National Lubricating Grease Institute (NLGI) and SAE International, standard ASTM D4950 “standard classification and specification for automotive service greases” was first published in 1989 by ASTM International. It categorizes greases suitable for the lubrication of chassis components and wheel bearings of vehicles, based on performance requirements, using codes adopted from the NLGI's “chassis and wheel bearing service classification system”:
* LA and LB: chassis lubricants (suitability up to mild and severe duty respectively)
* GA, GB and GC: wheel-bearings (suitability up to mild, moderate and severe duty respectively)
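The service-category codes above can be captured in a small lookup table. This sketch simply restates the ASTM D4950/NLGI codes listed above; the function name is illustrative:

```python
# ASTM D4950 / NLGI automotive service-grease categories, as listed above.
ASTM_D4950 = {
    "LA": ("chassis", "mild duty"),
    "LB": ("chassis", "severe duty"),
    "GA": ("wheel bearing", "mild duty"),
    "GB": ("wheel bearing", "moderate duty"),
    "GC": ("wheel bearing", "severe duty"),
}

def describe(code):
    """Expand a service-category code into a readable description."""
    application, duty = ASTM_D4950[code.upper()]
    return f"{code.upper()}: {application} grease, suitable up to {duty}"

print(describe("gc"))  # GC: wheel bearing grease, suitable up to severe duty
```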
A given performance category may include greases of different consistencies.
The measure of the consistency of grease is commonly expressed by its NLGI consistency number.
The main elements of standard ASTM D4950 and NLGI's consistency classification are reproduced and described in standard SAE J310 “automotive lubricating greases” published by SAE International.
Standard ISO 6743-9 “lubricants, industrial oils and related products (class L) — classification — part 9: family X (greases)”, first released in 1987 by the International Organization for Standardization, establishes a detailed classification of greases used for the lubrication of equipment, components of machines, vehicles, etc. It assigns a single multi-part code to each grease based on its operational properties (including temperature range, effects of water, load, etc.) and its NLGI consistency number.
Other types:
Silicone grease
Silicone grease is based on a silicone oil, usually thickened with amorphous fumed silica.
Fluoroether-based grease
Fluoroether-based greases are built on fluoropolymers containing C-O-C (ether) linkages with fluorine (F) bonded to the carbon. They are more flexible and often used in demanding environments due to their inertness. Fomblin by Solvay Solexis and Krytox by DuPont are prominent examples.
Laboratory grease
Grease is used to lubricate glass stopcocks and joints. Some laboratories fill grease into syringes for easy application.
Apiezon, silicone-based, and fluoroether-based greases are all used commonly in laboratories for lubricating stopcocks and ground glass joints. The grease helps to prevent joints from "freezing", as well as ensuring high vacuum systems are properly sealed. Apiezon or similar hydrocarbon based greases are the cheapest, and most suitable for high vacuum applications. However, they dissolve in many organic solvents. This quality makes clean-up with pentane or hexanes trivial, but also easily leads to contamination of reaction mixtures.
Silicone-based greases are cheaper than fluoroether-based greases. They are relatively inert and generally do not affect reactions, though reaction mixtures often get contaminated. Silicone-based greases are not easily removed with solvent, but they are removed efficiently by soaking in a base bath.
Fluoroether-based greases are inert to many substances including solvents, acids, bases, and oxidizers. They are, however, expensive, and are not easily cleaned away.
Food-grade grease
Food-grade greases are those greases that may come into contact with food and as such are required to be safe to digest. Food-grade lubricant base oils are generally low-sulfur petrochemical oils, which are less easily oxidized and emulsified; poly-α-olefin base oils are also commonly used. The United States Department of Agriculture (USDA) has three food-grade designations: H1, H2 and H3. H1 lubricants are food-grade lubricants used in food-processing environments where there is the possibility of incidental food contact. H2 lubricants are industrial lubricants used on equipment and machine parts in locations with no possibility of contact. H3 lubricants are food-grade lubricants, typically edible oils, used to prevent rust on hooks, trolleys and similar equipment.
Water-soluble grease analogs
In some cases, the lubrication and high viscosity of a grease are desired in situations where non-toxic, non-oil based materials are required. Carboxymethyl cellulose, or CMC, is one popular material used to create a water-based analog of greases. CMC serves to both thicken the solution and add a lubricating effect, and often silicone-based lubricants are added for additional lubrication. The most familiar example of this type of lubricant, used as a surgical and personal lubricant, is K-Y Jelly.
Cork grease
Cork grease is a lubricant used to lubricate cork, for example in musical wind instruments. It is usually applied using small lip-balm/lip-stick like applicators.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Online
2325) Customer Care
Gist
Customer care is a way of dealing with customers when they interact with your brand, products, or services to keep them happy and satisfied. Customer care goes beyond customer service and support because it focuses on building emotional connections between brands and customers.
What is the role of customer care?
Handle customer complaints, and provide appropriate solutions and alternatives within the time limits; follow up to ensure resolution. Keep records of customer interactions, process customer accounts, and file documents. Follow communication procedures, guidelines and policies. Go the extra mile to engage customers.
What is customer care meaning?
Customer care is when companies treat their customers with respect and kindness and build an emotional connection with them. It's something that can—and should—be handled by everyone on the team, not just a customer service representative or a customer success manager.
Summary
Customer service involves an array of activities to keep existing customers satisfied. An example is computer software manufacturers who allow consumers to telephone them to discuss problems they are encountering with the software. Servicing equipment in the field and training new users are other examples of customer service. The term user-friendly is sometimes applied; the firm wants to develop a reputation as being easy to do business with. Firms continually monitor the levels of customer service they—and their competitors—offer. They might use machines to record how many times customer-service telephones ring before being answered or what percentage of requested repair parts they can deliver within a certain time span.
Details
Customer service is the assistance and advice provided by a company through phone, online chat, mail, and e-mail to those who buy or use its products or services. Each industry requires different levels of customer service, but ultimately the aim of a well-performed service is to increase revenues. The perception of success of the customer service interactions is dependent on employees "who can adjust themselves to the personality of the customer". Customer service is often practiced in a way that reflects the strategies and values of a firm. Good quality customer service is usually measured through customer retention.
Customer service for some firms is part of the firm’s intangible assets and can differentiate it from others in the industry. One good customer service experience can change the entire perception a customer holds towards the organization. It is expected that AI-based chatbots will significantly impact customer service and call centre roles and will increase productivity substantially. Many organisations have already adopted AI chatbots to improve their customer service experience.
The evolution in the service industry has identified the needs of consumers. Companies usually create policies or standards to guide their personnel to follow their particular service package. A service package is a combination of tangible and intangible characteristics a firm uses to take care of its clients.
Customer support
Customer support is a range of consumer services to assist customers in making cost-effective and correct use of a product. It includes assistance in planning, installation, training, troubleshooting, maintenance, upgrading, and disposal of a product. These services may even be provided at the place in which the customer makes use of the product or service. In this case, it is called "at home customer service" or "at home customer support." Customer support is an effective strategy that ensures that the customer's needs have been attended to. Customer support helps ensure that the products and services that have been provided to the customer meet their expectations. Given an effective and efficient customer support experience, customers tend to be loyal to the organization, which creates a competitive advantage over its competitors. Organizations should ensure that any complaints from customers about customer support have been dealt with effectively.
Automation and productivity increase
Customer service may be provided in person (e.g. sales / service representative), or by automated means, such as kiosks, websites, and apps. An advantage of automation is that it can provide service 24 hours a day which can complement face-to-face customer service. There is also economic benefit to the firm. Through the evolution of technology, automated services become less expensive over time. This helps provide services to more customers for a fraction of the cost of employees' wages. Automation can facilitate customer service or replace it entirely.
A popular type of automated customer service is done through artificial intelligence (AI). For the customer, AI offers the feel of chatting with a live agent through improved speech technologies, combined with the convenience of self-service. AI can learn through interaction to give a personalized service. The data exchange that the Internet of Things (IoT) facilitates between devices lets us transfer data when we need it, where we need it. Each gadget captures the information it needs while it maintains communication with other devices. This is also done through advances in hardware and software technology. Another form of automated customer service is the touch-tone phone, which usually involves IVR (Interactive Voice Response), a main menu, and the use of a keypad to select options (e.g. "Press 1 for English, Press 2 for Spanish").
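At its core, a touch-tone IVR main menu amounts to mapping keypad presses to options and replaying the menu on unrecognized input. A minimal sketch with hypothetical prompts and options:

```python
# Hypothetical IVR main-menu options, as in
# "Press 1 for English, Press 2 for Spanish".
MENU = {
    "1": "English",
    "2": "Spanish",
}

def route(keypress):
    """Map a keypad press to the selected option;
    unrecognized input falls back to replaying the menu."""
    return MENU.get(keypress, "replay main menu")

print(route("1"))  # English
print(route("9"))  # replay main menu
```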
In the Internet era, a challenge is to maintain and/or enhance the personal experience while making use of the efficiencies of online commerce. "Online customers are literally invisible to you (and you to them), so it's easy to shortchange them emotionally. But this lack of visual and tactile presence makes it even more crucial to create a sense of personal, human-to-human connection in the online arena."
Examples of customer service by artificial means are automated online assistants that can be seen as avatars on websites, which enterprises can use to reduce operating and training costs. These are driven by chatbots, and a major underlying technology to such systems is natural language processing.
Metrics
The two primary methods of gathering feedback are customer surveys and Net Promoter Score measurement, used for calculating the loyalty that exists between a provider and a consumer.
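Net Promoter Score is conventionally computed from 0-10 survey ratings as the percentage of promoters (ratings 9-10) minus the percentage of detractors (ratings 0-6), with 7-8 counted as passives. That standard formula is assumed here, as the text does not spell it out:

```python
def net_promoter_score(ratings):
    """Net Promoter Score from 0-10 survey ratings:
    % promoters (9-10) minus % detractors (0-6); 7-8 are passives."""
    if not ratings:
        raise ValueError("no ratings")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# 2 promoters and 2 detractors out of 6 responses cancel out:
print(net_promoter_score([10, 9, 8, 7, 6, 3]))  # 0.0
```

The resulting score ranges from -100 (all detractors) to +100 (all promoters), which is why it is read as a measure of the loyalty between a provider and a consumer.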
Instant feedback
Many organizations have implemented feedback loops that allow them to capture feedback at the point of experience. For example, National Express in the UK has invited passengers to send text messages while riding the bus. This has been shown to be useful, as it allows companies to improve their customer service before the customer defects, thus making it far more likely that the customer will return next time.