441) Vulture
Vulture, any of 22 species of large carrion-eating birds that live predominantly in the tropics and subtropics. The seven species of New World vultures include condors, and the 15 Old World species include the lammergeier and griffons. Although many members of the two groups appear similar, they are only distantly related.
All of the New World vultures and some of the Old World vultures have bare heads, a condition that prevents feathers from matting with blood when the birds reach inside carcasses. Most vultures have a large pouch in the throat (crop) and can go for long periods without food—adaptations to a feast-or-famine scavenging lifestyle. In some species the beak is exceptionally strong and heavy for tearing hide, muscle, and even bone. Eyesight in all vultures is well developed, as is the sense of smell in the turkey vulture. Old World vultures have relatively strong feet, but New World vultures have flat, weak feet that are poorly adapted for grasping.
Vultures are widely distributed, but they are absent from Australia and most oceanic islands. Most have broad food habits, consuming carrion, garbage, and even excrement, but rarely do they descend upon live animals. A few occasionally take helpless prey such as lambs and tortoises or, in the case of Andean condors, newborn calves. Vultures may remain aloft for hours, soaring gracefully on long, broad wings. When one bird descends to a dead or dying animal, others may be attracted from miles away. When feeding, vultures maintain a strict social order based on body size and strength of beak. Smaller vultures must wait for the scraps left behind by the larger, dominant species. Even large vultures, however, give way to nearly all mammalian competitors, including jackals, hyenas, and coyotes.
Most vultures inhabit open country, often roosting in groups on cliffs, in tall trees, or on the ground. Old World vultures build large stick platform nests in trees or on cliffs, sometimes in large colonies. Most of the larger Old World vultures lay only a single egg. New World vultures do not build nests but lay their eggs in bare scrapes in natural cavities in cliffs or trees; none nests colonially. The smaller New World vultures lay two eggs and incubate them for just over a month. The largest species lay only a single egg that may take nearly two months to hatch. The young mature more slowly than those of typical birds of prey. New World vultures have no voice because they lack a syrinx; they have a perforated nasal septum.
New World Vultures
The turkey vulture (Cathartes aura) is the most widespread New World vulture, breeding from Canada southward to the southern tip of South America. Northern populations are migratory. They are small brownish black vultures with red heads as adults (dark gray as juveniles) and a wingspan of nearly 2 metres (6.6 feet). They are usually the first to find carcasses, owing to their well-developed sense of smell, but they are more timid than other vultures and retreat while other species feed.
In addition to the California and Andean condors, other notable New World vultures include the black vulture (Coragyps atratus), a New World vulture sometimes called a black buzzard or, inappropriately, a carrion crow. The black vulture, the most abundant vulture species of all, is a resident of the tropics and subtropics that often wanders far into temperate regions. It is a chunky black bird about 60 cm (24 inches) long, with a very short tail, short wings, a bare black head, and a feathered hindneck.
The king vulture (Sarcoramphus papa) is the most colourful vulture. The head and neck are red, yellow, and bluish; the eyes are white with red eye-rings; the body is buff above and white below; and the neck fringe is gray. Wingspan is about 2 metres; the body is about 80 cm (31 inches) long. King vultures range from southern Mexico to Argentina, where they soar singly or in pairs over tropical forests.
New World vultures are generally classified with storks in the order Ciconiiformes.
Old World Vultures
The cinereous vulture, sometimes called the black vulture (Aegypius monachus), is one of the largest flying birds. Many scientists consider this bird to be the largest vulture and the largest bird of prey. It is about 1 metre (3.3 feet) long and 12.5 kg (27.5 pounds) in weight, with a wingspan of about 2.7 metres (8.9 feet). Entirely black with very broad wings and a short, slightly wedge-shaped tail, it ranges through southern Europe, Asia Minor, and the central steppes and highest mountains of Asia, nesting in tall trees. Many of these regions are also inhabited by the slightly smaller bearded vulture, or lammergeier (Gypaetus barbatus).
The Egyptian vulture (Neophron percnopterus), also called Pharaoh’s chicken, is a small Old World vulture about 60 cm (24 inches) long. It is white with black flight feathers, a bare face, and a cascading mane of feathers. This vulture’s range is northern and eastern Africa, southern Europe, and the Middle East to Afghanistan and India.
The common griffon (Gyps fulvus), or Eurasian griffon, is an Old World vulture of northwestern Africa, the Spanish highlands, southern Russia, and the Balkans. Gray above and reddish brown with white streaking below, it is about a metre long. The genus Gyps contains seven similar species, including some of the most common vultures. In South Asia three Gyps species, the Asian white-backed vulture (G. bengalensis), the long-billed vulture (G. indicus), and the slender-billed vulture (G. tenuirostris), have been brought close to extinction by feeding on the carcasses of dead cattle that had been given pain-killing drugs; the pain killers cause kidney failure in the vultures.
The lappet-faced vulture (Torgos tracheliotus), sometimes called the eared, or Nubian, vulture, is a huge Old World vulture of arid Africa. About a metre tall, with a 2.7-metre (8.9-foot) wingspan, it dominates all other vultures when feeding. It is black and brown above and has a wedge-shaped tail; there is white down on the underparts. Large folds of skin hang from the sides of its bare head. The face is pink or reddish.
The palm-nut vulture (Gypohierax angolensis) lives in western and central Africa. It is about 50 cm (20 inches) long and has a bare orange face and yellow beak. It is unusual in being primarily vegetarian, although it sometimes takes crustaceans and dead fish.
The red-headed vulture (Sarcogyps calvus), often called the Pondicherry vulture or the Indian (black) vulture, is an Old World vulture ranging from Pakistan to Malaysia. It is about 75 cm (30 inches) long and has a wingspan of about 2.7 metres (8.9 feet). It is black with white down on the breast and has a huge black beak and large lappets on the sides of the neck.
The white-headed vulture (Trigonoceps occipitalis) is about 80 cm (31 inches) long and has a wingspan of about 1.8 metres (6 feet). Black with white secondary wing feathers and belly, it has a high black neck fringe and a massive red beak. This bird has a uniquely triangular head, which is pale yellowish and bare except for a cap of white down.
Old World vultures comprise the subfamily Aegypiinae of the hawk and eagle family, Accipitridae, which is part of the order Falconiformes.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Are vultures dangerous to humans? As in, do they attack humans unprovoked?
Actually I have never watched Star Wars and am not interested in it anyway, but I chose a Yoda card as my avatar in honor of our great friend bobbym, who has passed away.
May his adventurous soul rest in peace in heaven.
That would require some homework on my part. I concede I am not an authority on the subject.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
442) Sapodilla
Sapodilla, (Manilkara zapota), tropical evergreen tree (family Sapotaceae) and its distinctive fruit, native to southern Mexico, Central America, and parts of the Caribbean. Though of no great commercial importance in any part of the world, the sapodilla is much appreciated in many tropical and subtropical areas, where it is eaten fresh. The milky latex from the tree trunk was once important in the chewing-gum industry as the chief source of chicle; it was also used as chewing gum by the Aztecs. Elaborately carved lintels of sapodilla wood, some 1,000 years old, are still seen in some Mayan ruins.
As a cultivated species, the sapodilla tree is medium-sized and slow-growing. The reddish wood is hard and durable. The leaves, 5–12.5 cm (2–5 inches) long, are glossy and light green in colour and ovate to elliptic in outline; the flowers are small and inconspicuous. The fruit is spheroid to ovoid in shape, rusty brown on the surface, and roughly 5–10 cm (2–4 inches) in diameter. Its sweet flavour has been compared to a combination of pears and brown sugar. When the fruit is ripe, the seeds—two to five in number, shiny black, and the size of flattened beans—are surrounded by translucent, yellowish brown, juicy flesh. When the fruit is immature, its flesh contains both tannin and milky latex and is unpalatable. Propagation is usually by means of seed, but superior trees can be reproduced by grafting.
Sapodilla (Sapota) nutrition facts
Sapodilla, or sapota (chikoo), is another popular tropical fruit, in line with mango, banana, jackfruit, etc. Sapota is composed of a soft, easily digestible pulp made of simple sugars like fructose and sucrose.
Sapota is a tropical evergreen, fruit-bearing tree belonging to the family Sapotaceae, in the genus Manilkara. The fruit, popular as the naseberry, is one of the most relished tropical delicacies in the Caribbean islands. Scientific name: Manilkara zapota.
Sapota is thought to have originated in the Central American rain forests, probably in Mexico and Belize. Today its cultivation has spread all over the tropical belt, and it is grown as a major commercial crop in India, Sri Lanka, Indonesia, and Malaysia. The tree is fast-growing, wind- and drought-resistant, and it flourishes well even in dry arid regions receiving scanty rains. However, irrigation during summer results in better fruit yields.
Botanically, each sapodilla fruit is a berry, round or oval, measuring about 10 cm in diameter and weighing about 150 g. A tree bears as many as 2,000 fruits per year.
Sapota fruit has a gray/brown, sandy, kiwifruit-like outer surface, but without the fuzziness. Unripe fruits possess a white, hard, inedible pulp that secretes a sticky latex containing the toxic substance saponin. This milky latex gradually disappears, and the white flesh turns brown, as the fruit ripens. Once ripe, it becomes soft and acquires a sweet taste and a smooth or grainy texture with a slight musky flavor. It contains about 3-10 black, smooth, shiny, biconvex, bean-shaped, inedible seeds located at its center.
Health benefits of sapodilla
• Sapodilla is one of the high-calorie fruits; 100 g provides 83 calories (almost the same as in sweet potato and banana). Additionally, it is an excellent source of dietary fiber (5.6 g/100 g), which makes it a good bulk laxative. This fiber content helps relieve constipation and helps protect the mucosa of the colon from cancer-causing toxins. (These per-100 g figures are scaled up to a whole fruit in the short sketch after this list.)
• The fruit is rich in the antioxidant polyphenolic compound tannin. Tannins are a complex family of naturally occurring polyphenols. Research studies suggest that tannins possess astringent properties and have potential anti-inflammatory, antiviral, antibacterial, and anti-parasitic effects. Hence, these compounds have found useful applications in traditional medicines as antidiarrheals, as hemostatics (to stop bleeding), and as a remedy for hemorrhoids.
• Furthermore, the anti-inflammatory effect of tannins helps limit conditions like erosive gastritis, reflux esophagitis, enteritis, and irritable bowel disorders. Some other fruits that are also rich in tannins include pomegranate, persimmon, and grapes.
• Sapote contains a good amount of the antioxidant vitamins C (24.5% of the recommended daily intake per 100 g of fruit) and A. Vitamin A is essential for vision and is also required for maintaining healthy mucosa and skin. Consumption of natural fruits rich in vitamin A is known to offer protection from lung and oral cavity cancers. Likewise, consumption of foods containing vitamin C helps the body develop resistance against infectious agents and scavenge harmful free radicals.
• Fresh ripe sapodilla is a good source of minerals like potassium, copper, and iron, and of vitamins like folate, niacin, and pantothenic acid. These compounds are essential for optimal health, as they are involved in various metabolic processes in the body as cofactors for enzymes.
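To make the per-100 g figures above concrete, here is a minimal Python sketch (as promised above) that scales them to one typical fruit, using the ~150 g weight quoted earlier, and to a heavily bearing tree's annual crop. The 60 mg/day vitamin C reference intake used to turn the quoted 24.5% into milligrams is my assumption, not a figure from the text.

# Scale sapodilla's per-100 g nutrition figures (quoted above) to a
# single typical fruit and to one tree's annual crop.
FRUIT_WEIGHT_G = 150      # typical fruit weight quoted earlier in this post
FRUITS_PER_TREE = 2000    # upper-end annual yield quoted earlier

PER_100G = {"calories_kcal": 83, "fiber_g": 5.6}   # figures from the text

scale = FRUIT_WEIGHT_G / 100.0
print(f"One {FRUIT_WEIGHT_G} g fruit: "
      f"{PER_100G['calories_kcal'] * scale:.0f} kcal, "
      f"{PER_100G['fiber_g'] * scale:.1f} g fiber")

# Vitamin C: the text gives 24.5% of the recommended daily intake per 100 g.
# ASSUMPTION: a 60 mg/day reference intake, used only to illustrate the math.
vit_c_mg_per_100g = 0.245 * 60
print(f"Vitamin C: about {vit_c_mg_per_100g * scale:.0f} mg per fruit")

# One heavily bearing tree's crop, in kilograms:
print(f"Annual crop of one tree: about {FRUITS_PER_TREE * FRUIT_WEIGHT_G / 1000:.0f} kg")

So one 150 g fruit carries roughly 125 calories and 8.4 g of fiber, and a heavily bearing tree yields on the order of 300 kg of fruit a year.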
Many cultivars of sapodilla are grown worldwide, including:
• Brown Sugar variety - Fruit is medium to small, 2 to 2-1/2 inches long, nearly round. Skin is light, scruffy brown. Flesh pale brown, fragrant, juicy, very sweet and creamy, texture slightly granular. Quality is superb.
• Prolific variety - the fruit is round-conical, 2-1/2 to 3-1/2 inches long and broad. Skin is scruffy, brown, becoming nearly smooth at maturity. Flesh is light-pinkish, mildly fragrant, texture smooth, flavor sweet, quality good. The tree bears early, consistently and heavily.
• Russel type - The fruit is large, round, 3 to 5 inches in diameter and length. Skin is scruffy brown with gray patches. Flesh is pinkish, mildly fragrant, texture somewhat granular. Flavor is rich and sweet.
• Tikal - A new seedling selection with excellent flavor. It is elliptic, light brown, and smaller than the Prolific variety. It ripens very early.
Selection and storage
Sapodillas are available in the markets year-round. Harvesting is usually done by plucking each fruit gently, as with mango. It is often difficult to tell when a sapodilla is ready to harvest. Mature fruit appears brown and separates easily from the stem without leaking latex. Scratch the fruit to make sure the skin is not green beneath the scurf.
In stores, buy fresh sapodillas with smooth, intact skin, free of cuts, cracks, bruises, or wrinkles. Once ripe, the fruit just yields to gentle thumb pressure.
Mature but unripe fruits should be kept at room temperature for 7 to 10 days to ripen. Firm, ripe sapodillas keep well for several days in the home refrigerator, and if stored at 35 °F (about 2 °C) they can be kept for up to six weeks.
Preparation and serving method
Wash off the sandy scruff in cold water before eating. Fresh sapodilla should be eaten once it turns soft. Cut the fruit into two halves, scoop out the flesh with a spoon, and discard the seeds. It is best enjoyed without any additions in order to experience its unique flavor.
Here are some serving tips:
• Fresh sapota sections are a great addition to fruit salads.
• Sapodilla-milkshake (chikoo milkshake)/smoothie is a favorite drink in Asia.
• It is also used in ice creams, fruit jams, cakes, pies, etc.
Safety profile
Latex and tannins are highly concentrated in raw sapodilla fruits, which are therefore intensely bitter. Eating unripe fruits may cause mouth ulcers, an itchy sensation in the throat, and breathing difficulty, especially in children.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
443) Bermuda Triangle
Bermuda Triangle, section of the North Atlantic Ocean off North America in which more than 50 ships and 20 airplanes are said to have mysteriously disappeared. The area, whose boundaries are not universally agreed upon, has a vaguely triangular shape marked by the Atlantic coast of Florida (in the United States), Bermuda, and the Greater Antilles.
Reports of unexplained occurrences in the region date to the mid-19th century. Some ships were discovered completely abandoned for no apparent reason; others transmitted no distress signals and were never seen or heard from again. Aircraft have been reported missing without explanation, and rescue missions are said to have vanished while flying in the area. However, wreckage has not been found, and some of the theories advanced to explain the repeated mysteries have been fanciful. Although theories of supernatural causes for these disappearances abound, geophysical and environmental factors are most likely responsible. One hypothesis is that pilots failed to account for the agonic line—the place at which there is no need to compensate for magnetic compass variation—as they approached the Bermuda Triangle, resulting in significant navigational error and catastrophe. Another popular theory is that the missing vessels were felled by so-called “rogue waves,” which are massive waves that can reach heights of up to 100 feet (30.5 metres) and would theoretically be powerful enough to destroy all evidence of a ship or airplane. The Bermuda Triangle is located in an area of the Atlantic Ocean where storms from multiple directions can converge, making rogue waves more likely to occur.
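A rough worked example shows why an uncorrected compass variation matters as much as the paragraph above suggests. If a pilot fails to apply D degrees of magnetic declination, a straight leg of L miles ends up roughly L·sin(D) miles off track. Here is a minimal Python sketch; the declinations and leg lengths below are illustrative numbers, not values from any incident report.

# Lateral course error from an unapplied compass variation:
# offset ≈ leg_length * sin(declination).
import math

for declination_deg in (5, 10, 15):      # illustrative declination errors
    for leg_miles in (200, 500):         # illustrative leg lengths
        offset = leg_miles * math.sin(math.radians(declination_deg))
        print(f"{declination_deg:>2} deg over {leg_miles} mi "
              f"-> about {offset:.0f} mi off course")

Even a 10-degree error left uncorrected over a 500-mile leg puts an aircraft roughly 87 miles off course, easily enough to miss a small island.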
According to the National Oceanic and Atmospheric Administration, “There is no evidence that mysterious disappearances occur with any greater frequency in the Bermuda Triangle than in any other large, well-traveled area of the ocean,” and boaters and fliers continue to venture through the triangle without event.
What Is Known (and Not Known) About the Bermuda Triangle
People have been trying to solve the “mystery” of the Bermuda Triangle for years. Here’s what we know (and don’t know) about the Bermuda Triangle.
What Is Known About The Bermuda Triangle:
• The Bermuda Triangle is a region of the North Atlantic Ocean (roughly) bounded by the southeastern coast of the U.S., Bermuda, and the islands of the Greater Antilles (Cuba, Hispaniola, Jamaica, and Puerto Rico).
• The exact boundaries of the Bermuda Triangle are not universally agreed upon. Approximations of the total area range between 500,000 and 1,510,000 square miles (1,300,000 and 3,900,000 square kilometers). By all approximations, the region has a vaguely triangular shape. (These figures are sanity-checked in the short sketch at the end of this list.)
• The Bermuda Triangle does not appear on any world maps, and the U.S. Board on Geographic Names does not recognize the Bermuda Triangle as an official region of the Atlantic Ocean.
• Although reports of unexplained occurrences in the region date to the mid-19th century, the phrase “Bermuda Triangle” didn’t come into use until 1964. The phrase first appeared in print in a pulp magazine article by Vincent Gaddis, who used the phrase to describe a triangular region “that has destroyed hundreds of ships and planes without a trace.”
• Despite its reputation, the Bermuda Triangle does not have a high incidence of disappearances. Disappearances do not occur with greater frequency in the Bermuda Triangle than in any other comparable region of the Atlantic Ocean.
• At least two incidents in the region involved U.S. military craft. In March 1918 the collier USS Cyclops, en route to Baltimore, Maryland, from Brazil, disappeared inside the Bermuda Triangle. No explanation was given for its disappearance, and no wreckage was found. Some 27 years later, a squadron of bombers (collectively known as Flight 19) under American Lieut. Charles Carroll Taylor disappeared in the airspace above the Bermuda Triangle. As in the Cyclops incident, no explanation was given and no wreckage was found.
• Charles Berlitz popularized the legend of the Bermuda Triangle in his best-selling book ‘The Bermuda Triangle’ (1974). In the book, Berlitz claimed that the fabled lost island of Atlantis was involved in the disappearances.
• In 2013 the World Wildlife Fund (WWF) conducted an exhaustive study of maritime shipping lanes and determined that the Bermuda Triangle is not one of the world’s 10 most dangerous bodies of water for shipping.
• The Bermuda Triangle sustains heavy daily traffic, both by sea and by air.
• The Bermuda Triangle is one of the most heavily traveled shipping lanes in the world.
• The agonic line sometimes passes through the Bermuda Triangle, including a period in the early 20th century. The agonic line is a place on Earth’s surface where true north and magnetic north align, and there is no need to account for magnetic declination on a compass.
• The Bermuda Triangle is subject to frequent tropical storms and hurricanes.
• The Gulf Stream—a strong ocean current known to cause sharp changes in local weather—passes through the Bermuda Triangle.
• The deepest point in the Atlantic Ocean, the Milwaukee Depth, is located in the Bermuda Triangle. The Puerto Rico Trench reaches a depth of 27,493 feet (8,380 meters) at the Milwaukee Depth.
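As a quick sanity check on the paired imperial and metric figures in the list above (the area range and the Milwaukee Depth), here is a minimal Python sketch; the conversion factors are the standard ones.

# Check the imperial/metric pairs quoted in the list above.
SQ_MI_TO_SQ_KM = 2.589988   # one square mile in square kilometres
FT_TO_M = 0.3048            # one foot in metres

for area_sq_mi in (500_000, 1_510_000):      # quoted area range
    print(f"{area_sq_mi:>9,} sq mi = {area_sq_mi * SQ_MI_TO_SQ_KM:>12,.0f} sq km")

depth_ft = 27_493                            # quoted Milwaukee Depth
print(f"{depth_ft:,} ft = {depth_ft * FT_TO_M:,.0f} m")

The areas come out at about 1,294,994 and 3,910,882 square kilometres, matching the quoted 1,300,000 and 3,900,000 after rounding, and the depth converts to exactly the quoted 8,380 metres.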
What Is Not Known About The Bermuda Triangle:
• The exact number of ships and airplanes that have disappeared in the Bermuda Triangle is not known. The most common estimate is about 50 ships and 20 airplanes.
• The wreckage of many ships and airplanes reported missing in the region has not been recovered.
• It is not known whether disappearances in the Bermuda Triangle have been the result of human error or weather phenomena.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
• In 2013 the World Wildlife Fund (WWF) conducted an exhaustive study of maritime shipping lanes and determined that the Bermuda Triangle is not one of the world’s 10 most dangerous bodies of water for shipping.
Wait, there are at least 10 bodies of water more dangerous than that?
Actually I have never watched Star Wars and am not interested in it anyway, but I chose a Yoda card as my avatar in honor of our great friend bobbym, who has passed away.
May his adventurous soul rest in peace in heaven.
ganesh wrote:
• In 2013 the World Wildlife Fund (WWF) conducted an exhaustive study of maritime shipping lanes and determined that the Bermuda Triangle is not one of the world’s 10 most dangerous bodies of water for shipping.
Wait, there are at least 10 bodies of water more dangerous than that?
• In 2013 the World Wildlife Fund (WWF) conducted an exhaustive study of maritime shipping lanes and determined that the Bermuda Triangle is not one of the world’s 10 most dangerous bodies of water for shipping.
Now, is it clear?
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
That's why I said that. There is a list of the world's 10 most dangerous bodies of water for shipping, and the Bermuda Triangle isn't one of them. Doesn't this imply that all 10 of those are more dangerous than the Bermuda Triangle?
Actually I have never watched Star Wars and am not interested in it anyway, but I chose a Yoda card as my avatar in honor of our great friend bobbym, who has passed away.
May his adventurous soul rest in peace in heaven.
That's what the source of information says, and the source is authentic.
More posts on this subject, viz. the Bermuda Triangle, would be redundant.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
444) Fox
Fox, any of various members of the dog family (Canidae) resembling small to medium-sized bushy-tailed dogs with long fur, pointed ears, and a narrow snout. In a restricted sense, the name refers to the 10 or so species classified as “true” foxes (genus Vulpes), especially the red, or common, fox (V. vulpes), which lives in both the Old World and the New World. Several other foxes belong to genera other than Vulpes, including the North American gray fox, five species of South American fox, the Arctic fox (including the blue fox), the bat-eared fox, and the crab-eating fox.
The Red Fox
Widely held as a symbol of animal cunning, the red fox is the subject of considerable folklore. The red fox has the largest natural distribution of any land mammal except human beings. In the Old World it ranges over virtually all of Europe, temperate Asia, and northern Africa; in the New World it inhabits most of North America. Introduced to Australia, it has established itself throughout much of the continent. The red fox has a coat of long guard hairs, soft, fine underfur that is typically a rich reddish brown, often a white-tipped tail, and black ears and legs. Colour, however, is variable; in North America black and silver coats are found, with a variable amount of white or white-banded hair occurring in a black coat. A form called the cross, or brant, fox is yellowish brown with a black cross extending between the shoulders and down the back; it is found in both North America and the Old World. The Samson fox is a mutant strain of red fox found in northwestern Europe. It lacks the long guard hairs, and the underfur is tightly curled.
Red foxes are generally about 90–105 cm (36–42 inches) long (about 35–40 cm [14–16 inches] of this being tail), stand about 40 cm at the shoulder, and weigh about 5–7 kg (10–15 pounds). Their preferred habitats are mixed landscapes, but they live in environments ranging from Arctic tundra to arid desert. Red foxes adapt very well to human presence, thriving in areas with farmland and woods, and populations can be found in many large cities and suburbs. Mice, voles, and rabbits, as well as eggs, fruit, and birds, make up most of the diet, but foxes readily eat other available food such as carrion, grain (especially sunflower seeds), garbage, pet food left unattended overnight, and domestic poultry. On the prairies of North America, it is estimated that red foxes kill close to a million wild ducks each year. Their impact on domestic birds and some wild game birds has led to their numbers often being regulated near game farms and bird-production areas.
The red fox is hunted for sport and for its pelt, which is a mainstay of the fur trade. Fox pelts, especially those of silver foxes, are commonly produced on fox farms, where the animals are raised until they are fully grown at approximately 10 months of age. In much of their range, red foxes are the primary carrier of rabies. Several countries, especially the United Kingdom and France, have extensive culling and vaccination programs aimed at reducing the incidence of rabies in red foxes.
Red foxes mate in winter. After a gestation period of seven or eight weeks, the female (vixen) gives birth to 1–10 or more (5 is average) young, called cubs or pups. Birth takes place in a den, which is commonly a burrow abandoned by another animal. It is often enlarged by the parent foxes. The cubs remain in the den for about five weeks and are cared for by both parents throughout the summer. The young disperse in the fall once they are fully grown and independent.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
the bat-eared fox
Is this the flying fox?
Actually I have never watched Star Wars and am not interested in it anyway, but I chose a Yoda card as my avatar in honor of our great friend bobbym, who has passed away.
May his adventurous soul rest in peace in heaven.
(a) The bat-eared fox (Otocyon megalotis) is a species of fox found on the African savanna, named for its large ears, which are used for thermoregulation. Fossil records show this canid first appeared during the middle Pleistocene, about 800,000 years ago. It is considered a basal canid species, resembling ancestral forms of the family. It has also been called a Sub-Saharan African version of the fennec fox due to its huge ears.
The bat-eared fox (also referred to as Delalande's fox, long-eared fox, big-eared fox, and black-eared fox) has tawny fur with black ears, legs, and parts of the pointed face. It averages 55 centimetres (22 in) in length (head and body), with ears 13 centimetres (5.1 in) long. It is the only species in the genus Otocyon. The name Otocyon is derived from the Greek words otus for ear and cyon for dog, while the specific name megalotis comes from the Greek words mega for large and otus for ear.
(b) Pteropus (suborder Yinpterochiroptera) is a genus of megabats which are among the largest bats in the world. They are commonly known as fruit bats or flying foxes, among other colloquial names. They live in the tropics and subtropics of Asia (including the Indian subcontinent), Australia, East Africa, and some oceanic islands in the Indian and Pacific Oceans. There are at least 60 extant species in the genus.
Flying foxes eat fruit and other plant matter, and occasionally consume insects as well. They locate resources with their keen sense of smell. Most, but not all, are nocturnal. They navigate with keen eyesight, as they cannot echolocate. They have long life spans and low reproductive outputs, with females of most species producing only one offspring per year. Their slow life history makes their populations vulnerable to threats such as overhunting, culling, and natural disasters. Six flying fox species have been made extinct in modern times by overhunting. Flying foxes are often persecuted for their real or perceived role in damaging crops. They are ecologically beneficial by assisting in the regeneration of forests via seed dispersal. They benefit ecosystems and human interests by pollinating plants.
They are different.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Wow... So a fox which looks like a bat and a bat which looks like a fox are totally different species. Interesting.
Actually I have never watched Star Wars and am not interested in it anyway, but I chose a Yoda card as my avatar in honor of our great friend bobbym, who has passed away.
May his adventurous soul rest in peace in heaven.
The common aspect: both belong to the class Mammalia.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
445) Coin collecting
Coin collecting, also called numismatics, the systematic accumulation and study of coins, tokens, paper money, and objects of similar form and purpose. The collecting of coins is one of the oldest hobbies in the world. With the exception of China and Japan, the introduction of paper money is for the most part a recent development (meaning since the 18th century). Hence, while paper money and other types of notes are collectible, the history of that form of collecting is distinct from coins and largely a modern phenomenon.
Early Coin Collecting
The long-held view that coin collecting began with the Italian Renaissance has been challenged by evidence that the activity is even more venerable. Suetonius (AD 69–122) relates in his De vita Caesarum (Lives of the Caesars; Augustus 75) that the emperor Augustus was fond of old and foreign coins and gave them as gifts to his friends. In addition to this account and a variety of other literary accounts of collecting from Greek and Roman sources, there is tangible archaeological evidence that coins have been collected at least from the Roman era and probably for as long as they have existed. For example, a hoard of some 70 Roman gold coins found at Vidy, Switzerland, did not contain any two specimens of the same type, which implies that the coins were collected during the period of Roman rule in that town.
The broader field of art collecting, for which specific and reliable accounts do exist, began in the 4th or 3rd century BC. Since coins of that period are universally recognized as works of art, and since they were among the most affordable and transportable objects of the art world, it is not surprising that they would have been collected even then. Certainly, they were appreciated for more than their value as currency, because they were often used in jewelry and decorative arts of the period.
During the reign of Trajanus Decius (AD 249–251), the Roman mint issued a series of coins commemorating all of the deified emperors from Augustus through Severus Alexander. The designs on these coins replicated those of coins issued by the honoured rulers—some of the original coins being nearly 300 years old by that time. It would have been necessary for the mint to have examples of the coins to use as prototypes, and it is hard to see such an assemblage as anything but a collection. In AD 805 Charlemagne issued a series of coins that very closely resemble the style and subject matter of Roman Imperial issues—another example of collected coins providing inspiration for die engravers of a later era. The Nestorian scholars and artisans who served the princes of the Jazira (Mesopotamia, now Iraq, Syria, and Turkey) in the 12th and 13th centuries designed a magnificent series of coins with motifs based on ancient Greek and Roman issues. Some of these so accurately render the details of the originals that even the inscriptions are faithfully repeated. Others were modified in intriguing ways. The only difference, for example, between the reverse of a Byzantine coin of Romanus III and its Islamic copy is that the cross has been removed from the emperor’s orb in deference to Muslim sensibilities. The great variety and the sophisticated use of these images reveal the existence of well-studied collections. The eminent French numismatist Ernest Babelon, in his 1901 work Traité des monnaies Grecques et Romaines, refers to a manuscript dating to 1274, Thesaurus magnus in medalis auri optimi, which recorded a formal collection of ancient coins at a monastery in Padua, Italy. Petrarch (1304–1374), the famed humanist of the Italian Renaissance, formed a notably scientific and artistic collection of ancient coins.
Fascination with the images on the coins—depictions of famous rulers, mythological beings, and the like—seems to have generated much of the interest in collecting in these early periods. Because the coins of Asia and Africa did not usually feature images, collecting was not common in these areas until relatively modern times.
The Hobby Of Kings And The Rise Of Numismatic Scholarship
The main difference between coin collecting before and after the Renaissance is the development of an active market. With the new wave of interest, demand for antique coins greatly exceeded the available supply. During the 15th and 16th centuries, ancient-coin collecting became the “hobby of kings,” and the list of collectors is a list of European nobility. At the same time, famous artists were employed by these patrons to create replicas of ancient coins and portrait or commemorative medals, which became collectible in their own right. The appetite of collectors fueled a cottage industry of agents and prompted a search of source lands for salable artifacts. As might be expected, the insatiable market created such demand that it also fostered the introduction of forgeries.
By the 17th century, the nature of collecting had shifted slowly toward serious research. As a result, very broad collections were formed, studied, and cataloged. Numismatics became an academic pursuit, and many important treatises were published during that period. The involvement of institutions and the rise of public collections in the 18th century led to sponsorship of academic study, which elevated numismatics to the stature of a science. Most important, the exchange of information and new discoveries was formalized through detailed and widely published treatises on the topic of coins and collecting. Many of the large private collections of noble families came under state control during this period, and the subsequent cataloging of these holdings added volumes to existing knowledge. This information was readily available to the general public, and coin collecting became a pursuit of middle-class merchants and members of the various professions who were growing in numbers as well as cultural sophistication. Collecting ancient coins is one of the few ways that the average person can own actual objects from antiquity, and this point was not lost on the growing collector base. Coins are remarkably accessible pieces of history.
Modern Collecting
The number of private coin collectors increased dramatically during the 19th century, and handbooks for the novice began to appear. The scope of collecting broadened from ancient coins to coins of the world, and the activity became a popular hobby. Numismatic societies were formed throughout Britain, Europe, and the United States, with membership open to all ranks of the general public. Periodicals about coin collecting emerged, and the growing appetite of new advocates led to a prosperous industry.
The 20th century saw an even greater widening of the coin-collecting fraternity, with the establishment of coin shows, numismatic conventions, international conferences, academic symposia, and a proliferation of local clubs. Some of these clubs banded together to form large and influential associations. At the same time, the community of professional numismatists (coin dealers) became more tightly knit, and trade associations were established.
During this time a popular market for coins began to develop. Previously, only the wealthy had purchased ancient coins, and the sources were few. As the general public became increasingly conscious of ancient coins as collectibles and a wider demand became apparent in the market, more effort was expended by local entrepreneurs to locate sources. This led to widespread excavation of ancient sites. Additionally, farmers, who regularly found coins and small artifacts on their tilled land, began to realize the worth of these items. Hundreds of thousands of coins were discovered, sold, and disseminated throughout the cultural centres of Europe. This led to a situation wherein the scarcity of individual coin types could be observed and evaluated. Many of the most common types of ancient coins existed (and still do exist) in great quantities, saturating the market and creating very low prices for these types. At the same time, of course, the price for very rare coins escalated. Consequently, entry-level coin collectors can find ancient coins to be very inexpensive while seasoned collectors find choice and rare examples expensive and difficult to obtain.
The development of the market also led to some promotion of coins as a vehicle for investment. A number of investors assembled private portfolios of collectible coins. At least two major funds for investment in ancient coins were traded on the New York Stock Exchange in the late 1980s and early ’90s. Modern gold coins are traded as bullion through numerous funds and outlets. Still, the vast majority of coin collecting is hobby-related.
With the advances in modern technology seen in the past two centuries, the issue of forgery has become increasingly important to collectors. There have always been false issues. Many coins were counterfeited in antiquity, either for profit or out of necessity. (The latter occurred because there was not always enough legal tender available for circulation. This situation occurred particularly often in the Roman provinces of Spain, Gaul, and Britain.) Forgery differs from counterfeiting in that the forger seeks to enter his wares into the collector market, where their value as legal tender is irrelevant. For centuries there has been a war of wits between forgers and collectors. Fortunately, there are as many tools at the disposal of the collector as there are available to the forger. Over time the vast majority of forgeries are detected.
Collecting is a behaviour that can be associated with primeval instinct. However, the methodology of collecting coins can vary greatly from collector to collector. The most common approaches are political, economic, historical, artistic, and topical. For example, some collectors strive to acquire a complete set of portraits of notable figures in either a narrow or a very broad field. Others may focus on the metallurgy or denominational relationships of certain issues. The commemoration of historical events has always been a popular theme among coin-issuing authorities, and it is among collectors as well. Coins throughout the ages have reflected the artistic styles of their day. Consequently, they provide modern-day students and admirers of art with a panoply of original sources and an impressive array of miniature art. Coins provide a wealth of topical areas from which to choose and to form a collection. The levels of appeal are deep and diverse, which accounts for the wide and sustained popularity of this hobby.
The collecting of paper currencies began for the most part in the 19th century. As with all collecting, scarcity increases the value of the object, but collectors may also focus on the historical interest of a note. Currency printed by governments that existed only briefly (such as the Confederate States of America), currency printed during brief periods in history (such as the Russian occupation notes circulated in territories controlled by the Soviet government during and after World War II), and unusual currency tied to specific events (such as the concentration camp money that was printed and used by prisoners held by the Nazis in camps such as Theresienstadt during World War II) all have great interest to the collector.
The advent of the Internet spawned an entirely new culture of numismatic collectors. Widespread exposure to a remarkably large audience created more new collectors than the hobby had seen in decades. This brought with it new opportunities and new challenges. The relatively low experience level of Internet buyers created artificial markets that were not sustainable in the long term. After a burst of enthusiasm in the mid-1990s, the Internet market gradually settled down and became a vehicle where well-established businesses could trade with greater effect than in other traditional venues. The simultaneous growth of educational sites allowed a much faster rate of maturity among new collectors who surf the Web. One of the greatest challenges of Internet shopping venues has been to control the integrity of sellers who anonymously emerge out of the vastness of cyberspace.
As coin collecting grew in popularity, greater protections were placed on historical coins. In 1970 UNESCO passed a resolution that identified coins and other classes of objects more than 100 years old as cultural property and recommended controls on the import, export, and transfer of ownership of these items. Each member state that approved the resolution was left to create its own vehicle of enforcement. As a result, many countries where coins were struck in antiquity now prohibit the exportation of these coins. The U.S. law that implements the UNESCO resolution provides for restrictions under certain specified conditions. Private collectors and museums generally oppose import restrictions. The primary advocates of such controls are nationalist governments and archaeological advocacy groups. The British Treasure Act and Portable Antiquities Scheme (both enacted in the mid-1990s) are widely advocated by collector groups as a viable system for the preservation of cultural property and the protection of individual freedoms.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
446) Helmet
Helmet, defensive covering for the head, one of the most universal forms of armour. Helmets are usually thought of as military equipment, but they are also worn by firefighters, miners, construction workers, riot police, motorcyclists, players of several sports, and bicyclists.
Military helmets date from ancient times. Their basic function was to protect the head, face, and sometimes the neck from projectiles and the cutting blows of swords, spears, arrows, and other weapons. The Assyrians and Persians had helmets of leather and iron, and the Greeks brought helmet making to a pinnacle of craftsmanship with their bronze helmets, some of which covered the entire head, with only a narrow opening in front for vision and breathing. The Romans developed several forms of helmets, including the round legionary’s helmet and the special gladiator’s helmet, with broad brim and pierced visor, giving exceptional protection to head, face, and neck.
In northern and western Europe, early helmets were of leather reinforced with bronze or iron straps and usually took the form of conical or hemispherical skullcaps. Gradually the amount of metal increased until entire helmets were fashioned of iron, still following the same form. About the year 1200 the helm, or heaume, emerged. It was a flat-topped cylinder that was put on over the skullcap just before an engagement; experience soon dictated rounded contours that would cause blows to glance off. At the same time, the skullcap developed into the basinet, with pieces added to protect the neck and with a movable visor for the face. By 1500 several highly sophisticated types of helmets were in use, employing hinges or pivots to permit the piece to be put on over the head and then fitted snugly around head and neck so that it could not be knocked off in combat.
In the 16th and 17th centuries light, open helmets with broad brims became popular. In the 18th and 19th centuries, with the growing effectiveness of firearms and the consequent decline in use of the sword and spear, helmets largely disappeared except for the use of light helmets by cavalry. The steel helmet reappeared, however, as a standard item for infantry in the opening years of World War I because it protected the head against the high-velocity metal fragments of exploding artillery shells. The French first adopted the helmet as standard equipment in late 1914 and were quickly followed by the British, the Germans, and then the rest of Europe. The modern infantry helmet is a smoothly rounded hemisphere designed to present glancing surfaces off of which bullets or shell fragments will bounce without imparting their full impact. The typical helmet is a hardened-steel shell with an inner textile liner and weighs about 1 to 4 pounds (0.5 to 1.8 kg).
Separate traditions of materials and workmanship used in making military helmets have developed in non-Western parts of the world. Conical iron and steel helmets—developed in medieval Persia, Turkey, and India—are valued as works of art because of their fine forging and delicate damascening. In Tibet and China, helmets of bronze, leather, and horn have been made for centuries, while Japanese helmets with detachable face guards, finely forged and lacquered, have been recognized as outstanding examples of the armourer’s craft.
Military helmets made a reappearance in World War I as protection in the trenches from shrapnel and snipers’ rounds and remain a basic item of military equipment.
Motorcycle helmet
A motorcycle helmet is a type of helmet used by motorcycle riders. The primary goal of a motorcycle helmet is motorcycle safety – to protect the rider's head during impact, thus preventing or reducing head injury and saving the rider's life. Some helmets provide additional conveniences, such as ventilation, face shields, ear protection, and intercoms.
Motorcyclists are at high risk in traffic crashes. A 2008 systematic review examined studies on motorcycle riders who had crashed and looked at helmet use as an intervention. The review concluded that helmets reduce the risk of head injury by around 69% and death by around 42%. Although it was once speculated that wearing a motorcycle helmet increased neck and spinal injuries in a crash, recent evidence has shown the opposite to be the case: helmets protect against cervical spine injury, and an often-cited small study dating to the mid-1980s "used flawed statistical reasoning".
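To unpack what those percentages mean: a 69% risk reduction leaves a helmeted rider with about 0.31 times the unhelmeted risk. Here is a minimal Python sketch of the arithmetic; the baseline (unhelmeted) probabilities are invented purely for illustration, since the review quotes only the reductions.

# Relative-risk arithmetic for the reductions quoted from the 2008 review.
# The baseline probabilities are HYPOTHETICAL, chosen only to show the math.
reductions = {"head injury": 0.69, "death": 0.42}   # from the review
baseline = {"head injury": 0.10, "death": 0.02}     # invented placeholders

for outcome, r in reductions.items():
    helmeted = baseline[outcome] * (1 - r)
    print(f"{outcome}: baseline {baseline[outcome]:.3f} -> "
          f"helmeted {helmeted:.3f} (relative risk {1 - r:.2f})")

Whatever the true baseline, the relative risks of about 0.31 for head injury and 0.58 for death are what the review's percentages assert.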
Origins
The origins of the crash helmet date back to the Brooklands race track in early 1914, where a medical officer, Dr Eric Gardner, noticed he was seeing a motorcyclist with head injuries about every two weeks. He got a Mr Moss of Bethnal Green to make canvas and shellac helmets stiff enough to withstand a heavy blow and smooth enough to glance off any projections they encountered. He presented the design to the Auto-Cycle Union, which initially condemned it but was later converted to the idea and made helmets compulsory for the 1914 Isle of Man TT races, although there was resistance from riders. Gardner took 94 of these helmets with him to the Isle of Man, and one rider who hit a gate with a glancing blow was saved by the helmet. Dr Gardner later received a letter from the Isle of Man medical officer stating that after the TT they normally had "several interesting concussion cases" but that in 1914 there were none.
In May 1935, T. E. Lawrence (known as Lawrence of Arabia) had a crash on a Brough Superior SS100 on a narrow road near his cottage near Wareham. The accident occurred because a dip in the road obstructed his view of two boys on bicycles. Swerving to avoid them, Lawrence lost control and was thrown over the handlebars. He was not wearing a helmet, and suffered serious head injuries which left him in a coma; he died after six days in the hospital. One of the doctors attending him was Hugh Cairns, a neurosurgeon, who after Lawrence's death began a long study of what he saw as the unnecessary loss of life by motorcycle despatch riders through head injuries. Cairns' research led to the increased use of crash helmets by both military and civilian motorcyclists.
Basic types
There are five basic types of helmets intended for motorcycling, and others not intended for motorcycling but which are used by some riders. All of these types of helmets are secured by a chin strap, and their protective benefits are greatly reduced, if not eliminated, if the chin strap is not securely fastened so as to maintain a snug fit.
From most to least protective, as generally accepted by riders and manufacturers, the helmet types are:
Full face helmet
A full face helmet covers the entire head, with a rear that covers the base of the skull, and a protective section over the front of the chin. Such helmets have an open cutout in a band across the eyes and nose, and often include a clear or tinted transparent plastic face shield, known as a visor, that generally swivels up and down to allow access to the face. Many full face helmets include vents to increase the airflow to the rider. The significant attraction of these helmets is their protectiveness. Some wearers dislike the increased heat, sense of isolation, lack of wind, and reduced hearing of such helmets. Full-face helmets intended for off-road or motocross use sometimes omit the face shield, but extend the visor and chin portions to increase ventilation, since riding off-road is a very strenuous activity. Studies have shown that full face helmets offer the most protection to motorcycle riders because 35% of all crashes showed major impact on the chin-bar area. Wearing a helmet with less coverage eliminates that protection — the less coverage the helmet offers, the less protection for the rider.
Off-road / motocross
The motocross and off-road helmet has clearly elongated chin and visor portions, a chin bar, and partially open face to give the rider extra protection while wearing goggles and to allow the unhindered flow of air during the physical exertion typical of this type of riding. The visor allows the rider to dip his or her head and provide further protection from flying debris during off-road riding. It also serves the obvious purpose of shielding the wearer's eyes from the sun.
Originally, off-road helmets did not include a chin bar, with riders using helmets very similar to modern open face street helmets, and using a face mask to fend off dirt and debris from the nose and mouth. Modern off-road helmets include a (typically angular, rather than round) chin bar to provide some facial impact protection in addition to protection from flying dirt and debris. When properly combined with goggles, the result provides most of the same protective features of full face street helmets.
Modular or "flip-up"
A hybrid between full face and open face helmets for street use is the modular or "flip-up" helmet, also sometimes termed "convertible" or "flip-face". When fully assembled and closed, they resemble full face helmets by bearing a chin bar for absorbing face impacts. Its chin bar may be pivoted upwards (or, in some cases, may be removed) by a special lever to allow access to most of the face, as in an open face helmet. The rider may thus eat, drink or have a conversation without unfastening the chinstrap and removing the helmet, making them popular among motor officers. It is also popular with people who use eyeglasses as it allows them to fit a helmet without removing their glasses.
Many modular helmets are designed to be worn only in the closed position for riding, as the movable chin bar is designed as a convenience feature, useful while not actively riding. The curved shape of an open chin bar and face shield section can cause increased wind drag during riding, as air will not flow around an open modular helmet in the same way as around a three-quarters helmet. Since the chin bar section also protrudes further from the forehead than a three-quarters visor, riding with the helmet in the open position may pose an increased risk of neck injury in a crash. Some modular helmets are dual certified as both full face and open face helmets. The chin bars of those helmets offer real protection, and they can be used in the "open" position while riding. An example of such a helmet is the Shark Evoline.
As of 2008, there have not been wide scientific studies of modular helmets to assess how protective the pivoting or removable chin bars are. Observation and unofficial testing suggest that significantly greater protection exists beyond that for an open face helmet, and may be enough to pass full-face helmet standardized tests, but the extent of protection is not fully established by all standards bodies.
The DOT standard does not require chin bar testing. The Snell Memorial Foundation recently certified a flip-up helmet for the first time. ECE 22.05 allows certification of modular helmets with or without chin bar tests, distinguished by -P (protective lower face cover) and -NP (non-protective) suffixes to the certification number, and additional warning text for non-certified chin bars.
Open face or 3/4 helmet
The open face, or "three-quarters", helmet covers the ears, cheeks, and back of the head, but lacks the lower chin bar of the full face helmet. Many offer snap-on visors that may be used by the rider to reduce sunlight glare. An open face helmet provides the same rear protection as a full face helmet, but little protection to the face, even from non-crash events.
Bugs, dust, or even wind to the face and eyes can cause rider discomfort or injury. As a result, it is not uncommon (and in some U.S. states, is required by law) for riders to wear wrap-around sunglasses or goggles to supplement eye protection with these helmets. Alternatively, many open face helmets include, or can be fitted with, a face shield, which is more effective in stopping flying insects from entering the helmet.
Half helmet
The half helmet is also referred to as a "Shorty" in the United States and a "Pudding Basin" or TT helmet in the UK; it was popular with Rockers and road racers of the 1960s in the British Isles. It has essentially the same front design as an open face helmet but without the lowered rear in the shape of a bowl. The half helmet provides the minimum coverage generally allowed by law in the U.S. and under British Standard 2001:1956.
As with the open face helmet, it is not uncommon to augment this helmet's eye protection through other means such as goggles. Because of their inferiority compared with other helmet styles, some motorcycle safety organizations now prohibit the use of half helmets. Notable UK manufacturers have included Everoak, Chas Owens, and, currently, Davida.
Novelty helmets
There are other types of headwear often called "beanies," "brain buckets", or "novelty helmets", a term which arose since they are uncertified and cannot legally be called motorcycle helmets in some jurisdictions. Such items are often smaller and lighter than helmets made to U.S. Department of Transportation (DOT) standards, and are unsuitable for crash protection because they lack the energy-absorbing foam that protects the brain by allowing it to come to a gradual stop during an impact. A "novelty helmet" can protect the scalp against sunburn while riding and – if it stays on during a crash – might protect the scalp against abrasion, but it has no capability to protect the skull or brain from an impact. In the US, 5% of riders wore non-DOT compliant helmets in 2013, a decrease from 7% the previous year.
Conflicting findings on color visibility
Although black helmets are popular among motorcyclists, one study determined they offer the least visibility to motorists. Riders wearing a plain white helmet rather than a black one were associated with a 24% lower risk of suffering a motorcycle accident injury or death. This study also notes "Riders wearing high visibility clothing and white helmets are likely to be more safety conscious than other riders."
However, the MAIDS report did not back up the claim that helmet color makes any difference in accident frequency; in fact, it found that motorcycles painted white were actually over-represented in the accident sample compared with the exposure data. While recognizing how much riders need to be seen, the MAIDS report documented that riders' clothing usually fails to make them conspicuous, saying that "in 65.3% of all cases, the clothing made no contribution to the conspicuity of the rider or the PTW [powered two-wheeler, i.e. motorcycle]. There were very few cases found in which the bright clothing of the PTW rider enhanced the PTW’s overall conspicuity (46 cases). There were more cases in which the use of dark clothing decreased the conspicuity of the rider and the PTW (120 cases)." The MAIDS report was unable to recommend specific items of clothing or colors to make riders better seen.
Construction
Modern helmets are constructed from plastics. Premium-priced helmets are made with fiberglass reinforced with Kevlar or carbon fiber. They generally have fabric and foam interiors for both comfort and protection. Motorcycle helmets are generally designed to distort in a crash (thus expending the energy otherwise destined for the wearer's skull), so a helmet provides little further protection at the site of its first impact, although the remainder of the helmet continues to protect.
Helmets consist of an inner liner of expanded polystyrene (EPS) foam and an outer shell that protects the EPS. The density and thickness of the EPS are designed to cushion or crush on impact to help prevent head injuries; some manufacturers even combine multiple foam densities to offer better protection. The outer shell can be made of plastics or fiber materials. Some plastics, such as Lexan (a polycarbonate also used in bulletproof glazing), offer very good protection from penetration but will not crush on impact, so the outer shell may look undamaged while the inner EPS is crushed. Fiberglass is less expensive than Lexan but is heavy and labor-intensive to work. Fiberglass and other fiber shells crush on impact, offering better protection. Some manufacturers use Kevlar or carbon fiber to reduce the amount of fiberglass needed; this makes the helmet lighter and more resistant to penetration while still allowing it to crush on impact. However, these materials can be very expensive, and manufacturers balance factors such as protection, comfort, weight, and additional features to meet target price points.
Function
The conventional motorcycle helmet has two principal protective components: a thin, hard outer shell typically made from polycarbonate plastic, fiberglass, or Kevlar, and a soft, thick inner liner usually made of expanded polystyrene (EPS) or expanded polypropylene foam. The purpose of the hard outer shell is:
1) to prevent penetration of the helmet by a pointed object that might otherwise puncture the skull, and
2) to provide structure to the inner liner so it does not disintegrate upon abrasive contact with pavement. This is important because the foams used have very little resistance to penetration and abrasion.
The purpose of the foam liner is to crush during an impact, thereby increasing the distance and period of time over which the head stops and reducing its deceleration.
To understand the action of a helmet, it is first necessary to understand the mechanism of head injury. The common perception that a helmet's purpose is to save the rider's head from splitting open is misleading. Skull fractures are usually not life-threatening unless the fracture is depressed and impinges on the brain beneath, and bone fractures usually heal over a relatively short period. Brain injuries are much more serious. They frequently result in death, permanent disability, or personality change, and, unlike bone, neurological tissue has very limited ability to recover after an injury. Therefore, the primary purpose of a helmet is to prevent traumatic brain injury, while skull and face injuries are a significant secondary concern.
The most common type of head injury in motorcycle accidents is closed head injury, meaning injury in which the skull is not broken as distinct from an open head injury like a bullet wound. Closed head injury results from violent acceleration of the head, which causes the brain to move around inside the skull. During an impact to the front of the head, the brain lurches forwards inside the skull, squeezing the tissue near the impact site and stretching the tissue on the opposite side of the head. Then the brain rebounds in the opposite direction, stretching the tissue near the impact site and squeezing the tissue on the other side of the head. Blood vessels linking the brain to the inside of the skull may also break during this process, causing dangerous bleeding.
Another hazard, susceptibility of the brain to shearing forces, plays a role primarily in injuries that involve rapid and forceful movements of the head, such as in motor vehicle accidents. In these situations rotational forces such as might occur in whiplash-type injuries are particularly important. These forces, associated with the rapid acceleration and deceleration of the head, are smallest at the point of rotation of the brain near the lower end of the brain stem and successively increase at increasing distances from this point. The resulting shearing forces cause different levels in the brain to move relative to one another. This movement produces stretching and tearing of axons (diffuse axonal injury) and the insulating myelin sheath, injuries which are the major cause of loss of consciousness in a head trauma. Small blood vessels are also damaged causing bleeding (petechial hemorrhages) deep within the brain.
It is important that the liner in a motorcycle helmet is soft and thick so the head decelerates at a gentle rate as it sinks into it. The helmet quickly becomes impractical if the liner is more than 1–2 inches (2.5–5.1 cm) thick. This implies a limit to how soft the liner can be: if the liner is too soft, the head will crush it completely upon impact without coming to a stop. Outside the liner is a hard plastic shell, and beyond that is whatever the helmet is hitting, which is usually an unyielding surface such as concrete pavement. Consequently, the head cannot move any further, so after crushing the liner it comes to an abrupt stop, causing high accelerations that injure the brain.
Therefore, an ideal helmet liner is stiff enough to decelerate the impacting head to a stop in a smooth, uniform manner just before it completely crushes the liner, and no stiffer. The required stiffness depends on the impact speed of the head, which is unknown at the time of manufacture. The manufacturer must therefore choose a likely impact speed and optimize the helmet for it. If a real impact is slower than the one for which the helmet was designed, the helmet will still help, but the head will be decelerated somewhat more violently than was strictly necessary given the available space between the inside and outside of the helmet, although that deceleration will still be much less than it would have been without the helmet. If the impact is faster than the one the helmet was designed for, the head will completely crush the liner, slowing but not stopping in the process. When the crush space of the liner runs out, the head stops suddenly, which is not ideal; even so, without the helmet the head would have been brought to a sudden stop from a higher speed, causing more injury. A helmet with a stiffer foam, one that stopped the head before the liner's crush space ran out, would have done a better job. Helmets therefore help most in impacts at the speeds they were designed for, and they continue to help, though not as much, at other speeds. In practice, motorcycle helmet manufacturers choose the impact speed they design for based on the speed used in standard helmet tests. Most standard helmet tests use speeds between 4 and 7 m/s (8.9 and 15.7 mph; 14 and 25 km/h).
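The trade-off can be made concrete with a back-of-the-envelope constant-deceleration model: a head stopping from speed v over a crush depth d experiences an average deceleration of v^2 / (2d). The Python sketch below is only an illustration of this arithmetic; the 4 cm usable crush depth is an assumed round figure, not a value taken from any helmet standard.

G = 9.81  # standard gravity, m/s^2

def average_deceleration(impact_speed_ms, crush_depth_m):
    # From v^2 = 2*a*d: average deceleration (m/s^2) needed to stop
    # from impact_speed_ms within crush_depth_m of liner crush.
    return impact_speed_ms ** 2 / (2 * crush_depth_m)

for v in (4.0, 7.0):                    # the standard test speeds cited above
    a = average_deceleration(v, 0.04)   # assumed ~4 cm of usable crush depth
    print(f"{v} m/s over 4 cm: {a:.0f} m/s^2, about {a / G:.0f} g")

At the 4 m/s test speed this gives roughly 20 g; at 7 m/s, roughly 62 g, which is why optimizing the liner for one impact speed necessarily compromises the others.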
447) Personal computer
Personal computer (PC), a digital computer designed for use by only one person at a time. A typical personal computer assemblage consists of a central processing unit (CPU), which contains the computer’s arithmetic, logic, and control circuitry on an integrated circuit; two types of computer memory, main memory, such as digital random-access memory (RAM), and auxiliary memory, such as magnetic hard disks and special optical compact discs, or read-only memory (ROM) discs (CD-ROMs and DVD-ROMs); and various input/output devices, including a display screen, keyboard and mouse, modem, and printer.
From Hobby Computers To Apple
Computers small and inexpensive enough to be purchased by individuals for use in their homes first became feasible in the 1970s, when large-scale integration made it possible to construct a sufficiently powerful microprocessor on a single semiconductor chip. A small firm named MITS made the first personal computer, the Altair. This computer, which used Intel Corporation’s 8080 microprocessor, was developed in 1974. Though the Altair was popular among computer hobbyists, its commercial appeal was limited.
The personal computer industry truly began in 1977, with the introduction of three preassembled mass-produced personal computers: Apple Computer, Inc.’s (now Apple Inc.) Apple II, the Tandy Radio Shack TRS-80, and the Commodore Business Machines Personal Electronic Transactor (PET). These machines used eight-bit microprocessors (which process information in groups of eight bits, or binary digits, at a time) and possessed rather limited memory capacity—i.e., the ability to address a given quantity of data held in memory storage. But because personal computers were much less expensive than mainframe computers (the bigger computers typically deployed by large business, industry, and government organizations), they could be purchased by individuals, small and medium-sized businesses, and primary and secondary schools.
Of these computers, the TRS-80 dominated the market. The TRS-80 microcomputer came with four kilobytes of memory, a Z80 microprocessor, a BASIC programming language, and cassettes for data storage. To cut costs, the machine was built without the ability to type lowercase letters. Thanks to Tandy’s chain of Radio Shack stores and the breakthrough price ($399 fully assembled and tested), the machine was successful enough to persuade the company to introduce a more powerful computer two years later, the TRS-80 Model II, which could reasonably be marketed as a small-business computer.
The Apple II received a great boost in popularity when it became the host machine for VisiCalc, the first electronic spreadsheet (computerized accounting program). Other types of application software soon developed for personal computers.
IBM PC
IBM Corporation, the world’s dominant computer maker, did not enter the new market until 1981, when it introduced the IBM Personal Computer, or IBM PC. The IBM PC was significantly faster than rival machines, had about 10 times their memory capacity, and was backed by IBM’s large sales organization. The IBM PC was also the host machine for 1-2-3, an extremely popular spreadsheet introduced by the Lotus Development Corporation in 1982. The IBM PC became the world’s most popular personal computer, and both its microprocessor, the Intel 8088, and its operating system, which was adapted from Microsoft Corporation’s MS-DOS system, became industry standards. Rival machines that used Intel microprocessors and MS-DOS became known as “IBM compatibles” if they tried to compete with IBM on the basis of additional computing power or memory and “IBM clones” if they competed simply on the basis of low price.
GUI
In 1983 Apple introduced Lisa, a personal computer with a graphical user interface (GUI) to perform routine operations. A GUI is a display format that allows the user to select commands, call up files, start programs, and do other routine tasks by using a device called a mouse to point to pictorial symbols (icons) or lists of menu choices on the screen. This type of format had certain advantages over interfaces in which the user typed text- or character-based commands on a keyboard to perform routine tasks. A GUI’s windows, pull-down menus, dialog boxes, and other controlling mechanisms could be used in new programs and applications in a standardized way, so that common tasks were always performed in the same manner. The Lisa’s GUI became the basis of Apple’s Macintosh personal computer, which was introduced in 1984 and proved extremely successful. The Macintosh was particularly useful for desktop publishing because it could lay out text and graphics on the display screen as they would appear on the printed page.
The Macintosh’s graphical interface style was widely adapted by other manufacturers of personal computers and PC software. In 1985 the Microsoft Corporation introduced Microsoft Windows, a graphical user interface that gave MS-DOS-based computers many of the same capabilities as the Macintosh. Windows became the dominant operating environment for personal computers.
Faster, Smaller, And More-Powerful PCs
These advances in software and operating systems were matched by the development of microprocessors containing ever-greater numbers of circuits, with resulting increases in the processing speed and power of personal computers. The Intel 80386 32-bit microprocessor (introduced 1985) gave the Compaq Computer Corporation’s Compaq 386 (introduced 1986) and IBM’s PS/2 family of computers (introduced 1987) greater speed and memory capacity. Apple’s Mac II computer family made equivalent advances with microprocessors made by Motorola, Inc. The memory capacity of personal computers had increased from 64 kilobytes (64,000 characters) in the late 1970s to 100 megabytes (100 million characters) by the early ’90s to several gigabytes (billions of characters) by the early 2000s.
By 1990 some personal computers had become small enough to be completely portable. They included laptop computers, also known as notebook computers, which were about the size of a notebook, and less-powerful pocket-sized computers, known as personal digital assistants (PDAs). At the high end of the PC market, multimedia personal computers equipped with DVD players and digital sound systems allowed users to handle animated images and sound (in addition to text and still images) that were stored on high-capacity DVD-ROMs. Personal computers were increasingly interconnected with each other and with larger computers in networks for the purpose of gathering, sending, and sharing information electronically. The uses of personal computers continued to multiply as the machines became more powerful and their application software proliferated.
By 2000 more than 50 percent of all households in the United States owned a personal computer, and this penetration increased dramatically over the next few years as people in the United States (and around the world) purchased PCs to access the world of information available through the Internet.
As the 2000s progressed, the calculation and video display distinctions between mainframe computers and PCs continued to blur: PCs with multiple microprocessors became more common; microprocessors that contained more than one “core” (CPU) displaced single-core microchips in the PC market; and high-end graphic processing cards, essential for playing the latest electronic games, became standard on all but the cheapest PCs. Likewise, the processor speed, amount and speed of memory, and data-storage capacities of PCs reached or exceeded the levels of earlier supercomputers.
448) International Date Line
International Date Line, also called Date Line, imaginary line extending between the North Pole and the South Pole and arbitrarily demarcating each calendar day from the next. It corresponds along most of its length to the 180th meridian of longitude but deviates eastward through the Bering Strait to avoid dividing Siberia and then deviates westward to include the Aleutian Islands with Alaska. South of the Equator, another eastward deviation allows certain island groups to have the same day as New Zealand.
The International Date Line is a consequence of the worldwide use of timekeeping systems arranged so that local noon corresponds approximately to the time at which the sun crosses the local meridian of longitude (see Standard Time, below). A traveler going completely around the world while carrying a clock that he advanced or set back by one hour whenever he entered a new time zone and a calendar that he advanced by one day whenever his clock indicated midnight would find on returning to his starting point that the date according to his own experience was different by one day from that kept by persons who had remained at the starting point. The International Date Line provides a standard means of making the needed readjustment: travelers moving eastward across the line set their calendars back one day, and those traveling westward set theirs a day ahead.
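The day change can be illustrated with two neighbouring Pacific ports on opposite sides of the line, Apia (Samoa) and Pago Pago (American Samoa). The Python sketch below assumes Python 3.9+ with its standard zoneinfo module and the IANA time zone database; it prints the same instant in both zones, showing a one-day difference in the calendar date.

from datetime import datetime, timezone
from zoneinfo import ZoneInfo

instant = datetime(2024, 6, 1, 0, 0, tzinfo=timezone.utc)  # one shared instant
for zone in ("Pacific/Apia", "Pacific/Pago_Pago"):
    local = instant.astimezone(ZoneInfo(zone))
    print(zone, local.strftime("%Y-%m-%d %H:%M"))
# Pacific/Apia prints 2024-06-01 13:00; Pacific/Pago_Pago prints
# 2024-05-31 13:00: the same moment, a full calendar day apart.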
Standard Time, the time of a region or country that is established by law or general usage as civil time.
The concept was adopted in the late 19th century in an attempt to end the confusion that was caused by each community’s use of its own solar time. Some such standard became increasingly necessary with the development of rapid railway transportation and the consequent confusion of schedules that used scores of different local times kept in separate communities. (Local time varies continuously with change in longitude.) The need for a standard time was felt most particularly in the United States and Canada, where long-distance railway routes passed through places that differed by several hours in local time. Sir Sandford Fleming, a Canadian railway planner and engineer, outlined a plan for worldwide standard time in the late 1870s. Following this initiative, in 1884 delegates from 27 countries met in Washington, D.C., and agreed on a system basically the same as that now in use.
The present system employs 24 standard meridians of longitude (lines running from the North Pole to the South Pole, at right angles to the Equator) 15° apart, starting with the prime meridian through Greenwich, England. These meridians are theoretically the centres of 24 Standard Time zones, although in practice the zones often are subdivided or altered in shape for the convenience of inhabitants; a notable example of such alteration is the eastward extension of the International Date Line around the Pacific island country of Kiribati. Time is the same throughout each zone and differs from the international basis of legal and scientific time, Coordinated Universal Time, by an integral number of hours; minutes and seconds are the same. In a few regions, however, the legal time kept is not that of one of the 24 Standard Time zones, because half-hour or quarter-hour differences are in effect there. In addition, Daylight Saving Time is a common system by which time is advanced one hour from Standard Time, typically to extend daylight hours during conventional waking time and in most cases for part of the year (usually in summer).
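The theoretical division into zones is simple arithmetic: the nominal offset from Greenwich is the longitude divided by 15°, rounded to the nearest whole hour. The Python sketch below expresses only this idealized scheme; as noted above, legally adopted zone boundaries often deviate from it considerably.

def nominal_utc_offset_hours(longitude_deg):
    # East longitude positive; one hour per 15 degrees of longitude.
    return round(longitude_deg / 15)

print(nominal_utc_offset_hours(0.0))     #  0 (Greenwich)
print(nominal_utc_offset_hours(139.7))   #  9 (near Tokyo)
print(nominal_utc_offset_hours(-74.0))   # -5 (near New York)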
Time zone, a zone on the terrestrial globe that is approximately 15° longitude wide and extends from pole to pole and within which a uniform clock time is used. Time zones are the functional basis of standard time.
Meridian, imaginary north–south line on the Earth’s surface that connects both geographic poles; it is used to indicate longitude. The 40th meridian, for example, has a longitude of 40° E or 40° W.
Coordinated Universal Time (UTC), international basis of civil and scientific time, which was introduced on January 1, 1960. The unit of UTC is the atomic second, and UTC is widely broadcast by radio signals. These signals ultimately furnish the basis for the setting of all public and private clocks. Since January 1, 1972, UTC has been modified by adding “leap seconds” when necessary.
UTC serves to accommodate the timekeeping differences that arise between atomic time (which is derived from atomic clocks) and solar time (which is derived from astronomical measurements of Earth’s rotation on its axis relative to the Sun). UTC is thus kept within an exact number of seconds of International Atomic Time and is also kept within 0.9 second of the solar time denoted UT1 (see Universal Time, below). Because of the irregular slowing of Earth’s rate of rotation by tidal friction and other forces, there is now about one more (atomic clock-derived) second in a solar year than there are UT1 seconds. To remedy this discrepancy, UTC is kept within 0.9 second of UT1 by adding a leap second to UTC as needed; the last minute of December or June is made to contain 61 seconds. The slowing of Earth’s rotation varies irregularly, and so the number of leap seconds by which UTC must be retarded to keep it in epoch with UT1 cannot be predicted years in advance. Impending leap seconds for UTC are announced at least eight weeks in advance by the International Earth Rotation and Reference Systems Service at the Paris Observatory, however.
Universal Time (UT), the mean solar time of the Greenwich meridian (0° longitude). Universal Time replaced the designation Greenwich Mean Time in 1928; it is now used to denote the solar time (q.v.) when an accuracy of about one second suffices. In 1955 the International Astronomical Union defined several categories of Universal Time of successively increasing accuracy. UT0 represents the initial values of Universal Time obtained by optical observations of star transits at various astronomical observatories. These values differ slightly from each other because of the effects of polar motion (q.v.). UT1, which gives the precise angular coordinate of the Earth about its spin axis, is obtained by correcting UT0 for the effects of polar motion. Finally, an empirical correction to take account of annual changes in the Earth’s speed of rotation is added to UT1 to convert it into UT2. Coordinated Universal Time (q.v.), the international basis of civil and scientific time, is obtained from an atomic clock that is adjusted in epoch so as to remain close to UT1; in this way, the solar time that is indicated by Universal Time is kept in close coordination with atomic time.
449) DVD
DVD, in full digital video disc or digital versatile disc, type of optical disc used for data storage and as a platform for multimedia. Its most prominent commercial application is for playing back recorded motion pictures and television programs (hence the designation “digital video disc”), though read-only, recordable, and even erasable and rewritable versions can be used on personal computers to store large quantities of almost any kind of data (hence “digital versatile disc”).
The DVD represents the second generation of compact disc (CD) technology, and, in fact, soon after the release of the first audio CDs by the Sony Corporation and Philips Electronics NV in 1982, research was under way on storing high-quality video on the same 120-mm (4.75-inch) disc. In 1994–95 two competing formats were introduced, the Multimedia CD (MMCD) of Sony and Philips and the Super Density (SD) disc of a group led by the Toshiba Corporation and Time Warner Inc. By the end of 1995 the competing groups had agreed on a common format, to be known as DVD, that combined elements of both proposals, and in 1996 the first DVD players went on sale in Japan.
Like a CD drive, a DVD drive uses a laser to read digitized (binary) data that have been encoded onto the disc in the form of tiny pits tracing a spiral track between the centre of the disc and its outer edge. However, because the DVD laser emits light at a shorter wavelength than the CD laser (635 or 650 nanometres for the DVD as opposed to 780 nanometres for the CD), it is able to resolve shorter pits on more narrowly spaced tracks, thereby allowing for greater storage density. In addition, DVDs are available in single- and double-sided versions, with one or two layers of information per side. A double-sided, dual-layer DVD can hold more than 16 gigabytes of data, more than 10 times the capacity of a CD-ROM, but even a single-sided, single-layer DVD can hold more than four gigabytes—more than enough capacity for a two-hour movie that has been digitized in the highly efficient MPEG-2 compression format. Indeed, soon after the first DVD players were introduced, single-sided DVDs became the standard media for watching movies at home, almost completely replacing videotape. Consumers quickly appreciated the convenience of the discs as well as the higher quality of the video images, the interactivity of the digital controls, and the presence of numerous extra features packed into the discs’ capacious storage.
The next generation beyond DVD technology is high-definition, or HD, technology. As television systems switched over to digital signaling, high-definition television (HDTV) became available, featuring much greater picture resolution than traditional television. Motion pictures are especially suited for display on wide flat-panel HDTV screens, and in 2002, as in 1994–95, two competing (and incompatible) technologies were presented for storing high-definition video on a CD-ROM-sized disc: HD DVD, proposed by Toshiba and the NEC Corporation, and Blu-ray, proposed by a group led by Sony. Both technologies employed a laser emitting light in the blue-violet end of the visible spectrum. The extremely short wavelength of this light (405 nanometres) allowed yet smaller pits to be traced on even more closely spaced tracks than on the DVD. As a result, a single-sided, single-layer disc had a storage capacity of 15 gigabytes (HD DVD) or 25 gigabytes (Blu-ray).
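The capacity gain from a shorter wavelength can be estimated with a first-order diffraction argument: the smallest spot a lens can resolve scales with the wavelength, so areal data density scales roughly with the square of the wavelength ratio. The Python sketch below shows only this rough scaling; the actual capacities also reflect higher-numerical-aperture lenses, tighter track pitch, and different error-correction coding.

def density_gain(old_wavelength_nm, new_wavelength_nm):
    # Diffraction-limited spot diameter scales with wavelength, so areal
    # density scales with the square of the wavelength ratio (fixed optics).
    return (old_wavelength_nm / new_wavelength_nm) ** 2

print(f"CD (780 nm) to DVD (650 nm):      x{density_gain(780, 650):.1f}")
print(f"DVD (650 nm) to Blu-ray (405 nm): x{density_gain(650, 405):.1f}")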
With two incompatible technologies on the market, consumers were reluctant to purchase next-generation players for fear that one standard would lose out to the other and render their purchase worthless. In addition, movie studios faced a potentially expensive situation if they produced movies for the losing format, and computer and software firms were concerned about the type of disc drive that would be needed for their products. Those uncertainties created pressure to settle on a format, and in 2008 the entertainment industry accepted Blu-ray as its preferred standard. Toshiba’s group stopped development of HD DVD. By that time, doubts were being raised about how long even the new Blu-ray discs would be viable, as a growing number of high-definition movies were available for “streaming” online, and cloud computing services offered consumers huge data banks for storing all sorts of digitized data.
450) Compass
Compass, in navigation or surveying, the primary device for direction-finding on the surface of the Earth. Compasses may operate on magnetic or gyroscopic principles or by determining the direction of the Sun or a star.
The oldest and most familiar type of compass is the magnetic compass, which is used in different forms in aircraft, ships, and land vehicles and by surveyors. Sometime in the 12th century, mariners in China and Europe made the discovery, apparently independently, that a piece of lodestone, a naturally occurring magnetic ore, when floated on a stick in water, tends to align itself so as to point in the direction of the polestar. This discovery was presumably quickly followed by a second, that an iron or steel needle touched by a lodestone for long enough also tends to align itself in a north-south direction. From the knowledge of which way is north, of course, any other direction can be found.
The reason magnetic compasses work as they do is that the Earth itself acts as an enormous bar magnet with a north-south field that causes freely moving magnets to take on the same orientation. The direction of the Earth’s magnetic field is not quite parallel to the north-south axis of the globe, but it is close enough to make an uncorrected compass a reasonably good guide. The inaccuracy, known as variation (or declination), varies in magnitude from point to point upon the Earth. The deflection of a compass needle due to local magnetic influences is called deviation.
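Once the local variation is known, correcting a compass reading is simple arithmetic. The Python sketch below assumes the common sign convention that easterly declination is positive, so the true heading is the magnetic heading plus the declination, wrapped to the 0–360° range.

def true_heading(magnetic_deg, declination_deg):
    # Easterly declination positive: true = magnetic + declination.
    return (magnetic_deg + declination_deg) % 360

print(true_heading(90, 10))    # 100: reading 090 with 10 deg E declination
print(true_heading(5, -12))    # 353: reading 005 with 12 deg W declination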
Over the centuries a number of technical improvements have been made in the magnetic compass. Many of these were pioneered by the English, whose large empire was kept together by naval power and who hence relied heavily upon navigational devices. By the 13th century the compass needle had been mounted upon a pin standing on the bottom of the compass bowl. At first only north and south were marked on the bowl, but then the other 30 principal points of direction were filled in. A card with the points painted on it was mounted directly under the needle, permitting navigators to read their direction from the top of the card. The bowl itself was subsequently hung on gimbals (rings on the side that let it swing freely), ensuring that the card would always be level. In the 17th century the needle itself took the shape of a parallelogram, which was easier to mount than a thin needle.
During the 15th century navigators began to understand that compass needles do not point directly to the North Pole but rather to some nearby point; in Europe, compass needles pointed slightly east of true north. To counteract this difficulty, British navigators adopted conventional meridional compasses, in which the north on the compass card and the “needle north” were the same when the ship passed a point in Cornwall, England. (The magnetic poles, however, wander in a predictable manner—in more recent centuries Europeans have found magnetic north to be west of true north—and this must be considered for navigation.)
In 1745 Gowin Knight, an English inventor, developed a method of magnetizing steel in such a way that it would retain its magnetization for long periods of time; his improved compass needle was bar-shaped and large enough to bear a cap by which it could be mounted on its pivot. The Knight compass was widely used.
Some early compasses did not have water in the bowl and were known as dry-card compasses; their readings were easily disturbed by shocks and vibration. Although they were less affected by shock, liquid-filled compasses were plagued by leaks and were difficult to repair when the pivot became worn. Neither the liquid nor the dry-card type was decisively advantageous until 1862, when the first liquid compass was made with a float on the card that took most of the weight off the pivot. A system of bellows was invented to expand and contract with the liquid, preventing most leaks. With these improvements liquid compasses made dry-card compasses obsolete by the end of the 19th century.
Modern mariners’ compasses are usually mounted in binnacles, cylindrical pedestals with provision for illuminating the compass face from below. Each binnacle contains specially placed magnets and pieces of steel that cancel the magnetic effects of the metal of the ship. Much the same kind of device is used aboard aircraft, except that, in addition, it contains a corrective mechanism for the errors induced in magnetic compasses when airplanes suddenly change course. The corrective mechanism is a gyroscope, which has the property of resisting efforts to change its axis of spin. This system is called a gyromagnetic compass.
Gyroscopes are also employed in a type of nonmagnetic compass called the gyrocompass. The gyroscope is mounted in three sets of concentric rings connected by gimbals, each ring spinning freely. When the initial axis of spin of the central gyroscope is set to point to true north, it will continue to do so and will resist efforts to realign it in any other direction; the gyroscope itself thus functions as a compass. If it begins to precess (wobble), a pendulum weight pulls it back into line. Gyrocompasses are generally used in navigation systems because they can be set to point to true north rather than to magnetic north.
451) Tsunami
Tsunami, (Japanese: “harbour wave”) also called seismic sea wave or tidal wave, catastrophic ocean wave, usually caused by a submarine earthquake, an underwater or coastal landslide, or a volcanic eruption. The term tidal wave is frequently used for such a wave, but it is a misnomer, for the wave has no connection with the tides.
Origin And Development
After an earthquake or other generating impulse occurs, a train of simple, progressive oscillatory waves is propagated great distances over the ocean surface in ever-widening circles, much like the waves produced by a pebble falling into a shallow pool. In deep water a tsunami can travel as fast as 800 km (500 miles) per hour. The wavelengths are enormous, about 100 to 200 km (60 to 120 miles), but the wave amplitudes (heights) are very small, only about 30 to 60 cm (1 to 2 feet). The waves’ periods (the lengths of time for successive crests or troughs to pass a single point) are very long, varying from five minutes to more than an hour. These long periods, coupled with the extremely low steepness and height of the waves, enable them to be completely obscured in deep water by normal wind waves and swell. A ship on the high seas experiences the passage of a tsunami as an insignificant rise and fall of only half a metre (1.6 feet), lasting from five minutes to an hour or more.
As the waves approach the coast of a continent, however, friction with the rising sea bottom reduces the velocity of the waves. As the velocity lessens, the wavelengths become shortened and the wave amplitudes (heights) increase. Coastal waters may rise as high as 30 metres (about 100 feet) above normal sea level in 10 to 15 minutes. The continental shelf waters begin to oscillate after the rise in sea level. Between three and five major oscillations generate most of the damage, frequently appearing as powerful “run-ups” of rushing water that uproot trees, pull buildings off their foundations, carry boats far inshore, and wash away entire beaches, peninsulas, and other low-lying coastal formations. Frequently the succeeding outflow of water is just as destructive as the run-up or even more so. In any case, oscillations may continue for several days until the ocean surface reaches equilibrium.
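These figures follow from the standard long-wave (shallow-water) approximation, valid because a tsunami's wavelength far exceeds the ocean's depth: the wave travels at c = sqrt(g × d), where g is gravitational acceleration and d is the water depth. The Python sketch below uses illustrative round depths to show why the wave slows so dramatically near shore.

import math

G = 9.81  # gravitational acceleration, m/s^2

def tsunami_speed_kmh(depth_m):
    # Long-wave speed c = sqrt(g*d), converted from m/s to km/h.
    return math.sqrt(G * depth_m) * 3.6

print(f"{tsunami_speed_kmh(4000):.0f} km/h in 4,000 m of open ocean")
print(f"{tsunami_speed_kmh(10):.0f} km/h in 10 m of coastal water")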
Much like any other water waves, tsunamis are reflected and refracted by the topography of the seafloor near shore and by the configuration of a coastline. As a result, their effects vary widely from place to place. Occasionally, the first arrival of a tsunami at a coast may be the trough of the wave, in which case the water recedes and exposes the shallow seafloor. Such an occurrence took place in the bay of Lisbon, Portugal, on November 1, 1755, after a large earthquake; many curious people were attracted to the bay floor, and a large number of them were drowned by the wave crest that followed the trough only minutes later.
Notable Tsunamis
One of the most destructive tsunamis in antiquity took place in the eastern Mediterranean Sea on July 21, 365 CE. A fault slip in the subduction zone beneath the island of Crete produced an earthquake with an estimated magnitude of 8.0–8.5, which was powerful enough to raise parts of the western third of the island up to 10 metres (33 feet). The earthquake spawned a tsunami that claimed tens of thousands of lives and caused widespread damage throughout the Mediterranean, from islands in the Aegean Sea westward to the coast of present-day Spain. Tsunami waves pushed ships over harbour walls and onto the roofs of houses in Alexandria, Egypt, while also ruining nearby croplands by inundating them with salt water.
Perhaps the most destructive tsunami in recorded history took place on December 26, 2004, after an earthquake of magnitude 9.1 displaced the ocean floor off the Indonesian island of Sumatra. Two hours later, waves as high as 9 metres (30 feet) struck the eastern coasts of India and Sri Lanka, some 1,200 km (750 miles) away. Within seven hours of the quake, waves washed ashore on the Horn of Africa, more than 3,000 km (1,800 miles) away on the other side of the Indian Ocean. More than 200,000 people were killed, most of them on Sumatra but thousands of others in Thailand, India, and Sri Lanka and smaller numbers in Malaysia, Myanmar, Bangladesh, Maldives, Somalia, and other locations.
On March 11, 2011, seafloor displacement resulting from a magnitude-9.0 earthquake in the Japan Trench of the Pacific Ocean created a large tsunami that devastated much of the eastern coast of Japan’s main island of Honshu. Waves measuring as much as 10 metres (33 feet) high struck the city of Sendai and other low-lying coastal regions of Miyagi prefecture as well as coastal areas in the prefectures of Iwate, Fukushima, Ibaraki, and Chiba. The tsunami also instigated a major nuclear accident at the Fukushima Daiichi power station along the coast.
Other tsunamis of note include those that followed the spectacular explosive eruption of the Krakatoa (Krakatau) volcano on August 26 and 27, 1883, and the Chile earthquake of 1960. A series of blasts from Krakatoa submerged the island of Rakata between Sumatra and Java, creating waves as high as 35 metres (115 feet) in many East Indies localities, and killed more than 36,000 people. The largest earthquake ever recorded (magnitude 9.5) took place in 1960 off the coast of Chile, and it caused a tsunami that killed approximately 2,000 people in Chile, 61 people 15 hours later in Hawaii, and 122 people 22 hours later in Japan.
Tsunami Warning Systems
The hazards presented by tsunamis have led many countries in the Pacific basin to establish tsunami warning systems. A warning may begin with an alert by a geological society that an earthquake large enough to disturb the ocean’s surface (for instance, magnitude 7.0 or higher) has occurred. Meteorological agencies may then report unusual changes in sea level, and the warning centre may combine this information with data on the depth and features of the ocean floor in order to estimate the path, magnitude, and arrival time of the tsunami. Depending on the distance from the seismic disturbance, government authorities may have several hours’ notice to order the evacuation of coastal areas. The Pacific Tsunami Warning Center, located near Honolulu, Hawaii, was established in 1949, three years after a tsunami generated by a submarine earthquake near the Aleutian Islands struck the island of Hawaii around Hilo, killing more than 170 people. It serves as one of two regional warning centres for the United States—the other is located in Palmer, Alaska—and since 1965 it has also served as the warning centre for 26 countries organized by UNESCO’s Intergovernmental Oceanographic Commission into the International Coordination Group for the Tsunami Warning System in the Pacific. Following the disaster of December 2004, UNESCO set a goal of establishing similar systems for the Indian Ocean and eventually the entire globe.
Extraterrestrial Tsunamis
Tsunami waves are not limited to Earth’s surface. An analysis of the Martian surface conducted in 2016, which examined the desert planet’s northern plains by using photographs and thermal imagery, revealed evidence of two separate tsunami events that occurred long ago. These events are thought to have been caused by comet or asteroid impacts.
What Causes Tsunamis?
As natural disasters go, tsunamis are among the worst in terms of overall destruction and loss of life. They rival earthquakes in their ability to suddenly devastate a wide area. In recent years massive tsunamis have caused extensive damage in northern Sumatra and Thailand, parts of Japan's Honshu Island, and parts of Chile. So what are tsunamis, and what causes them?
A tsunami is a catastrophic ocean wave that is usually caused by a submarine earthquake, an underwater or coastal landslide, or the eruption of a volcano. Tsunamis can also result from the impact of a meteor or comet in a body of water. The word tsunami in Japanese means “harbor wave.”
Much like when a rock plunges into a still pond, once a tsunami-generating disturbance in the water occurs, a train of waves propagates outward from the disturbance’s central point. These waves can travel as fast as 800 km (500 miles) per hour, with wavelengths that extend from 100 to 200 km (60 to 120 miles). However, in the open ocean the amplitudes (heights) of the waves are very small, only about 30 to 60 cm (1 to 2 feet), and the period of the waves (that is, the time between successive wave crests or troughs) can last from five minutes to more than an hour. As a result, people on ships far from shore barely perceive the passage of the tsunami underneath them.
As the tsunami approaches the coast of an island or a continent, friction with the rising sea bottom slows the waves, and wavelengths become shortened while wave amplitudes increase. In essence, fast-moving water from further out to sea stacks itself on the slower-moving water near the shore. Just before the tsunami reaches the shore, the water is drawn back by the sudden change in wave activity, effectively pulling the tide out far from where it normally meets the beach. When the tsunami reaches the shore, it can push far inland (limited only by the height of the wave). Waters may rise as high as 30 meters (about 100 feet) above normal sea level within 10 to 15 minutes and inundate low-lying areas.
452) Eclipse
An eclipse is a phenomenon in which the light from a celestial body is temporarily obscured by the presence of another.
A solar eclipse occurs when the Moon is aligned between the Sun and Earth. The trace of the lunar shadow (where the total solar eclipse is visible) is less than 270 km (168 mi) wide. A partial eclipse is visible over a much wider region. When the Moon is farther away from Earth, the lunar disc has a smaller visible diameter than the solar disc, so a narrow ring of the Sun remains uncovered even when the three bodies are aligned. This produces an annular solar eclipse. The ratio between the visible lunar and solar diameters is called the magnitude of the eclipse. At the beginning of the solar eclipse, the Moon progressively covers the solar disc. Illumination of Earth's surface rapidly diminishes, and the air temperature falls a few degrees. Seconds before totality, shadow bands appear: irregular bands of shadow, a few centimeters wide and up to a meter apart, moving over the ground. The diamond ring phase of the eclipse then shines for a few seconds, and later Baily's beads appear on the solar limb. Baily's beads are a string of bright points of light produced by the uneven shape of the lunar limb.
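Whether a central eclipse is total or annular can be checked with simple geometry: each body's apparent angular diameter follows from its physical diameter and its distance. The Python sketch below uses representative round values for the distances (not precise ephemeris data).

import math

def angular_diameter_deg(diameter_km, distance_km):
    # Apparent angular diameter of a sphere seen from a given distance.
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

sun = angular_diameter_deg(1_392_000, 149_600_000)  # mean Earth-Sun distance
moon_near = angular_diameter_deg(3_474, 356_500)    # lunar perigee
moon_far = angular_diameter_deg(3_474, 406_700)     # lunar apogee

print(f"Sun: {sun:.3f} deg; Moon: {moon_far:.3f} to {moon_near:.3f} deg")
# Near apogee the Moon's disc (~0.49 deg) is smaller than the Sun's
# (~0.53 deg), so an aligned eclipse is annular rather than total.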
In the first two to three seconds of the total phase of the eclipse (totality), the chromosphere is visible as a pink halo around half of the limb. The maximal duration of totality varies from eclipse to eclipse, up to 7.5 minutes. The brightest stars and planets are observable in the sky during totality. Prominences are the brightest objects visible continuously during totality. They are clouds of relatively cold (10,000 K) and dense matter with the same properties as the matter of the chromosphere. They emit in lines of hydrogen, helium, and calcium, which produce the pink color of prominences and the chromosphere, and they can always be observed in monochromatic light.
The white corona can be observed from Earth only during total solar eclipses, because its intensity is much lower than the brightness of the sky. It has several components emitting across the entire visible region of the spectrum. The K- (electron or continuum) corona is due to scattering of sunlight on free high-energy electrons, which are at a temperature of 1 million degrees; it has a continuous spectrum, and its light is linearly polarized. The K-corona dominates the corona, has a distinct 11-year cycle, and has a variable structure that depends on the level of solar activity. During the solar maximum it is circular. During the solar minimum it is symmetrical and elongated in the equatorial region, while in the polar regions it has bunches of short rays, or plumes. During intermediate phases it has an asymmetric structure with many streamers of different lengths. The F- (Fraunhofer or dust) corona is due to scattering of sunlight on dust particles and has a Fraunhofer spectrum with absorption lines. Owing to the heating of dust particles close to the Sun, the F-corona evaporates there, producing a large cavity in the dust distribution. The F-corona has an oval shape. Its intensity decreases slowly with distance from the Sun, and it predominates over the K-corona at large distances. The F-corona reaches near-Earth space, producing the zodiacal light (a faint conical glow extending along the ecliptic, visible after sunset or before sunrise in a dark, clear sky). The thermal (T) corona is due to thermal emission of dust particles heated by the Sun.
The solar corona also has components that emit a line spectrum. The E- (emission) corona is due to emission lines of highly ionized atoms of iron, nickel, and calcium. Its intensity decreases rapidly with distance from the Sun, and it is visible in monochromatic light out to about 2 solar radii. The S- (sublimation) corona was reported more recently, but as of 2002 its existence was still debated. It consists of emission from weakly ionized Ca(II) atoms produced by sublimation of dust particles in relatively cold parts of the corona. All these components are visible together in the corona during total eclipses.
The last and most mysterious component of the corona consists of giant coronal streamers, observed only from the orbiting coronagraph LASCO and from stratospheric flights during total eclipses. Their shape and properties differ from those of every other component of the corona, and animations of their development over time resemble visualizations of gusts of solar wind. In recent years evidence has arisen that their nature is the same as that of the plasma tails of comets: fluorescence of ionized gas molecules (originating from the evaporation of comets near the Sun), driven by interaction with the solar wind and sunlight. This component has accordingly been called the fluorescent (Fl) corona, but the hypothesis needs further scientific verification. The corona is divided arbitrarily into the internal corona (up to 1.3 solar radii), which can be observed at any time by coronagraph; the medium corona (1.3–2.3 radii); and the external corona (beyond 2.3 radii), where the F-corona dominates. The edges of the corona gradually fade into the background of the sky, so the apparent size of the corona depends greatly on the spectral region of observation and the clearness of the sky.
Lunar eclipses occur when the Moon passes into Earth's shadow. The Moon does not normally disappear completely; its disc is illuminated by light scattered by the Earth's atmosphere. The color of a lunar eclipse depends strongly on the composition of the atmosphere (the amount of ozone and dust). The full shadow (umbra) cast by Earth is surrounded by a region of partial shadow, called the penumbra. Some lunar eclipses are visible only as penumbral, others as partial. The length of the Moon's path through the umbra, divided by the Moon's diameter, defines the magnitude of a lunar eclipse.
What Causes Lunar and Solar Eclipses?
An eclipse happens when one astronomical body blocks light from or to another. In a lunar eclipse, the Moon moves into the shadow of Earth cast by the Sun. When the Moon passes through the outer part of Earth’s shadow—the penumbra, where the light of the Sun is only partly extinguished—the Moon dims only slightly in what is called a penumbral eclipse. When the Moon passes through the central part of Earth’s shadow—the umbra, where the direct light of the Sun is totally blocked—the lunar eclipse is considered partial if the Moon is partly within the umbra or total if the Moon is completely within it.
In a solar eclipse, the Moon passes between Earth and the Sun and stops some or all of the Sun’s light from reaching Earth. There are three kinds of solar eclipses. In a partial solar eclipse, the Sun is partly covered when the Moon passes in front of it. In a total solar eclipse, the Moon completely covers the Sun. In an annular solar eclipse, the Moon does not completely cover the Sun but leaves the edge of the Sun showing. This last type of eclipse happens when the Moon is farthest in its orbit from Earth and Earth is closest in its orbit to the Sun, which makes the Moon's disk too small to cover the Sun's disk completely.
453) Spider
Spider, (order Araneida or Araneae), any of more than 46,700 species of arachnids that differ from insects in having eight legs rather than six and in having the body divided into two parts rather than three. The use of silk is highly developed among spiders. Spider behaviour and appearance are diverse, and the araneids outside Europe, Japan, and North America have not been thoroughly collected and studied.
All spiders are predators, feeding almost entirely on other arthropods, especially insects. Some spiders are active hunters that chase and overpower their prey. These typically have a well-developed sense of touch or sight. Other spiders instead weave silk snares, or webs, to capture prey. Webs are instinctively constructed and effectively trap flying insects. Many spiders inject venom into their prey to kill it quickly, whereas others first use silk wrappings to immobilize their victims.
General Features
Size range
Spiders range in body length from 0.5 to about 90 mm (0.02–3.5 inches). The largest spiders are the hairy mygalomorphs, commonly referred to as tarantulas, which are found in warm climates and are most abundant in the Americas. Some of the largest mygalomorphs include the goliath bird-eating spider (Theraphosa leblondi or T. blondi), found in parts of the Amazon, and the pinkfoot goliath (T. apophysis), limited to southern Venezuela. The smallest spiders belong to several families found in the tropics, and information about them first became known in the 1980s.
Female spiders generally are much larger than males, a phenomenon known in animals as male/female size dimorphism. Many female orb weavers, such as those in the families Tetragnathidae and Araneidae, show extreme size dimorphism, being at least twice the size of males of the same species. The extreme difference in body size appears to have arisen through selection processes favouring fecundity in females and “bridging” locomotion in males. Bridging is a technique used by spiders for orb web construction; the spider produces a silk thread that is carried by the wind and becomes attached to an object, forming a bridge. Small, light males can build and traverse silk bridges more rapidly than larger, heavier males can. Scientists suspect that this gives small males more mating opportunities, thereby favouring selection for their small size.
Distribution
Spiders are found on all continents (except Antarctica, although spider fragments have been reported there) and at elevations as high as 5,000 metres (16,400 feet) in the Himalayas. Many more species occur in the tropics than in temperate regions. Though most spiders are terrestrial, one Eurasian species is aquatic and lives in slow-moving fresh water. There are a few species that live along shores or on the surface of fresh or salt water.
Small spiders and the young of many larger species secrete long silk strands that catch the wind and can carry the spiders great distances. This behaviour, called ballooning, occurs in many families and expedites distribution. Some species are distributed in this way around the globe within the bounds of the northern jet stream. Ballooning spiders drift through the air at heights that range from 3 metres (10 feet) or less to more than 800 metres (2,600 feet).
Importance
All spiders are predators. Because of their abundance, they are the most important predators of insects. Spiders have been used to control insects in apple orchards in Israel and rice fields in China. Large numbers of spiders have also been observed feeding on insects in South American rice fields and in fields of various North American crops. Modern pest-management strategies emphasize the use of insecticides that do the least damage to natural predators of insect pests.
Although many spiders produce venom for use in capturing prey, few species are toxic to humans. The venom of the black widow (genus Latrodectus) acts as a painful nerve poison. The bite of the brown recluse and others of the genus Loxosceles may cause localized tissue death. Other venomous spiders include the tarantula-like funnel-web spider (genus Atrax) of southeastern Australia and some African members (baboon spiders) of the family Theraphosidae of Africa and South America. In North America Cheiracanthium mildei, a small, pale spider introduced from the Mediterranean, and the native Cheiracanthium inclusum may enter houses in late fall and are responsible for some bites. Occasionally tissue death at the site of the bite occurs. Some American tarantulas throw off abdominal hairs as a defense against predators. The hairs have tiny barbs that penetrate skin and mucous membranes and cause temporary itching and allergic reactions.
Certain species of orb weavers (Araneidae), tarantulas (Theraphosidae), and huntsman spiders (Sparassidae) and members of family Nephilidae are suspected predators of bats, especially species of vesper bats (family Vespertilionidae) and sheath-tailed bats (family Emballonuridae). Birds have also been known to become trapped in spider webs, and in some instances spiders have been observed feeding on birds. These reports have led scientists to propose that flying vertebrates may be an important source of prey for certain species of spiders.
Form And Function
External features
The bodies of spiders, like those of other arachnids, are divided into two parts, the cephalothorax (prosoma) and the abdomen (opisthosoma). The legs are attached to the cephalothorax, which contains the stomach and brain. The top of the cephalothorax is covered by a protective structure, the carapace, while the underside is covered by a structure called the sternum, which has an anterior projection, the labium. The abdomen contains the gut, heart, reproductive organs, and silk glands. Spiders (except the primitive suborder Mesothelae) differ from other arachnids in lacking external segmentation of the abdomen and in having the abdomen attached to the cephalothorax by a narrow stalk, the pedicel. The gut, nerve cord, blood vessels, and sometimes the respiratory tubules (tracheae) pass through the narrow pedicel, which allows the body movements necessary during web construction. Among arachnids other than spiders, the tailless whip scorpions (order Amblypygi) have a pedicel but lack spinnerets. Spiders, like other arthropods, have an outer skeleton (exoskeleton). Inside the cephalothorax is the endosternite, to which some jaw and leg muscles are attached.
Spiders have six pairs of appendages. The first pair, called the chelicerae, constitute the jaws. Each chelicera ends in a fang containing the opening of a poison gland. The chelicerae move forward and down in the tarantula-like spiders but sideways and together in the rest. The venom ducts pass through the chelicerae, which sometimes also contain the venom glands. The second pair of appendages, the pedipalps, are modified in all adult male spiders to carry sperm. In females and immature males, the leglike pedipalps are used to handle food and also function as sense organs. The pedipalpal segment (coxa) attached to the cephalothorax usually is modified to form a structure (endite) that is used in feeding.
The pedipalps are followed by four pairs of walking legs. Each leg consists of eight segments: the coxa, attached to the cephalothorax; a small trochanter; a long, strong femur; a short patella; a long tibia; a metatarsus; a tarsus, which may be subdivided in some species; and a small pretarsus, which bears two claws in spiders that do not build webs and an additional claw between them in web-building ones. The young of two-clawed spiders often have three claws. The legs, covered by long hairlike bristles called setae, contain several types of sense organs and may have accessory claws. A few species use the first pair of legs as feelers. Spiders can amputate their own legs (autotomy); new but shorter legs may appear at the next molt.
Internal features
Nervous system and senses
The nervous system of spiders, unlike that of other arachnids, is completely concentrated in the cephalothorax. The masses of nervous tissue (ganglia) are fused with a ganglion found under the esophagus and below and behind the brain. The shape of the brain, or epipharyngeal ganglion, somewhat reflects the habits of the spider; i.e., in the web builders, which are sensitive to touch, the posterior part of the brain is larger than in spiders that hunt with vision.
The simple eyes of spiders, which number eight or fewer, consist of two groups, the main or direct eyes (called the anterior medians) and the secondary eyes, which include anterior laterals, posterior laterals, and posterior medians. Structures called rhabdoms, which receive light rays, face the lenses in the main eyes; in the other eyes the rhabdoms turn inward. Both the structure of the secondary eyes and eye arrangement are characteristic for each family.
Other sense organs are long fine hairs (trichobothria) on the legs, which are sensitive to air currents and vibrations. Slit sense organs in the form of minute slits or several parallel slits either are located near the leg joints or are scattered over the body. The slit is closed toward the outside by a thin membrane and on the other side by another membrane that may be penetrated by a nerve. Some slit sense organs are sensitive to stresses on the cuticle; others act as vibration receptors or hearing organs. Internal receptors (proprioceptors) provide information about body movement and position. Olfactory (smell-related) organs are specialized hollow hairs found at the tips of pedipalps and legs. Olfaction is used mainly to sense pheromones.
Digestion and excretion
Food is digested outside the mouth (preorally). Some spiders chew their prey as they cover it with enzymes secreted by the digestive tract, whereas others bite the prey and pump digestive enzymes into it before sucking up the liquefied internal tissues.
The mouth leads into a narrow passage, the pharynx, which leads to a sucking stomach, which is part of the midgut. The midgut has a variable number (usually four pairs) of blind extensions, or ceca, that extend into the first segments of the legs (coxae). Additional ceca and a branched digestive gland are located at the front of the abdomen. At the end of the gut a cecum (stercoral pocket, or cloaca) connects with the hindgut before opening through the anus.
The excretory system includes large cells (nephrocytes) in the cephalothorax that concentrate nitrogen-containing wastes, a pigment-storing layer (hypodermis), coxal glands, tubular glands (Malpighian tubules) in the abdomen, and the ends of the abdominal gut ceca, which are filled with a white excretory pigment (guanine). The excreta include various nitrogen-containing compounds—e.g., guanine, adenine, hypoxanthine, and uric acid.
Respiration
The respiratory system, located in the abdomen, consists of book lungs and tracheae. In spiders the book lungs are paired respiratory organs composed of 10 to 80 hollow leaves that extend into a blood sinus separated by small hardened columns. The lungs open into chambers (atria), which open to the outside through one or several slits (spiracles). Tracheae are tubes that conduct air directly to various tissues. The two respiratory organs at the very front end of the abdomen usually are book lungs, and the rear two are tracheae. However, in some groups both pairs are book lungs (as in the tarantula-like spiders) or tracheae (as in some minute spiders). It is impossible to determine from surface structure whether a spider has book lungs, tracheae, or both, because the respiratory organs are covered on the exterior by hardened plates.
Circulation
The circulatory system is best developed in spiders with book lungs and is least developed in spiders with bundles of tracheae going to various parts of the body. In all spiders the abdomen contains a tube-shaped heart, which usually has a variable number of openings (ostia) along its sides and one artery to carry blood (hemolymph) forward and one to carry it backward when the heart contracts. The ostia close during contraction. The forward-flowing artery, which goes into the cephalothorax, is branched in spiders with book lungs. The blood eventually empties into spaces, flows into the book lung sinuses, and travels into a cavity (pericardial cavity) from which it enters the heart through ostia. The blood contains various kinds of blood cells and a respiratory pigment, hemocyanin. Changes in blood pressure function to extend the legs and to break the skin at molting time.
Reproductive system
The reproductive organs (gonads) of male and female spiders are in the abdomen. The eggs are fertilized, as they pass through the oviduct to the outside, by sperm stored in the seminal receptacles after mating. The fertilized egg (zygote) develops in the manner typical of arthropod eggs rich in yolk.
Specialized features
Venom
Venom glands are present in most spiders, but they are absent in the family Uloboridae. The glands are located either in the chelicerae or under the carapace. The venom ducts extend through the chelicerae and open near the tips of the fangs. Venom glands probably originated as accessory digestive glands whose secretions aided in the external digestion of prey. Although the secretions of some spiders may consist entirely of digestive enzymes, those of many species effectively subdue prey, and venoms of a few species are effective against predators, including vertebrates. The spitting spiders (Scytodes, family Scytodidae) secrete a sticky substance that glues potential prey to a surface. The high domed carapace of the spitting spiders is a modification to house the large venom glands.
Characteristics of the venom of various spiders, especially the black widow (genus Latrodectus), have been determined. The various protein components of the venom affect specific organisms, different components affecting mammals and insects. Widows exhibit warning coloration as a red hourglass-shaped mark on the underside of the abdomen; some have a red stripe. Because the spider hangs upside down in its web, the hourglass mark is conspicuous. The venom contains a nerve toxin that causes severe pain in humans, especially in the abdominal region, though a bite is usually not fatal. There are widow spiders in most parts of the world except central Europe and northern Eurasia. Some areas have several species. Although all appear superficially similar, each species has its own habits.
In southeastern Australia the funnel-web spider (genus Atrax, a tarantula-like spider) is dangerous to people. Tarantulas (family Theraphosidae) are venomous, though their venom is mild; in humans the pain associated with a tarantula bite often is described as similar to that of a bee sting.
The bite of the brown recluse (Loxosceles reclusa) results in a localized region of dead tissue (necrotic lesion) that heals slowly. The larger Loxosceles laeta of South America causes a more severe lesion. The bites of several other species belonging to different families may occasionally cause necrotic lesions—e.g., Lycosa raptoria, certain bolas spiders (Mastophora), Phidippus formosus, P. sicarius, the northern yellow sac spider (Cheiracanthium mildei), and other sac spiders (Cheiracanthium). Knowledge of the effects of spider bites on humans is limited because in some species the bite is not noticed at the time it occurs or because the spider is never identified.
Silk
Although silk is produced by some insects, centipedes, and millipedes and a similar substance is produced by mites, pseudoscorpions, and some crustaceans (ostracods and amphipods), only the spiders are true silk specialists. The spider silks that have been studied are proteins called fibroins, which have chemical characteristics similar to those of insect silks. The silk is produced by different types of glands in the abdomen. Ducts from the glands traverse structures called spinnerets, which open to the outside through spigots. Abdominal pressure forces the silk to flow outward, although the rate of flow is controlled by muscular valves in the ducts. Primitive spiders (suborder Mesothelae) have only two types of silk glands, but orb weavers have at least seven, each of which produces a different kind of silk; e.g., aciniform glands produce silk for wrapping prey, ampullate glands produce the draglines and frame threads, and cylindrical glands produce parts of the egg sac. Epigastric silk glands of male spiders produce silk that emerges through spigots in the abdomen between the book lung covers and provides a surface for the sperm to be deposited upon during sperm induction. Silk may have evolved from an excretory product.
Threads of silk from the orb weaver Nephila have a high tensile strength and great elasticity. Silk probably changes to a solid in the spigot or as a result of tension forces. Strands usually are flat or cylindrical as they emerge and are of surprisingly uniform diameter. The glob of silk that binds or anchors strands emerges from the spigot as a liquid.
The movable spinnerets, which consist of telescoping projections, are modified appendages. Two pairs arise from the 10th body segment and two pairs from the 11th. Liphistius, of the suborder Mesothelae, is the only spider with a full complement of four pairs of spinnerets in the adult. Most spiders have three pairs, the forward central pair having been either lost, reduced to a nonfunctional cone (colulus), or transformed into a flat plate (cribellum) through which thousands of minute spigots open. Spiders with a cribellum also have a comb (calamistrum) on the metatarsus of the fourth leg. The calamistrum combs the silk that flows from the cribellum, producing a characteristically woolly (cribellate) silk. (The “comb-footed” spiders of the family Theridiidae, which include the black widow, take their name from a different comb, a row of bristles on the fourth tarsus; they lack a cribellum.)
Natural History
Reproduction and life cycle
Courtship
In male spiders the second pair of appendages (pedipalps) are each modified to form a complex structure for both holding sperm and serving as the copulatory organs. When the time for mating approaches, the male constructs a special web called the sperm web. The silk for it comes from two sources, the spinnerets at the end of the abdomen and the spigots of the epigastric silk glands located between the book lungs. A drop of fluid containing sperm is deposited onto the sperm web through an opening (gonopore) located on the underside of the abdomen. The male draws the sperm into his pedipalps in a process known as sperm induction. This may take anywhere from a few minutes to several hours. Sperm induction may occur before a male seeks a mate or after the mate has been located. If more than one mating occurs, the male must refill the pedipalps between copulations.
The way in which a male finds a female varies. Males generally wander more extensively than females. The wandering males of some species will often follow silk threads. Research has shown that some may recognize both the threads produced by a female of their own species and the female’s condition (i.e., whether she is mature and receptive). Pheromones incorporated into the silk by the female are involved in this behaviour. Other species, especially jumping spiders (family Salticidae), use visual senses to recognize mates.
Males in a few species locate a female and unceremoniously run to her and mate. In most species, however, elaborate courtship patterns have evolved, probably to protect the male from being mistaken for prey. The male of the orb weaver family (Araneidae) and some others court by rhythmically plucking the threads of a web. After the female approaches, he pats and strokes her before mating. When male wolf spiders or jumping spiders see a female, they wave the pedipalps, conveying a visual message characteristic of the species. An appropriate response from a female encourages the approach of the male. Some male wolf spiders tap dry leaves, perhaps to attract a female. Aggregations of tapping males produce sound that can be heard some distance away. A male crab spider quickly and expertly wraps his intended mate with silk. Although the female is able to escape, she does not do so until mating has been completed. After the male of the European nursery-web spider has located a suitable mate, he captures a fly, wraps it in silk, and presents it to the female; while the female is occupied with eating the fly, the male mates with her. If no fly is available, the male may wrap a pebble. Some male spiders use their specialized jaws or legs to hold and immobilize the jaws of the female during mating.
Mating
In most groups, after a male has successfully approached a female and mounted her, he inserts his left pedipalp into the left opening of her genital structure and the right pedipalp into the right opening. In some primitive spiders (e.g., haplogynes, mygalomorphs) and a few others, the male inserts both pedipalps simultaneously into the female’s genital slit.
The female genital structure, or epigynum, is a hardened plate on the underside of the abdomen in front of the gonopore. After the sperm are transferred into the epigynum, they move into receptacles (spermathecae) that connect to the oviducts. Eggs are fertilized as they pass through the oviducts and out through the gonopore.
The force that causes the injection of sperm from the pedipalp of the male into the receptacle of the female has not been established with certainty but may involve increased blood pressure expanding the soft vascular tissue (hematodocha) between the hard plates of the pedipalp. This causes a bulbous structure containing a duct to twist and to hook into the epigynum of the female and inject the sperm as if it were being squeezed from a bulb syringe.
Mating may require only seconds in some species but hours in others. Some males recharge their pedipalps and mate again with the same female. After mating, the males of some species smear a secretion over the epigynum, called an epigynal plug, that prevents the female from mating a second time. Male spiders usually die soon after, or even during, the mating process. The female of one European orb weaver species bites into the abdomen of the male and holds on during mating. Although some females eat the male after mating, this practice is not common. The male of the black widow (genus Latrodectus), for example, usually dies days after mating, although occasionally he is so weak after mating that he is captured and eaten by the female. Male Nephilengys malabarensis spiders of Southeast Asia and the southwestern Pacific region are thought to escape sexual cannibalism through remote copulation, in which the male’s copulatory organ detaches during mating and remains in the female, enabling prolonged sperm transfer. Females of some species mate only once, whereas others mate several times with the same male or mate with several different males. The long-lived females of mygalomorph spiders must mate repeatedly because they shed their skins once or twice a year, including the lining of the spermathecae.
Eggs and egg sacs
Female spiders produce either one egg sac containing several to a thousand eggs or several egg sacs each with successively fewer eggs. Females of many species die after producing the last egg sac. Others provide care for the young for some period of time; these females live one or, at most, two years. Females of the mygalomorph spiders may live up to 25 years and those of the primitive haplogyne spiders up to 10 years.
The protective egg sac surrounding the eggs of most spiders is made of silk. Although a few spiders tie their eggs together with several strands of silk, most construct elaborate sacs of numerous layers of thick silk. Eggs, which often have the appearance of a drop of fluid, are deposited on a silk pad and then wrapped and covered so that the finished egg sac is spherical or disk-shaped. The females of many species place the egg sac on a stalk, attach it to a stone, or cover it with smooth silk before abandoning it. Other females guard their egg sacs or carry them either in their jaws or attached to the spinnerets. The European cobweb spider (Achaearanea saxatile) constructs a silken thimble-shaped structure and will move the egg sac into or out of this structure to regulate egg temperature. Female wolf spiders carry their egg sacs attached to the spinnerets and instinctively bite the egg sac to permit the young to emerge after a certain length of time has elapsed. If a female loses an egg sac, she will make searching movements and may pick up a pebble or a piece of paper and attach it to the spinnerets.
Maturation
The young of most species are independent when they emerge from the egg sac. After hatching, wolf spiderlings, usually numbering 20 to 100, climb onto the back of their mother and remain there about 10 days before dispersing. If they fall off, they climb back up again, seeking contact with bristlelike structures (setae). Some female spiders feed their young. When food has been sufficiently liquefied by the female (in spiders, digestion occurs outside the mouth), the young also feed on their mother’s prey. The female of some spiders, including one European species (Coelotes terrestris), dies at the time the young are ready to feed, and they eat her carcass. The mother of one web spider (Achaearanea riparia) plucks threads of the web to call her young, both to guide them to food sources and to warn them of danger.
Young spiderlings, except for size and undeveloped reproductive organs, resemble adults. They shed their skins (molt) as they increase in size. The number of molts varies among species, within a species, and even among related young of the same sex. Males generally mature earlier and have fewer molts (2 to 8) than females have (6 to 12). Males of some species are mature when they emerge from the egg sac, one or two molts having occurred before emergence. Some spiders mature a few weeks after hatching, but many overwinter in an immature stage. Mygalomorph spiders require three to four years (some authorities claim nine years) to become sexually mature in warm climates.
Before molting, many spiderlings hang by the claws in some inconspicuous place, although mygalomorph spiders turn on their side or back. The protective covering (carapace) of the cephalothorax breaks, either below the eyes or at the posterior end, because of increased blood pressure. The spider then laboriously extracts its legs and abdomen from the old cuticle (skin). Emergence is accompanied by wide fluctuations of blood pressure. These pressure changes raise and lower the setae and gradually force the legs free. The cast-off cuticle, or exuviae, remains behind. Many web builders molt while suspended, with the newly emerged spider dangling from a strand of silk. Until the new exoskeleton hardens, the spider is helpless; thus, molting is hazardous for spiderlings. They may dry out before successfully emerging from the old cuticle, or they may fall victim to a predator while defenseless. Even a small injury during the molting period is usually fatal. Growth and molting are believed to be under the control of hormones. On occasion some spiderlings fail to molt, whereas others undergo delayed molts, perhaps because of faulty hormone balance, and may die. Many spiderlings eventually disperse by ballooning, usually in the fall.
Feeding behaviour
Stalking prey
Most hunting spiders locate prey by searching randomly or by responding to vibrations. Wolf spiders and jumping spiders have keen eyesight. The latter stalk their prey to within 5 to 10 cm (2 to 4 inches) and then pounce when it moves. Many crab spiders wait for prey on flowers of a colour similar to their own. They use their legs to grasp an unsuspecting insect and then give it a lethal bite.
Unique among the hunters are the spitting spiders (family Scytodidae). When these spiders encounter prey, they touch it, back off, and shoot a zigzag stream of sticky material over it. The sticky material, produced by modified venom glands in the cephalothorax, emerges from pores near the tips of the fangs located on the chelicerae. As the victim struggles, the spider approaches cautiously and bites the entangled insect.
Spider webs
Spiders that use silk to capture prey utilize various techniques. Ground-dwelling trap-door spiders construct silk-lined tubes, sometimes with silk trapdoors, from which they dart out to capture passing insects. Other tube-dwelling spiders place silk trip threads around the mouth of the tube. When an insect touches these threads, vibrations inform the spider of a victim’s presence. Funnel-weaving spiders live in silk tubes with a narrow end that extends into vegetation or a crevice and an expanded sheetlike end that vibrates when an insect walks across it. Many web spiders construct silk sheets in vegetation, sometimes one above the other, and often add anchor threads, which trip unsuspecting insects. The irregular three-dimensional web of cobweb spiders (family Theridiidae) has anchoring threads of sticky silk. An insect caught in the web or touching an anchor line becomes entangled, increasingly so if it struggles. If a thread breaks, its elasticity pulls the insect toward the centre of the web.
The most elaborate webs are those of the orb weavers, whose circular nets are conspicuous on dewy mornings. This type of web is constructed by several spider families, which suggests that it is an efficient trap that enables the largest area to be covered with the least possible silk. The web acts like an air filter, trapping weak-flying insects that cannot see the fine silk. Most orb webs are rebuilt every day. The web may be up only during the day or only at night. If a web is damaged during capture of prey, the spider will repair that area. The ways by which spiders keep from becoming entangled in their own webs are not completely understood, nor is their mechanism for cutting the extremely elastic silk threads that are used in web construction.
To begin orb-web construction, the spider releases a silk thread that is carried by wind. If the free end does not become attached to an object, the spider may pull it back and feed on it. If it becomes firmly attached—for example, to a twig—the spider secures the thread and crosses the newly formed bridge, reinforcing it with additional threads. The spider then descends from the centre of the bridge, securing a thread on the ground or on a twig. The centre, or hub, of the web is established when the spider returns to the bridge with a thread and carries it partway across the bridge before securing it; this thread is the first radius, or spoke. After all the spokes are in place, the spider returns to the hub and constructs a few temporary spirals of dry silk toward the outside of the web. The spider then reverses direction, deposits ensnaring silk, and removes the initial spiral. The ensnaring threads form a dense spiral. It takes only about an hour to weave the radii and orb.
Some species attach a signal thread from the hub to a retreat in a leaf so that they are informed (by vibrations) of trapped insects; others remain head-down in the centre of the orb, locating prey by sensing tensions or vibrations in individual spokes. Webs of two spider families (Araneidae and Tetragnathidae) have spirals constructed of a sticky material that dries out after several days and must be rebuilt.
Spiders of the family Uloboridae build a web of woolly (cribellate) ensnaring silk. One group within this family (genus Hyptiotes) weaves only a partial orb. The spider, attached by a thread to vegetation, holds one thread from the tip of the hub until an insect brushes the web. The spider then alternately relaxes and tightens the thread, and the struggling victim becomes completely entangled. Tiny theridiosomatid spiders also control web tension.
Ogre-faced spiders (family Deinopidae) build small flat webs during the evening hours and then cut the attachments and spread the web among their four long front legs. During the night the web is thrown over a passing insect. The spider abandons or eats the web in the morning and passes the day resting on a branch before constructing a new web.
Bolas spiders (Mastophora, Ordgarius) release a single thread with a sticky droplet at the end and hold it with one leg. Some species swing this “bola,” and others throw it when a moth approaches. Male moths are attracted to this spider by its odour, which mimics that of female moths. Many other examples of web specializations have been described.
Spiders usually wrap a captured insect in silk while turning it, as on a spit, before biting it and carrying it either to a retreat or to the hub of the web for feeding or storage. Although the detachable scales of butterfly and moth wings facilitate their escape from the web, spiders have evolved a counterstrategy: they bite such prey before wrapping it rather than afterward.
Some tropical species of spiders are social and live in large communal webs containing hundreds of individuals, most of them female. They cooperate to build and repair the web. The pack of spiders subdues, kills, and consumes insects that have been caught in the communal web.
Classification
Distinguishing taxonomic features
The Araneida are separated into three suborders: Mesothelae (segmented spiders), Orthognatha (mygalomorph spiders), and Labidognatha (araneomorph spiders). The segmented spiders are easily distinguished by indentations on the top of the abdomen—evidence of spiders’ common ancestry with scorpions. The other two suborders are differentiated on the basis of the type of movement of the two jaws; i.e., movement forward and down is orthognath (paraxial), and movement sideways and together is labidognath (diaxial). Other external features that distinguish suborders include the structure of the male pedipalps and the presence or absence of an epigynum in the female. Internal differentiating features include the presence and number of book lungs, number of small openings (ostia) in the heart, and extent of fusion of nerve ganglia in the prosoma. Families are distinguished on the basis of such characteristics as number and spacing of simple eyes, number of tarsal claws, number of spinnerets, habits, structure of chelicerae, and specialized (apomorph) characters such as glands, setae, and teeth and peculiarities of the external genitalia. Species and also genera of araneomorph spiders are usually separated by specializations of the female epigynum and male pedipalp.
Annotated classification
Numerous classification schemes were published in the 1930s, most of them in response to one by Alexander Petrunkevitch, but none of these is now acceptable and up-to-date. All classifications have relied heavily on the work of Eugène Simon, who published in France in the late 19th century. Newer tools, such as scanning electron microscopy, molecular methods, and cladistics, have so far been applied to spiders on only a small scale, but they have already begun to change traditional classification schemes. In addition, many new spiders have been found in the Southern Hemisphere that do not readily fit into established families, a situation that has prompted the proposal of new families without an overall framework for a new system. Fewer than 30 percent of the large neotropical spiders are known (and probably fewer of the small neotropical spiders), while 80 percent or more of the species in northern and central Europe, northern North America, Korea, and Japan are known.
454) Flywheel
Flywheel, heavy wheel attached to a rotating shaft so as to smooth out delivery of power from a motor to a machine. The inertia of the flywheel opposes and moderates fluctuations in the speed of the engine and stores the excess energy for intermittent use. To oppose speed fluctuations effectively, a flywheel is given a high rotational inertia; i.e., most of its weight is well out from the axis. A wheel with a heavy rim connected to the central hub by spokes or a web has a high rotational inertia. Many flywheels used on reciprocating engines to smooth out the flow of power are made in this way. The energy stored in a flywheel, however, depends on both the weight distribution and the rotary speed; if the speed is doubled, the kinetic energy is quadrupled. A rim-type flywheel will burst at a much lower rotary speed than a disk-type wheel of the same weight and diameter. For minimum weight and high energy-storing capacity, a flywheel may be made of high-strength steel and designed as a tapered disk, thick at the centre and thin at the rim.
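As a rough illustration, these relationships can be put into a few lines of Python. The 100-kg mass, 0.5-metre radius, and speeds below are arbitrary assumptions chosen only to show the two effects described above: a rim concentrates rotational inertia, and doubling the speed quadruples the stored energy.

```python
import math

def stored_energy(inertia, rpm):
    """Rotational kinetic energy, E = 1/2 * I * omega^2."""
    omega = rpm * 2 * math.pi / 60          # convert rev/min to rad/s
    return 0.5 * inertia * omega ** 2

mass, radius = 100.0, 0.5                   # kg and metres, assumed values

# A thin rim puts nearly all of the mass at the radius: I = m * r^2.
# A uniform solid disk of the same mass and radius: I = 1/2 * m * r^2.
for shape, inertia in [("rim", mass * radius ** 2),
                       ("disk", 0.5 * mass * radius ** 2)]:
    e_slow = stored_energy(inertia, 3000)
    e_fast = stored_energy(inertia, 6000)
    print(f"{shape}: {e_slow / 1e3:.0f} kJ at 3,000 rpm; "
          f"{e_fast / 1e3:.0f} kJ at 6,000 rpm ({e_fast / e_slow:.0f}x)")
```

At equal mass and speed the rim stores twice the energy of the disk, which is why rim-heavy wheels suit slow reciprocating engines; the tapered disk, as noted above, is the shape that tolerates the highest speeds before bursting.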
In automobile engines the flywheel serves to smooth out the pulses of energy provided by the combustion in the cylinders and to provide energy for the compression stroke of the pistons. The larger the rotational inertia of the flywheel, the smaller the changes in speed resulting from the intermittent power supply and demand.
In power presses the actual punching, shearing, and forming are done in only a fraction of the operating cycle. During the longer, nonactive period, the speed of the flywheel is built up slowly by a comparatively low-powered motor. When the press is operating, most of the required energy is provided by the flywheel.
Flywheels: Introduction
Stop... start... stop... start—it's no way to drive! Every time you slow down or stop a vehicle or machine, you waste the momentum it's built up beforehand, turning its kinetic energy (energy of movement) into heat energy in the brakes. Wouldn't it be better if you could somehow store that energy when you stopped and get it back again the next time you started up? That's one of the jobs that a flywheel can do for you. First used in potters' wheels, then hugely popular in giant engines and machines during the Industrial Revolution, flywheels are now making a comeback in everything from buses and trains to race cars and power plants. Let's take a closer look at how they work!
Why we need flywheels
Engines are happiest and at their most efficient when they're producing power at a constant, relatively high speed. The only trouble is, the vehicles and machines they drive need to operate at all kinds of different speeds and sometimes need to stop altogether. Clutches and gears partly solve this problem. (A clutch is a mechanical "switch" that can disengage an engine from the machine it's driving, while a gear is a pair of interlocked wheels with teeth that changes the speed and torque (turning force) of a machine, so it can go faster or slower even when the engine goes at the same speed.) But what clutches and gears can't do is save the energy you waste when you brake and give it back again later. That's a job for a flywheel!
What is a flywheel?
A flywheel is essentially a very heavy wheel that takes a lot of force to spin around. It might be a large-diameter wheel with spokes and a very heavy metal rim, or it could be a smaller-diameter cylinder made of something like a carbon-fiber composite. Either way, it's the kind of wheel you have to push really hard to set it spinning. Just as a flywheel needs lots of force to start it off, so it needs a lot of force to make it stop. As a result, when it's spinning at high speed, it tends to want to keep on spinning (we say it has a lot of angular momentum), which means it can store a great deal of kinetic energy. You can think of it as a kind of "mechanical battery," but it's storing energy in the form of movement (kinetic energy, in other words) rather than the energy stored in chemical form inside a traditional, electrical battery.
Flywheels come in all shapes and sizes. The laws of physics tell us that large diameter and heavy wheels store more energy than smaller and lighter wheels, while flywheels that spin faster store much more energy than ones that spin slower.
Modern flywheels are a bit different from the ones that were popular during the Industrial Revolution. Instead of wide and heavy steel wheels with even heavier steel rims, 21st-century flywheels tend to be more compact and made from carbon-fiber or composite materials, sometimes with steel rims, which work out perhaps a quarter as heavy.
What does a flywheel do?
Consider something like an old-fashioned steam traction engine—essentially a heavy old tractor powered by a steam engine that runs on the road instead of on rails. Let's say we have a traction engine with a large flywheel that sits between the engine producing the power and the wheels that are taking that power and moving the engine down the road. Further, let's suppose the flywheel has clutches so it can be connected or disconnected from either the steam engine, the driving wheels, or both. The flywheel can do three very useful jobs for us.
First, if the steam engine produces power intermittently (maybe because it has only one cylinder), the flywheel helps to smooth out the power the wheels receive. So while the engine's cylinder might add power to the flywheel every thirty seconds (every time the piston pushes out from the cylinder), the wheels could take power from the flywheel at a steady, continual rate—and the engine would roll smoothly instead of jerking along in fits and starts (as it might if it were powered directly by the piston and cylinder). There's a little simulation of this smoothing job just after the third job, below.
Second, the flywheel can be used to slow down the vehicle, like a brake—but a brake that soaks up the vehicle's energy instead of wasting it like a normal brake. Suppose you're driving a traction engine down a street and you suddenly want to stop. You could disengage the steam engine with the clutch so that the vehicle would start to slow down. As it did so, energy would be transferred from the vehicle to the flywheel, which would pick up speed and keep spinning. You could then disengage the flywheel to make the vehicle stop completely. Next time you set off again, you'd use the clutch to reconnect the flywheel to the driving wheels, so the flywheel would give back much of the energy it absorbed during braking.
Third, a flywheel can be used to provide temporary extra power when the engine can't produce enough. Suppose you want to overtake a slow-moving horse and cart. Let's say the flywheel has been spinning for some time but isn't currently connected to either the engine or the wheels. When you reconnect it to the wheels, it's like a second engine that provides extra power. It only works temporarily, however, because the energy you feed to the wheels must be lost from the flywheel, causing it to slow down.
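Here's a minimal Python sketch of that first, smoothing job. The inertia, torque, and pulse timing are made-up round numbers, not measurements from any real engine: a torque pulse arrives only while the piston fires, a steady load drains power the whole time, and the flywheel keeps the shaft speed inside a narrow band.

```python
# Made-up single-cylinder engine driving a steady load through a flywheel.
INERTIA = 50.0        # kg*m^2 -- flywheel rotational inertia (assumed)
PULSE_TORQUE = 400.0  # N*m -- applied only while the piston is firing
LOAD_TORQUE = 100.0   # N*m -- steady drag from the driving wheels
DT = 0.01             # s -- integration time step

omega = 10.0          # rad/s -- initial shaft speed
speeds = []
for step in range(2000):                   # simulate 20 seconds
    t = step * DT
    firing = (t % 2.0) < 0.5               # piston pushes 0.5 s out of every 2 s
    net_torque = (PULSE_TORQUE if firing else 0.0) - LOAD_TORQUE
    omega += (net_torque / INERTIA) * DT   # Newton's second law for rotation
    speeds.append(omega)

print(f"shaft speed stays between {min(speeds):.1f} and {max(speeds):.1f} rad/s")
```

Cut INERTIA to a tenth of its value and the same pulses swing the speed ten times as widely: that's exactly the jerky ride the flywheel prevents.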
A brief history of flywheels
Ancient flywheels
You could argue that flywheels are among the oldest of inventions: the earliest wheels were made of heavy stone or solid wood and, because they had a high moment of inertia, worked like flywheels whether they were intended to or not. The potter's wheel (perhaps the oldest form of wheel in existence—even older than the wheels used in transportation) relies on its turntable being solid and heavy (or having a heavy rim), so it has a high moment of inertia that keeps it spinning all by itself while you shape the clay on top with your hands. Water wheels, which make power from rivers and creeks, are also designed like flywheels, with strong but light spokes and very heavy rims, so they keep on turning at a constant rate and powering mills at a steady speed. Water wheels like this became popular from Roman times onward.
Flywheels of the Industrial Revolution
The best known flywheels date from the Industrial Revolution and are used in things like factory steam engines and traction engines. Look closely at almost any factory machine from the 18th or 19th century and you'll see a huge flywheel somewhere in the mechanism. Since flywheels are often very large and spin at high speeds, their heavy rims have to withstand extreme forces. They also have to be precision made since, if they're even slightly unbalanced, they will wobble too much and destabilize whatever they're attached to. The widespread availability of iron and steel during the Industrial Revolution made it possible to engineer well-made, high precision flywheels, which played a vital role in ensuring that engines and machines could operate smoothly and efficiently.
Following the work of 19th-century electrical pioneers like Thomas Edison, electric power was soon widely available for driving factory machines, which no longer needed flywheels to smooth erratic, coal-powered steam engines. Meanwhile, road vehicles, ships, trains, and airplanes were using internal combustion engines powered by gasoline, diesel, and kerosene. Flywheels were generally large and heavy and had no place inside something like a car engine or a ship, let alone an airplane. As a result, flywheel technology fell somewhat by the wayside as the 20th century progressed.
Modern flywheels
Since the mid-20th century, interest in flywheels has picked up again, largely because people have become more concerned about the price of fuels and the environmental impact of using them; it makes sense to save energy—and flywheels are very good at doing that. Since about the 1950s, European bus makers such as M.A.N. and Mercedes-Benz have been experimenting with flywheel technology in vehicles known as gyrobuses. The basic idea is to mount a heavy steel flywheel (about 60cm or a couple of feet in diameter, spinning at about 10,000 rpm) between the rear engine of the bus and the rear axle, so it acts as a bridge between the engine and the wheels. Whenever the bus brakes, the flywheel works as a regenerative brake, absorbing kinetic energy and slowing the vehicle down. When the bus starts up again, the flywheel returns its energy to the transmission, saving much of the braking energy that would otherwise have been wasted. Modern railroad and subway trains also make widespread use of regenerative, flywheel brakes, which can give a total energy saving of perhaps a third or more. Some electric car makers have proposed using super-fast spinning flywheels as energy storage devices instead of batteries. One of the big advantages of this would be that flywheels could potentially last for the entire life of a car, unlike batteries, which are likely to need very expensive replacement after perhaps a decade or so.
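How much energy might a gyrobus-style flywheel actually hold? Here's a quick back-of-the-envelope estimate in Python. It takes the roughly 60cm diameter and 10,000 rpm figures above at face value and assumes a 300kg solid steel disk; the mass is purely a guess (a steel disk of that diameter and about 14cm thickness comes out near that figure), so treat the answer as an order-of-magnitude sketch.

```python
import math

diameter = 0.60   # m -- the "about 60cm" figure quoted above
rpm = 10_000      # the spin speed quoted above
mass = 300.0      # kg -- assumed solid steel disk (a guess, not a spec)

radius = diameter / 2
inertia = 0.5 * mass * radius ** 2        # solid disk: I = 1/2 * m * r^2
omega = rpm * 2 * math.pi / 60            # rad/s
energy = 0.5 * inertia * omega ** 2       # joules, E = 1/2 * I * omega^2

print(f"stored energy: {energy / 1e6:.1f} MJ ({energy / 3.6e6:.1f} kWh)")
print(f"at a 30 kW average draw: about {energy / 30e3 / 60:.0f} minutes of driving")
```

That works out to roughly 7 MJ, or about 2 kilowatt-hours: only a few minutes of driving on its own, which is why the gyrobuses had to top up their flywheels at frequent charging stops.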
In the last few years, Formula 1 race cars have also been using flywheels, though more to provide a power boost than to save energy. The technology is called KERS (Kinetic Energy Recovery System) and consists of a very compact, very high speed flywheel (spinning at 64,000 rpm) that absorbs energy that would normally be lost as heat during braking. The driver can flick a switch on the steering wheel so the flywheel temporarily engages with the car's drive train, giving a brief speed boost when extra power is needed for acceleration. With such a high-speed flywheel, safety considerations become hugely important; the flywheel is fitted inside a super-sturdy carbon-fiber container to stop it injuring the driver if it explodes. (Some forms of KERS use electric motors, generators, and batteries to store energy instead of flywheels, in a similar way to hybrid cars.)
Just as flywheels—in the form of waterwheels—played an important part in human efforts to harness energy, so they're making a comeback in modern electricity production. One of the difficulties with power plants (and even more so with forms of renewable energy such as wind and solar power) is that they don't necessarily produce electricity constantly, or in a way that precisely matches the rise and fall in demand over the course of a day. A related problem is that it's much easier to make electricity than it is to store it in large quantities. Flywheels offer a solution to this. At times when there is more electricity supply than demand (such as during the night or on the weekend), power plants can feed their excess energy into huge flywheels, which will store it for periods ranging from minutes to hours and release it again at times of peak need. At three plants in New York, Massachusetts, and Pennsylvania, Beacon Power has pioneered using flywheels to provide up to 20 megawatts of power storage to meet temporary peaks in demand. They're also used in places like computer data centers to provide emergency, backup power in case of outages.
Advantages and disadvantages of flywheels
Flywheels are relatively simple technology with lots of plus points compared to rivals such as rechargeable batteries: in terms of initial cost and ongoing maintenance, they work out cheaper, last about 10 times longer (there are still many working flywheels in operation dating from the Industrial Revolution), are environmentally friendly (produce no carbon dioxide emissions and contain no hazardous chemicals that cause pollution), work in almost any climate, and are very quick to get up to speed (unlike batteries, for example, which can take many hours to charge). They're also extremely efficient (maybe 80 percent or more) and take up less space than batteries or other forms of energy storage (like pumped water storage reservoirs).
The biggest disadvantage of flywheels (certainly so far as vehicles are concerned) is the weight they add. A complete Formula 1 KERS flywheel system (including the container, hydraulics, and electronic control systems it needs) adds about 25kg to the car's weight, which is a significant extra load. Another problem (particularly for Formula 1 drivers) is that a large, heavy wheel spinning inside a moving car will tend to act like a gyroscope, resisting changes in its direction and potentially affecting the handling of the vehicle (although there are various solutions, including mounting flywheels on gimbals like a ship's compass). A further difficulty is the huge stresses and strains that flywheels experience when they rotate at extremely high speeds, which can cause them to shatter and explode into fragments. This acts as a limit on how fast flywheels can spin and, consequently, how much energy they can store. While traditional wheels were made from steel and spun around in the open air, modern ones are more likely to use high-performance composites or ceramics and be sealed inside containers, making higher speeds and energies possible without compromising on safety.
455) Mobile telephone
Mobile telephone, also called mobile phone, portable device for connecting to a telecommunications network in order to transmit and receive voice, video, or other data. Mobile phones typically connect to the public switched telephone network (PSTN) through one of two categories: cellular telephone systems or global satellite-based telephony.
Cellular Telephones
Cellular telephones, or simply cell phones, are portable devices that may be used in motor vehicles or by pedestrians. Communicating by radio waves, they permit a significant degree of mobility within a defined serving region that may range in area from a few city blocks to hundreds of square kilometres. The first mobile and portable subscriber units for cellular systems were large and heavy. With significant advances in component technology, though, the weight and size of portable transceivers have been significantly reduced. In this section, the concept of cell phones and the development of cellular systems are discussed.
Cellular communication
All cellular telephone systems exhibit several fundamental characteristics, as summarized in the following:
1. The geographic area served by a cellular system is broken up into smaller geographic areas, or cells. Uniform hexagons most frequently are employed to represent these cells on maps and diagrams; in practice, though, radio waves do not confine themselves to hexagonal areas, so the actual cells have irregular shapes.
2. All communication with a mobile or portable instrument within a given cell is made to a base station that serves the cell.
3. Because of the low transmitting power of battery-operated portable instruments, specific sending and receiving frequencies assigned to a cell may be reused in other cells within the larger geographic area. Thus, the spectral efficiency of a cellular system (that is, the uses to which it can put its portion of the radio spectrum) is increased by a factor equal to the number of times a frequency may be reused within its service area.
4. As a mobile instrument proceeds from one cell to another during the course of a call, a central controller automatically reroutes the call from the old cell to the new cell without a noticeable interruption in the signal reception. This process is known as handoff. The central controller, or mobile telephone switching office (MTSO), thus acts as an intelligent central office switch that keeps track of the movement of the mobile subscriber.
5. As demand for the radio channels within a given cell increases beyond the capacity of that cell (as measured by the number of calls that may be supported simultaneously), the overloaded cell is “split” into smaller cells, each with its own base station and central controller. The radio-frequency allocations of the original cellular system are then rearranged to account for the greater number of smaller cells.
Frequency reuse between discontiguous cells and the splitting of cells as demand increases are the concepts that distinguish cellular systems from other wireless telephone systems. They allow cellular providers to serve large metropolitan areas that may contain hundreds of thousands of customers.
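As a rough sketch of how reuse and cell splitting multiply capacity, consider the Python fragment below. The 666-channel figure is the original AMPS allocation mentioned in the next section; the seven-cell reuse cluster and the cell counts are illustrative assumptions (seven is a common cluster size in textbook cell planning, not a figure from this text).

```python
# Rough model of cellular capacity gained through frequency reuse.
TOTAL_CHANNELS = 666   # paired voice channels (original AMPS allocation)
CLUSTER_SIZE = 7       # cells per reuse cluster -- assumed planning choice

channels_per_cell = TOTAL_CHANNELS // CLUSTER_SIZE   # 95 channels each

def simultaneous_calls(num_cells):
    # Cells far enough apart reuse the same frequencies, so every
    # cell contributes its full share of channels at the same time.
    return num_cells * channels_per_cell

# The same service area, split into ever-smaller cells:
for cells in (7, 49, 343):
    print(f"{cells:3d} cells -> {simultaneous_calls(cells):6,d} calls at once")
```

Without reuse, the whole service area could carry only 666 simultaneous calls no matter how many base stations were built; with reuse, capacity grows in step with the number of cells.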
Development of cellular systems
In the United States, interconnection of mobile transmitters and receivers with the public switched telephone network (PSTN) began in 1946, with the introduction of mobile telephone service (MTS) by the American Telephone & Telegraph Company (AT&T). In the U.S. MTS system, a user who wished to place a call from a mobile phone had to search manually for an unused channel before placing the call. The user then spoke with a mobile operator, who actually dialed the call over the PSTN. The radio connection was simplex—i.e., only one party could speak at a time, the call direction being controlled by a push-to-talk switch in the mobile handset. In 1964 AT&T introduced the improved mobile telephone service (IMTS). This provided full duplex operation, automatic dialing, and automatic channel searching. Initially 11 channels were provided, but in 1969 an additional 12 channels were made available. Since only 11 (or 12) channels were available for all users of the system within a given geographic area (such as the metropolitan area around a large city), the IMTS system faced a high demand for a very limited channel resource. Moreover, each base-station antenna had to be located on a tall structure and had to transmit at high power in order to provide coverage throughout the entire service area. Because of these high power requirements, all subscriber units in the IMTS system were motor-vehicle-based instruments that carried large storage batteries.
During this time a truly cellular system, known as the advanced mobile phone system, or AMPS, was developed primarily by AT&T and Motorola, Inc. AMPS was based on 666 paired voice channels, spaced every 30 kilohertz in the 800-megahertz region. The system employed an analog modulation approach—frequency modulation, or FM—and was designed from the outset to support subscriber units for use both in automobiles and by pedestrians. It was publicly introduced in Chicago in 1983 and was a success from the beginning. At the end of the first year of service, there were a total of 200,000 AMPS subscribers throughout the United States; five years later there were more than 2,000,000. In response to expected service shortages, the American cellular industry proposed several methods for increasing capacity without requiring additional spectrum allocations. One analog FM approach, proposed by Motorola in 1991, was known as narrowband AMPS, or NAMPS. In NAMPS systems each existing 30-kilohertz voice channel was split into three 10-kilohertz channels. Thus, in place of the 832 channels by then available in AMPS systems (the original 666-channel allocation had since been expanded), the NAMPS system offered 2,496 channels. A second approach, developed by a committee of the Telecommunications Industry Association (TIA) in 1988, employed digital modulation and digital voice compression in conjunction with a time-division multiple access (TDMA) method; this also permitted three new voice channels in place of one AMPS channel. Finally, in 1994 there surfaced a third approach, developed originally by Qualcomm, Inc., but also adopted as a standard by the TIA. This third approach used a form of spread spectrum multiple access known as code-division multiple access (CDMA)—a technique that, like the original TIA approach, combined digital voice compression with digital modulation. The CDMA system offered 10 to 20 times the capacity of existing AMPS cellular techniques. All of these improved-capacity cellular systems were eventually deployed in the United States, but, since they were incompatible with one another, they supported rather than replaced the older AMPS standard.
Although AMPS was the first cellular system to be developed, a Japanese system was the first cellular system to be deployed, in 1979. Other systems that preceded AMPS in operation include the Nordic mobile telephone (NMT) system, deployed in 1981 in Denmark, Finland, Norway, and Sweden, and the total access communication system (TACS), deployed in the United Kingdom in 1983. A number of other cellular systems were developed and deployed in many more countries in the following years. All of them were incompatible with one another. In 1988 a group of government-owned public telephone bodies within the European Community announced the digital global system for mobile communications, referred to as GSM, the first such system that would permit any cellular user in one European country to operate in another European country with the same equipment. GSM soon became ubiquitous throughout Europe.
The analog cellular systems of the 1980s are now referred to as “first-generation” (or 1G) systems, and the digital systems that began to appear in the late 1980s and early ’90s are known as the “second generation” (2G). Since the introduction of 2G cell phones, various enhancements have been made in order to provide data services and applications such as Internet browsing, two-way text messaging, still-image transmission, and mobile access by personal computers. One of the most successful applications of this kind is iMode, launched in 1999 in Japan by NTT DoCoMo, the mobile service division of the Nippon Telegraph and Telephone Corporation. Supporting Internet access to selected Web sites, interactive games, information retrieval, and text messaging, iMode became extremely successful; within three years of its introduction, more than 35 million users in Japan had iMode-enabled cell phones.
Beginning in 1985, a study group of the Geneva-based International Telecommunication Union (ITU) began to consider specifications for Future Public Land Mobile Telephone Systems (FPLMTS). These specifications eventually became the basis for a set of “third-generation” (3G) cellular standards, known collectively as IMT-2000. The 3G standards are based loosely on several attributes: the use of CDMA technology; the ability eventually to support three classes of users (vehicle-based, pedestrian, and fixed); and the ability to support voice, data, and multimedia services. The world’s first 3G service began in Japan in October 2001 with a system offered by NTT DoCoMo. Soon 3G service was being offered by a number of different carriers in Japan, South Korea, the United States, and other countries. Several new types of service compatible with the higher data rates of 3G systems have become commercially available, including full-motion video transmission, image transmission, location-aware services (through the use of global positioning system [GPS] technology), and high-rate data transmission.
The increasing demands placed on mobile telephones to handle even more data than 3G could support led to the development of 4G technology. In 2008 the ITU set forward a list of requirements for what it called IMT-Advanced, or 4G; these requirements included data rates of 1 gigabit per second for a stationary user and 100 megabits per second for a moving user. The ITU in 2010 decided that two technologies, LTE-Advanced (Long Term Evolution; LTE) and WirelessMan-Advanced (also called WiMAX), met the requirements. The Swedish telephone company TeliaSonera introduced the first 4G LTE network in Stockholm in 2009.
Airborne cellular systems
In addition to the terrestrial cellular phone systems described above, there also exist several systems that permit the placement of telephone calls to the PSTN by passengers on commercial aircraft. These in-flight telephones, known by the generic name aeronautical public correspondence (APC) systems, are of two types: terrestrial-based, in which telephone calls are placed directly from an aircraft to an en route ground station; and satellite-based, in which telephone calls are relayed via satellite to a ground station. In the United States the North American terrestrial system (NATS) was introduced by GTE Corporation in 1984. Within a decade the system was installed in more than 1,700 aircraft, with ground stations in the United States providing coverage over most of the United States and southern Canada. A second-generation system, GTE Airfone GenStar, employed digital modulation. In Europe the European Telecommunications Standards Institute (ETSI) adopted a terrestrial APC system known as the terrestrial flight telephone system (TFTS) in 1992. This system employs digital modulation methods and operates in the 1,670–1,675- and 1,800–1,805-megahertz bands. In order to cover most of Europe, the ground stations must be spaced every 50 to 700 km (30 to 435 miles).
Satellite-Based Telephone Communication
In order to augment the terrestrial and aircraft-based mobile telephone systems, several satellite-based systems have been put into operation. The goal of these systems is to permit ready connection to the PSTN from anywhere on Earth’s surface, especially in areas not presently covered by cellular telephone service. A form of satellite-based mobile communication has been available for some time in airborne cellular systems that utilize Inmarsat satellites. However, the Inmarsat satellites are geostationary, remaining approximately 35,000 km (22,000 miles) above a single location on Earth’s surface. Because of this high-altitude orbit, Earth-based communication transceivers require high transmitting power, large communication antennas, or both in order to communicate with the satellite. In addition, such a long communication path introduces a noticeable delay, on the order of a quarter-second, in two-way voice conversations. One viable alternative to geostationary satellites would be a larger system of satellites in low Earth orbit (LEO). Orbiting less than 1,600 km (1,000 miles) above Earth, LEO satellites are not geostationary and therefore cannot provide constant coverage of specific areas on Earth. Nevertheless, by allowing radio communications with a mobile instrument to be handed off between satellites, an entire constellation of satellites can assure that no call will be dropped simply because a single satellite has moved out of range.
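The quarter-second figure can be checked with a simple propagation-delay calculation. The sketch below uses the round 35,000-km geostationary altitude given above and the 778-km Iridium altitude given below, assumes a straight up-and-down path, and ignores slant geometry and switching delays, so the real numbers would be somewhat larger.

```python
C_KM_PER_S = 299_792.458   # speed of light in vacuum

def round_trip_ms(altitude_km):
    # Ground -> satellite -> ground, straight up and down.
    return 2 * altitude_km / C_KM_PER_S * 1000

for name, altitude in [("geostationary", 35_000), ("LEO (Iridium)", 778)]:
    print(f"{name} at {altitude:,} km: ~{round_trip_ms(altitude):.0f} ms per hop")
```

The geostationary hop works out to roughly 230 milliseconds, the noticeable quarter-second delay mentioned above, while the LEO hop of about 5 milliseconds is imperceptible in conversation.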
The first LEO system intended for commercial service was the Iridium system, designed by Motorola, Inc., and owned by Iridium LLC, a consortium made up of corporations and governments from around the world. The Iridium concept employed a constellation of 66 satellites orbiting in six planes around Earth. They were launched from May 1997 to May 1998, and commercial service began in November 1998. Each satellite, orbiting at an altitude of 778 kilometres (483 miles), had the capability to transmit 48 spot beams to Earth. Meanwhile, all the satellites were in communication with one another via 23-gigahertz radio “crosslinks,” thus permitting ready handoff between satellites when communicating with a fixed or mobile user on Earth. The crosslinks provided an uninterrupted communication path between the satellite serving a user at any particular instant and the satellite connecting the entire constellation with the gateway ground station to the PSTN. In this way, the 66 satellites provided continuous telephone communication service for subscriber units around the globe. However, the service failed to attract sufficient subscribers, and Iridium LLC went out of business in March 2000. Its assets were acquired by Iridium Satellite LLC, which continued to provide worldwide communication service to the U.S. Department of Defense as well as business and individual users.
Another LEO system, Globalstar, consisted of 48 satellites that were launched about the same time as the Iridium constellation. Globalstar began offering service in October 1999, though it too went into bankruptcy, in February 2002; a reorganized Globalstar LP continued to provide service thereafter.