1178) Dice
Dice, singular die, small objects (polyhedrons) used as implements for gambling and the playing of social games. The most common form of die is the cube, with each side marked with from one to six small dots (spots). The spots are arranged in conventional patterns and placed so that spots on opposite sides always add up to seven: one and six, two and five, three and four. There are, however, many dice with differing arrangements of spots or other face designs, such as poker dice and crown and anchor dice, and many other shapes of dice with 4, 5, 7, 8, 10, 12, 16, and 20 or more sides. Dice are generally used to generate a random outcome (most often a number or a combination of numbers) in which the physical design and quantity of the dice thrown determine the mathematical probabilities.
In most games played with dice, the dice are thrown (rolled, flipped, shot, tossed, or cast), from the hand or from a receptacle called a dice cup, in such a way that they will fall at random. The symbols that face up when the dice come to rest are the relevant ones, and their combination decides, according to the rules of the game being played, whether the thrower (often called the shooter) wins, loses, scores points, continues to throw, or loses possession of the dice to another shooter. Dice have also been used for at least 5,000 years in connection with board games, primarily for the movement of playing pieces.
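Because each face of a fair die is equally likely, the probabilities mentioned above can be worked out by simple counting. The short Python sketch below (an illustration, not part of the source text) enumerates the 36 equally likely outcomes of two fair six-sided dice and prints the probability of each total; 7 turns out to be the most likely total, at 6/36.

from collections import Counter
from itertools import product

# Enumerate all 36 equally likely outcomes of two fair six-sided dice
# and count how often each total occurs.
faces = range(1, 7)
totals = Counter(a + b for a, b in product(faces, repeat=2))

for total in sorted(totals):
    count = totals[total]
    print(f"total {total:2d}: {count:2d}/36  (probability {count / 36:.3f})")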
History
Dice and their forerunners are the oldest gaming implements known to man. Sophocles reported that dice were invented by the legendary Greek Palamedes during the siege of Troy, whereas Herodotus maintained that they were invented by the Lydians in the days of King Atys. Both “inventions” have been discredited by numerous archaeological finds demonstrating that dice were used in many earlier societies.
The precursors of dice were magical devices that primitive people used for the casting of lots to divine the future. The probable immediate forerunners of dice were knucklebones (astragals: the anklebones of sheep, buffalo, or other animals), sometimes with markings on the four faces. Such objects are still used in some parts of the world.
In later Greek and Roman times, most dice were made of bone and ivory; others were of bronze, agate, rock crystal, onyx, jet, alabaster, marble, amber, porcelain, and other materials. Cubical dice with markings practically equivalent to those of modern dice have been found in Chinese excavations from 600 BCE and in Egyptian tombs dating from 2000 BCE. The first written records of dice are found in the ancient Sanskrit epic the Mahabharata, composed in India more than 2,000 years ago. Pyramidal dice (with four sides) are as old as cubical ones; such dice were found with the so-called Royal Game of Ur, one of the oldest complete board games ever discovered, dating back to Sumer in the 3rd millennium BCE. Another variation of the die is the teetotum (a type of spinning top).
It was not until the 16th century that dice games were subjected to mathematical analysis—by Italians Girolamo Cardano and Galileo, among others—and the concepts of randomness and probability were conceived. Until then the prevalent attitude had been that dice and similar objects fell the way they did because of the indirect action of gods or supernatural forces.
Manufacture
Almost all modern dice are made of a cellulose or other plastic material. There are two kinds: perfect, or casino, dice with sharp edges and corners, commonly made by hand and true to a tolerance of 0.0001 inch (0.00025 cm) and used mostly in gambling casinos to play craps or other gambling games, and round-cornered, or imperfect, dice, which are machine-made and are generally used to play social and board games.
Cheating with dice
Perfect dice are also known as fair dice, levels, or squares, whereas dice that have been tampered with, or expressly made for cheating, are known as crooked or gaffed dice. Such dice have been found in the tombs of ancient Egypt and the Orient, in prehistoric graves of North and South America, and in Viking graves. There are many forms of crooked dice. Any die that is not a perfect cube will not act according to correct mathematical odds and is called a shape, a brick, or a flat. For example, a cube that has been shaved down on one or more sides so that it is slightly brick-shaped will tend to settle down most often on its larger surfaces, whereas a cube with bevels, on which one or more sides have been trimmed so that they are slightly convex, will tend to roll off of its convex sides. Shapes are the most common of all crooked dice. Loaded dice (called tappers, missouts, passers, floppers, cappers, or spot loaders, depending on how and where extra weight has been applied) may prove to be perfect cubes when measured with calipers, but extra weight just below the surface on some sides will make the opposite sides come up more often than they should. The above forms of dice are classed as percentage dice: they will not always fall with the intended side up but will do so often enough in the long run for the cheaters to win the majority of their bets.
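To see why even a slight bias pays off over many throws, here is a minimal Python simulation. It is illustrative only: the weights are invented for demonstration and are not taken from any real gaffed die. It compares the long-run face frequencies of a fair die with those of a hypothetical "percentage" die whose six-face is slightly favoured.

import random
from collections import Counter

# Hypothetical weights: face 1 slightly lighter, face 6 slightly heavier.
FAIR_WEIGHTS = [1, 1, 1, 1, 1, 1]
LOADED_WEIGHTS = [0.8, 1, 1, 1, 1, 1.2]

def face_frequencies(weights, rolls=100_000, seed=1):
    rng = random.Random(seed)
    counts = Counter(rng.choices(range(1, 7), weights=weights, k=rolls))
    return {face: counts[face] / rolls for face in range(1, 7)}

print("fair:  ", {f: round(p, 3) for f, p in face_frequencies(FAIR_WEIGHTS).items()})
print("loaded:", {f: round(p, 3) for f, p in face_frequencies(LOADED_WEIGHTS).items()})

Over 100,000 rolls the loaded die's six comes up roughly 20% of the time instead of the fair 16.7%, a small shift, but the kind of long-run edge that percentage dice rely on.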
A die with one or more faces each duplicated on its opposite side and certain numbers omitted will produce some numbers in disproportionate frequency and never produce certain others; for example, two dice marked respectively with duplicates of 3-4-5 and 1-5-6 can never produce combinations totaling 2, 3, 7, or 12, which are the only combinations with which one can lose in the game of craps. Such dice, called busters or tops and bottoms, are used as a rule only by accomplished dice cheats, who introduce them into the game by sleight of hand (“switching”). Since it is impossible to see more than three sides of a cube at any one time, tops and bottoms are unlikely to be detected by the inexperienced gambler.
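The claim about tops and bottoms can be checked by brute force. The small Python sketch below (illustrative, not from the source) enumerates every pairing of the two mis-spotted dice described above and lists which totals can and cannot occur.

from itertools import product

# Tops-and-bottoms dice: each value is duplicated on the opposite face,
# so only three distinct values appear on each die.
die_a = (3, 4, 5, 3, 4, 5)
die_b = (1, 5, 6, 1, 5, 6)

possible = sorted({a + b for a, b in product(die_a, die_b)})
impossible = sorted(set(range(2, 13)) - set(possible))

print("possible totals:  ", possible)    # [4, 5, 6, 8, 9, 10, 11]
print("impossible totals:", impossible)  # [2, 3, 7, 12] -- the losing craps totals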
Yet another form of cheating with dice produces controlled shots, in which one or more fair dice are spun, rolled, or thrown so that a certain side or sides will come up, or not come up, depending on the desired effect. Known by such colourful names as the whip shot, the blanket roll, the slide shot, the twist shot, and the Greek shot, this form of cheating requires considerable manual dexterity and practice. Fear of such ability led casinos to install tables with slanted end walls and to insist that dice be thrown so as to rebound from them.
1178) Paraffin wax
Paraffin wax is a white or colorless, soft, solid wax made from saturated hydrocarbons. It is often used in skin-softening salon and spa treatments on the hands, cuticles, and feet because it is colorless, tasteless, and odorless. It can also be used to provide pain relief to sore joints and muscles.
Paraffin wax, colourless or white, somewhat translucent, hard wax consisting of a mixture of solid straight-chain hydrocarbons ranging in melting point from about 48° to 66° C (120° to 150° F). Paraffin wax is obtained from petroleum by dewaxing light lubricating oil stocks. It is used in candles, wax paper, polishes, cosmetics, and electrical insulators. It assists in extracting perfumes from flowers, forms a base for medical ointments, and supplies a waterproof coating for wood. In wood and paper matches, it helps to ignite the matchstick by supplying an easily vaporized hydrocarbon fuel.
Paraffin wax was first produced commercially in 1867, less than 10 years after the first petroleum well was drilled. Paraffin wax precipitates readily from petroleum on chilling. Technical progress has served only to make the separations and filtration more efficient and economical. Purification methods consist of chemical treatment, decolorization by adsorbents, and fractionation of the separated waxes into grades by distillation, recrystallization, or both. Crude oils differ widely in wax content.
Synthetic paraffin wax was introduced commercially after World War II as one of the products obtained in the Fischer–Tropsch reaction, which converts coal gas to hydrocarbons. Snow-white and harder than petroleum paraffin wax, the synthetic product has a unique character and high purity that make it suitable as a replacement for certain vegetable waxes and as a modifier for petroleum waxes and for some plastics, such as polyethylene. Synthetic paraffin waxes may be oxidized to yield pale-yellow, hard waxes of high molecular weight that can be saponified with aqueous solutions of organic or inorganic alkalies, such as borax, sodium hydroxide, triethanolamine, and morpholine. These wax dispersions serve as heavy-duty floor wax, as waterproofing for textiles and paper, as tanning agents for leather, as metal-drawing lubricants, as rust preventives, and for masonry and concrete treatment.
1179) Petroleum jelly
Petroleum jelly, petrolatum, white petrolatum, soft paraffin, or multi-hydrocarbon, CAS number 8009-03-8, is a semi-solid mixture of hydrocarbons (with carbon numbers mainly higher than 25), originally promoted as a topical ointment for its healing properties. Vaseline, a well-known American brand of petroleum jelly, has been sold since 1870.
After petroleum jelly became a medicine-chest staple, consumers began to use it for cosmetic purposes and for many ailments, including toenail fungus, genital rashes (those not caused by sexually transmitted diseases), nosebleeds, diaper rash, and common colds. Its folkloric medicinal value as a "cure-all" has since been limited by better scientific understanding of appropriate and inappropriate uses. It is recognized by the U.S. Food and Drug Administration (FDA) as an approved over-the-counter (OTC) skin protectant and remains widely used in cosmetic skin care, where it is often loosely referred to as mineral oil.
History
Around 1273 Marco Polo described the export of oil from Baku, carried by hundreds of camels and by ship, for use in burning and as an ointment for treatments.
Native Americans discovered the use of petroleum jelly for protecting and healing skin. Sophisticated oil pits had been built as early as 1415–1450 in Western Pennsylvania. In 1859, workers operating the United States of America's first oil rigs noticed a paraffin-like material forming on rigs in the course of investigating malfunctions. Believing the substance hastened healing, the workers used the jelly on cuts and burns.
Robert Chesebrough, a young chemist whose previous work of distilling fuel from the oil of sperm whales had been rendered obsolete by petroleum, went to Titusville, Pennsylvania, US, to see what new materials had commercial potential. Chesebrough took the unrefined black "rod wax", as the drillers called it, back to his laboratory to refine it and explore potential uses. He discovered that by distilling the lighter, thinner oil products from the rod wax, he could create a light-colored gel. Chesebrough patented the process of making petroleum jelly by U.S. Patent 127,568 in 1872. The process involved vacuum distillation of the crude material followed by filtration of the still residue through bone char. Chesebrough traveled around New York demonstrating the product to encourage sales by burning his skin with acid or an open flame, then spreading the ointment on his injuries and showing his past injuries healed, he claimed, by his miracle product. He opened his first factory in 1870 in Brooklyn using the name Vaseline.
Physical properties
Petroleum jelly is a mixture of hydrocarbons, with a melting point that depends on the exact proportions. The melting point is typically between 40 and 70 °C (105 and 160 °F). It is flammable only when heated to liquid; then the fumes will light, not the liquid itself, so a wick material like leaves, bark, or small twigs is needed to ignite petroleum jelly. It is colorless (or of a pale yellow color when not highly distilled), translucent, and devoid of taste and smell when pure. It does not oxidize on exposure to the air and is not readily acted on by chemical reagents. It is insoluble in water. It is soluble in dichloromethane, chloroform, benzene, diethyl ether, carbon disulfide and turpentine. It acts as a plasticizer on polypropylene (PP), but is compatible with most other plastics. It is a semi-solid, in that it holds its shape indefinitely like a solid, but it can be forced to take the shape of its container without breaking apart, like a liquid, though it does not flow on its own.
Depending on the specific application of petroleum jelly, it may be USP, B.P., or Ph. Eur. grade. This pertains to the processing and handling of the petroleum jelly so it is suitable for medicinal and personal-care applications.
Uses
Most uses of petroleum jelly exploit its lubricating and coating properties, including use on dry lips and dry skin. Below are some examples of the uses of petroleum jelly.
Medical treatment
Vaseline brand First Aid Petroleum Jelly, or carbolated petroleum jelly containing phenol to give the jelly additional antibacterial effect, has been discontinued. During World War II, a variety of petroleum jelly called red veterinary petrolatum, or Red Vet Pet for short, was often included in life raft survival kits. Acting as a sunscreen, it provides protection against ultraviolet rays.
The American Academy of Dermatology recommends keeping skin injuries moist with petroleum jelly to reduce scarring. A verified medicinal use is to protect and prevent moisture loss of the skin of a patient in the initial post-operative period following laser skin resurfacing.
A single case report, published in 1994, indicated that petroleum jelly should not be applied to the inside of the nose because of the risk of lipid pneumonia; this complication has only ever been reported in that one patient. However, petroleum jelly is used extensively by otolaryngologists—ear, nose, and throat surgeons—for nasal moisture and epistaxis treatment, and to combat nasal crusting. Large studies have found petroleum jelly applied to the nose for short durations to have no significant side effects.
Historically, it was also consumed for internal use and even promoted as "Vaseline confection".
Skin and hair care
Most petroleum jelly today is used as an ingredient in skin lotions and cosmetics, providing various types of skin care and protection by minimizing friction or reducing moisture loss, or by functioning as a grooming aid (e.g., pomade). It's also used for treating dry scalp and dandruff.
Preventing moisture loss
By reducing moisture loss, petroleum jelly can prevent chapped hands and lips, and soften nail cuticles.
This property is exploited to provide heat insulation: petroleum jelly can be used to keep swimmers warm in water when training, or during channel crossings or long ocean swims. It can prevent chilling of the face due to evaporation of skin moisture during cold weather outdoor sports.
Hair grooming
In the first part of the twentieth century, petroleum jelly, either pure or as an ingredient, was also popular as a hair pomade. When used in a 50/50 mixture with pure beeswax, it makes an effective moustache wax.
Skin lubrication
Petroleum jelly can be used to reduce the friction between skin and clothing during various sport activities, for example to prevent chafing of the seat region of cyclists, or the nipples of long distance runners wearing loose T-shirts, and is commonly used in the groin area of wrestlers and footballers.
Petroleum jelly is commonly used as a personal lubricant, because it does not dry out like water-based lubricants, and has a distinctive "feel", different from that of K-Y and related methylcellulose products. However, it is not recommended for use with condoms during sexual activity, because it swells latex and thus increases the chance of rupture.
Product care and protection
Coating
Petroleum jelly can be used to coat corrosion-prone items such as metallic trinkets, non-stainless steel blades, and gun barrels prior to storage, as it serves as an excellent and inexpensive water repellent. It is used as an environmentally friendly underwater antifouling coating for motor boats and sailing yachts. It was recommended in the Porsche owner's manual as a preservative for light alloy anodized Fuchs wheels, to protect them against corrosion from road salts and brake dust: “Every three months (after regular cleaning) the wheels should be coated with petroleum jelly.”
Finishing
It can be used to finish and protect wood, much like a mineral oil finish. It is used to condition and protect smooth leather products like bicycle saddles, boots, motorcycle clothing, and used to put a shine on patent leather shoes (when applied in a thin coat and then gently buffed off).
Lubrication
Petroleum jelly can be used to lubricate zippers and slide rules. It was also recommended by Porsche in maintenance training documentation for lubrication (after cleaning) of "Weatherstrips on Doors, Hood, Tailgate, Sun Roof". The publication states, "…before applying a new coat of lubricant…" "Only acid-free lubricants may be used, for example: glycerine, Vaseline, tire mounting paste, etc. These lubricants should be rubbed in, and excessive lubricant wiped off with a soft cloth." It is used in bullet lubricant compounds. Petrolatum is also used as a light lubricating grease as well as an anti-seize assembling grease.
Industrial production processes
Petroleum jelly is a useful material when incorporated into candle wax formulas. The petroleum jelly softens the overall blend, allows the candle to incorporate additional fragrance oil, and facilitates adhesion to the sidewall of the glass. Petroleum jelly is used to moisten nondrying modelling clay such as plasticine, as part of a mix of hydrocarbons including those with greater (paraffin wax) and lesser (mineral oil) molecular weights. It is used as a tack reducer additive to printing inks to reduce paper lint "picking" from uncalendered paper stocks. It can be used as a release agent for plaster molds and castings. It is used in the leather industry as a waterproofing cream.
Other
Explosives
Petroleum jelly is mixed with a high proportion of strong inorganic chlorates, in which it acts as both a plasticizer and a fuel source. An example is Cheddite C, which consists of KClO3 and petroleum jelly in a 9:1 ratio. This mixture is unable to detonate without the use of a blasting cap. Petroleum jelly is also used as a stabiliser in the manufacture of the propellant Cordite.
Mechanical, barrier functions
Petroleum jelly can be used to fill copper or fibre-optic cables that use plastic insulation, to prevent the ingress of water (see icky-pick).
Petroleum jelly can be used to coat the inner walls of terrariums to prevent animals crawling out and escaping.
A stripe of petroleum jelly can be used to prevent the spread of a liquid. For example, it can be applied close to the hairline when using a home hair dye kit to prevent the hair dye from irritating or staining the skin. It is also used to prevent diaper rash.
Surface cleansing
Petroleum jelly is used to gently clean a variety of surfaces, ranging from makeup removal from faces to tar stain removal from leather.
Pet care
Petroleum jelly is used to moisturize the paws of dogs. It is a common ingredient in hairball remedies for domestic cats.
Petroleum jelly is slightly soluble in alcohol.
Health
In 2015, German consumer watchdog Stiftung Warentest analyzed cosmetics containing mineral oils. After developing a new detection method, they found high concentrations of Mineral Oil Aromatic Hydrocarbons (MOAH) and even polyaromatics in products containing mineral oils. Vaseline products contained the most MOAH of all tested cosmetics (up to 9%). The European Food Safety Authority sees MOAH and polyaromatics as possibly carcinogenic. Based on the results, Stiftung Warentest warns not to use Vaseline or any product that is based on mineral oils for lip care.
A later study published in 2017 found at most 1% MOAH in petroleum jelly, and less than 1% in petroleum jelly-based beauty products.
Summary
Petroleum jelly, also called Petrolatum, translucent, yellowish to amber or white, unctuous substance having almost no odour or taste, derived from petroleum and used principally in medicine and pharmacy as a protective dressing and as a substitute for fats in ointments and cosmetics. It is also used in many types of polishes and in lubricating greases, rust preventives, and modeling clay.
Petrolatum is obtained by dewaxing heavy lubricating-oil stocks. It has a melting-point range from 38° to 54° C (100° to 130° F). Chemically, petrolatum is a mixture of hydrocarbons, chiefly of the paraffin series.
1180) Déjà vu
Déjà vu is the feeling that one has lived through the present situation before. Although some interpret déjà vu in a paranormal context, mainstream scientific approaches reject the explanation of déjà vu as "precognition" or "prophecy". It is an anomaly of memory whereby, despite the strong sense of recollection, the time, place, and practical context of the "previous" experience are uncertain or believed to be impossible. Two types of déjà vu are recognized: the pathological déjà vu usually associated with epilepsy or that which, when unusually prolonged or frequent, or associated with other symptoms such as hallucinations, may be an indicator of neurological or psychiatric illness, and the non-pathological type characteristic of healthy people, about two-thirds of whom have had déjà vu experiences. People who travel often or frequently watch films are more likely to experience déjà vu than others. Furthermore, people also tend to experience déjà vu more in fragile conditions or under high pressure, and research shows that the experience of déjà vu also decreases with age.
Medical disorders
Déjà vu is associated with temporal lobe epilepsy. This experience is a neurological anomaly related to epileptic electrical discharge in the brain, creating a strong sensation that an event or experience currently being experienced has already been experienced in the past.
Migraines with aura are also associated with déjà vu.
Early researchers tried to establish a link between déjà vu and mental disorders such as anxiety, dissociative identity disorder and schizophrenia but failed to find correlations of any diagnostic value. No special association has been found between déjà vu and schizophrenia. A 2008 study found that déjà vu experiences are unlikely to be pathological dissociative experiences.
Some research has looked into genetics when considering déjà vu. Although there is not currently a gene associated with déjà vu, the LGI1 gene on chromosome 10 is being studied for a possible link. Certain forms of the gene are associated with a mild form of epilepsy, and, though by no means a certainty, déjà vu, along with jamais vu, occurs often enough during seizures (such as simple partial seizures) that researchers have reason to suspect a link.
Pharmacology
Certain drugs increase the chances of déjà vu occurring in the user, resulting in a strong sensation that an event or experience currently being experienced has already been experienced in the past. Some pharmaceutical drugs, when taken together, have also been implicated in the cause of déjà vu. Taiminen and Jääskeläinen (2001) reported the case of an otherwise healthy male who started experiencing intense and recurrent sensations of déjà vu upon taking the drugs amantadine and phenylpropanolamine together to relieve flu symptoms. He found the experience so interesting that he completed the full course of his treatment and reported it to the psychologists to write up as a case study. Because of the dopaminergic action of the drugs and previous findings from electrode stimulation of the brain (e.g. Bancaud, Brunet-Bourgin, Chauvel, & Halgren, 1994), Taiminen and Jääskeläinen speculate that déjà vu occurs as a result of hyperdopaminergic action in the medial temporal areas of the brain.
Explanations
Split perception explanation
Déjà vu may happen if a person experienced the current sensory experience twice successively. The first input experience is brief, degraded, occluded, or distracted. Immediately following that, the second perception might be familiar because the person naturally related it to the first input. One possibility behind this mechanism is that the first input experience involves shallow processing, which means that only some superficial physical attributes are extracted from the stimulus.
Memory-based explanation
Implicit memory
Research has associated déjà vu experiences with good memory functions. Recognition memory enables people to realize the event or activity that they are experiencing has happened before. When people experience déjà vu, they may have their recognition memory triggered by certain situations which they have never encountered.
The similarity between a déjà-vu-eliciting stimulus and an existing, or non-existing but different, memory trace may lead to the sensation that an event or experience currently being experienced has already been experienced in the past. Thus, encountering something that evokes the implicit associations of an experience or sensation that "cannot be remembered" may lead to déjà vu. In an effort to reproduce the sensation experimentally, Banister and Zangwill (1941) used hypnosis to give participants posthypnotic amnesia for material they had already seen. When this was later re-encountered, the restricted activation caused thereafter by the posthypnotic amnesia resulted in 3 of the 10 participants reporting what the authors termed "paramnesias".
Two approaches are used by researchers to study feelings of previous experience, with the process of recollection and familiarity. Recollection-based recognition refers to an ostensible realization that the current situation has occurred before. Familiarity-based recognition refers to the feeling of familiarity with the current situation without being able to identify any specific memory or previous event that could be associated with the sensation.
In 2010, O’Connor, Moulin, and Conway developed another laboratory analog of déjà vu based on two contrast groups of carefully selected participants: a group under a posthypnotic amnesia (PHA) condition and a group under a posthypnotic familiarity (PHF) condition. The idea for the PHA group was based on the work done by Banister and Zangwill (1941), and the PHF group was built on the research results of O’Connor, Moulin, and Conway (2007). The same puzzle game, "Railroad Rush Hour", was used for both groups: a game in which one aims to slide a red car through the exit by rearranging and shifting the other blocking trucks and cars on the road. Each participant in the PHA group completed the puzzle and then received a posthypnotic amnesia suggestion during hypnosis to forget the game. Each participant in the PHF group was not given the puzzle but instead received a posthypnotic familiarity suggestion during hypnosis that they would find the game familiar. After the hypnosis, all participants were asked to play the puzzle (a second time for the PHA group) and to report how playing it felt.
In the PHA condition, a participant was scored as passing the suggestion if they reported no memory of completing the puzzle game during hypnosis. In the PHF condition, a participant was scored as passing if they reported that the puzzle game felt familiar. In both the PHA and PHF conditions, five of the six participants passed the suggestion and one did not (83.33% of each group). More participants in the PHF group felt a strong sense of familiarity, making comments such as "I think I have done this several years ago." Furthermore, more participants in the PHF group experienced a strong déjà vu, for example, "I think I have done the exact puzzle before." Three of the six participants in the PHA group felt a sense of déjà vu, and none of them experienced a strong sense of it. These figures are consistent with Banister and Zangwill's findings. Some participants in the PHA group related the familiarity they felt when completing the puzzle to an exact event that had happened before, which is more likely to be a phenomenon of source amnesia. Other participants began to realize that they might have completed the puzzle game during hypnosis, which is more akin to the phenomenon of breaching. In contrast, participants in the PHF group reported that they felt confused by the strong familiarity of the puzzle, with the feeling of having played it just sliding across their minds. Overall, the experiences of participants in the PHF group are more likely to resemble real-life déjà vu, while the experiences of participants in the PHA group are unlikely to be real déjà vu.
A 2012 study in the journal Consciousness and Cognition, which used virtual reality technology to study reported déjà vu experiences, supported this idea. This virtual reality investigation suggested that similarity between a new scene's spatial layout and the layout of a previously experienced scene in memory (but which fails to be recalled) may contribute to the déjà vu experience. When the previously experienced scene fails to come to mind in response to viewing the new scene, that previously experienced scene in memory can still exert an effect—that effect may be a feeling of familiarity with the new scene that is subjectively experienced as a feeling that an event or experience currently being experienced has already been experienced in the past, or of having been there before despite knowing otherwise.
Cryptomnesia
Another possible explanation for the phenomenon of déjà vu is the occurrence of "cryptomnesia", in which information that was learned and then forgotten is nevertheless stored in the brain, and a similar occurrence invokes that stored knowledge, producing a feeling of familiarity experienced as déjà vu. Some experts suggest that memory is a process of reconstruction, rather than a recollection of fixed, established events. This reconstruction comes from stored components, involving elaborations, distortions, and omissions. Each successive recall of an event is merely a recall of the last reconstruction. The proposed sense of recognition (déjà vu) involves achieving a good "match" between the present experience and the stored data. This reconstruction, however, may now differ so much from the original event that it is as though it had never been experienced before, even though it seems similar.
Dual neurological processing
In 1964, Robert Efron of Boston's Veterans Hospital proposed that déjà vu is caused by dual neurological processing caused by delayed signals. Efron found that the brain's sorting of incoming signals is done in the temporal lobe of the brain's left hemisphere. However, signals enter the temporal lobe twice before processing, once from each hemisphere of the brain, normally with a slight delay of milliseconds between them. Efron proposed that if the two signals were occasionally not synchronized properly, then they would be processed as two separate experiences, with the second seeming to be a re-living of the first.
Dream-based explanation
Dreams can also be used to explain the experience of déjà vu, and they are related in three different aspects. Firstly, some déjà vu experiences duplicate situations from dreams rather than from waking life, according to the survey done by Brown (2004): 20 percent of the respondents reported that their déjà vu experiences came from dreams, and 40 percent reported that they came from both reality and dreams. Secondly, people may experience déjà vu because elements of their remembered dreams reappear in waking life. Research done by Zuger (1966) supported this idea by investigating the relationship between remembered dreams and déjà vu experiences and suggested that there is a strong correlation. Thirdly, people may experience déjà vu during a dream state, which links déjà vu with dream frequency.
Summary
What is déjà vu?
The term déjà vu is French and means, literally, "already seen." Those who have experienced the feeling describe it as an overwhelming sense of familiarity with something that shouldn't be familiar at all. Say, for example, you are traveling to England for the first time. You are touring a cathedral, and suddenly it seems as if you have been in that very spot before. Or maybe you are having dinner with a group of friends, discussing some current political topic, and you have the feeling that you've already experienced this very thing -- same friends, same dinner, same topic.
The phenomenon is rather complex, and there are many different theories as to why déjà vu happens. Swiss scholar Arthur Funkhouser suggests that there are several "déjà experiences" and asserts that in order to better study the phenomenon, the nuances between the experiences need to be noted. In the examples mentioned above, Funkhouser would describe the first incidence as déjà visité ("already visited") and the second as déjà vécu ("already experienced or lived through").
As much as 70 percent of the population reports having experienced some form of déjà vu. A higher number of incidents occurs in people 15 to 25 years old than in any other age group.
Déjà vu has been firmly associated with temporal-lobe epilepsy. Reportedly, déjà vu can occur just prior to a temporal-lobe seizure. People suffering a seizure of this kind can experience déjà vu during the actual seizure activity or in the moments between convulsions.
Since déjà vu occurs in individuals with and without a medical condition, there is much speculation as to how and why this phenomenon happens. Several psychoanalysts attribute déjà vu to simple fantasy or wish fulfillment, while some psychiatrists ascribe it to a mismatching in the brain that causes the brain to mistake the present for the past. Many parapsychologists believe it is related to a past-life experience. Obviously, there is more investigation to be done.
1181) Invention
Invention, the act of bringing ideas or objects together in a novel way to create something that did not exist before.
Building models of what might be
Ever since the first prehistoric stone tools, humans have lived in a world shaped by invention. Indeed, the brain appears to be a natural inventor. As part of the act of perception, humans assemble, arrange, and manipulate incoming sensory information so as to build a dynamic, constantly updated model of the outside world. The survival value of such a model lies in the fact that it functions as a template against which to match new experiences, so as to rapidly identify anything anomalous that might be life-threatening. Such a model would also make it possible to predict danger. The predictive act would involve the construction of hypothetical models of the way the world might be at some future point. Such models could include elements that might, for whatever reason, be assembled into novel submodels (inventive ideas).
One of the earliest and most literal examples of this model-building paradigm in action was the ancient Mesopotamian invention of writing. As early as 8000 BCE tiny geometric clay models, used to represent sheep and grain, were kept in clay envelopes, to be used as inventory tallies or else to represent goods during barter. Over time, the tokens were pressed onto the exterior of the wet envelope, which at some point was flattened into a tablet. By about 3100 BCE the impressions had become abstract designs marked on the tablet with a cut reed stalk. These pictograms, known today as cuneiform, were the first writing. And they changed the world.
Inventions almost always cause change. Paleolithic stone weapons made hunting possible and thereby triggered the emergence of permanent top-down command structures. The printing press, introduced by Johannes Gutenberg in the 15th century, once and for all curtailed the traditional authority of elders. The typewriter, brought onto the market by Christopher Latham Sholes in the 1870s, was instrumental in freeing women from housework and changing their social status for good (and also increasing the divorce rate).
What inventors are
Inventors are often extremely observant. In the 1940s Swiss engineer George de Mestral saw tiny hooks on the burrs clinging to his hunting jacket and invented the hook-and-loop fastener system known as Velcro.
Invention can be serendipitous. In the late 1800s a German medical scientist, Paul Ehrlich, spilled some new dye into a Petri dish containing bacilli, saw that the dye selectively stained and killed some of them, and invented chemotherapy. In the mid-1800s an American businessman, Charles Goodyear, dropped a rubber mixture containing sulfur on his hot stove and invented vulcanization.
Inventors do it for money. Austrian chemist Auer von Welsbach, in developing the gas mantle in the 1880s, provided 30 extra years of profitability to the shareholders of gaslight companies (which at the time were threatened by the new electric light).
Inventions are often unintended. In the early 1890s Edward Acheson, an American entrepreneur in the field of electric lighting, was seeking to invent artificial diamonds when an electrified mix of coke and clay produced the ultrahard abrasive Carborundum. In an attempt to develop artificial quinine in the mid-1800s, British chemist William Perkin’s investigation of coal tar instead created the first artificial dye, mauveine (aniline purple)—which later fell into Ehrlich’s Petri dish.
Inventors solve puzzles. In the course of investigating why suction pumps would lift water only about 9 metres (30 feet), Evangelista Torricelli identified air pressure and invented the barometer.
Inventors are dogged. The American inventor Thomas Edison, who tested thousands of materials before he chose bamboo to make the carbon filament for his incandescent lightbulb, described his work as "one percent inspiration and 99 percent perspiration.” At his laboratory in Menlo Park, New Jersey, Edison’s approach was to identify a potential gap in the market and fill it with an invention. His workers were told, “There’s a way to do it better. Find it.”
Serendipity and inspiration
The key to inventive success often requires being in the right place at the right time. Christopher Latham Sholes and Carlos Glidden took their invention to arms manufacturer Remington just when that company’s production lines were running down after the end of the American Civil War. A quick retool turned Remington into the world’s first typewriter manufacturer.
An invention developed for one purpose will sometimes find use in entirely different circumstances. In medieval Afghanistan somebody invented a leather loop to hang on the side of a camel for use as a step when loading the animal. By 1066 the Normans had put the loop on each side of a horse and invented the stirrup. With their feet thus firmly anchored, at the Battle of Hastings that year Norman knights hit opposing English foot soldiers with their lances and the full weight of the horse without being unseated by the shock of the encounter. The Normans won the battle and took over England (and made English the French-Saxon mix it is today).
One invention can inspire another. Gaslight distribution pipes gave Edison the idea for his electricity network. Perforated cards used to control the Jacquard loom led Herman Hollerith to invent punch cards for tabulator use in the 1890 U.S. census.
The quickening pace of invention
Above all, invention appears primarily to involve a “1 + 1 = 3” process similar to the brain’s model-building activity, in which concepts or techniques are brought together for the first time and the outcome is more than the sum of the parts (e.g., spray + gasoline = carburetor).
The more often ideas come together, the more frequently invention occurs. The rate of invention increased sharply, each time, when the exchange of ideas became easier after the invention of the printing press, telecommunications, the computer, and above all the Internet. Today new fields such as data mining and nanotechnology offer would-be inventors (or semi-intelligent software programs) massive amounts of “1 + 1 = 3” opportunities. As a result, the rate of innovation seems poised to increase dramatically in the coming decades.
It is going to become harder than ever to keep up with the secondary results of invention as the general public gains access to information and technology denied them for millennia and as billions of brains, each with its own natural inventive capabilities, innovate faster than social institutions can adapt. In some cases, as occurred during the global financial crisis of 2007–08, institutions will face severe challenges from the introduction of technologies for which their old-fashioned infrastructures will be ill-prepared. It may be that the only safe way to deal with the potentially disruptive effects of an avalanche of invention, so as to develop the new social processes required to manage a permanent state of change, will be to do what the brain does: invent a comprehensive virtual world in which one can safely test innovative ideas before applying them.
1182) Mensa International
Mensa is the largest and oldest high IQ society in the world. It is a non-profit organization open to people who score at the 98th percentile or higher on a standardised, supervised IQ or other approved intelligence test. Mensa formally comprises national groups and the umbrella organisation Mensa International, with a registered office in Caythorpe, Lincolnshire, England, which is separate from the British Mensa office in Wolverhampton. The word mensa is Latin for 'table', as is symbolised in the organisation's logo, and was chosen to demonstrate the round-table nature of the organisation; the coming together of equals.
History
Roland Berrill, an Australian barrister, and Lancelot Ware, a British scientist and lawyer, founded Mensa at Lincoln College in Oxford, England, in 1946, with the intention of forming a society for highly intelligent people, the only qualification for membership being a high IQ.
The society was ostensibly to be non-political in its aims, and free from all other social distinctions, such as race and religion. However, Berrill and Ware were both disappointed with the resulting society. Berrill had intended Mensa as "an aristocracy of the intellect" and was unhappy that the majority of members came from working or lower-class homes, while Ware said: "I do get disappointed that so many members spend so much time solving puzzles."
American Mensa was the second major branch of Mensa. Its success has been linked to the efforts of early and longstanding organiser Margot Seitelman.
Membership requirement
Mensa's requirement for membership is a score at or above the 98th percentile on certain standardised IQ or other approved intelligence tests, such as the Stanford–Binet Intelligence Scales. The minimum accepted score on the Stanford–Binet is 132, while for the Cattell it is 148. Most IQ tests are designed to yield a mean score of 100 with a standard deviation of 15; the 98th-percentile score under these conditions is 131, assuming a normal distribution.
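As a quick check of the percentile arithmetic above, the cutoff can be computed with Python's standard library, assuming the normal-distribution model stated in the text (mean 100, standard deviation 15).

from statistics import NormalDist

# Score at the 98th percentile of a normal IQ scale with mean 100 and SD 15.
cutoff = NormalDist(mu=100, sigma=15).inv_cdf(0.98)
print(round(cutoff, 1))  # ~130.8, i.e. 131 when rounded

The corresponding cutoff on a scale with a larger standard deviation is higher, which is why tests scored on different scales quote different minimum scores for the same 98th-percentile requirement.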
Most national groups test using well-established IQ test batteries, but American Mensa has developed its own application exam. This exam is proctored by American Mensa and does not provide a score comparable to scores on other tests; it serves only to qualify a person for membership. In some national groups, a person may take a Mensa-offered test only once, although one may later submit an application with results from a different qualifying test. The Mensa test is also available in some developing countries such as India and Pakistan, and societies in developing countries have been growing at a rapid pace.
Organizational structure
Mensa International consists of around 134,000 members in 100 countries, organised into 54 national groups. The national groups issue periodicals, such as Mensa Bulletin, the monthly publication of American Mensa, and Mensa Magazine, the monthly publication of British Mensa. Individuals who live in a country with a national group join the national group, while those living in countries without a recognised chapter may join Mensa International directly.
The largest national groups are:
* American Mensa, with more than 57,000 members,
* British Mensa, with over 21,000 members,
* Mensa Germany, with about 15,000 members.
Larger national groups are further subdivided into local groups. For example, American Mensa has 134 local groups, with the largest having over 2,000 members and the smallest having fewer than 100.
Members may form Special Interest Groups (SIGs) at international, national, and local levels; these SIGs represent a wide variety of interests, ranging from motorcycle clubs to entrepreneurial co-operations. Some SIGs are associated with various geographic groups, whereas others act independently of official hierarchy. There are also electronic SIGs (eSIGs), which operate primarily as email lists, where members may or may not meet each other in person.
The Mensa Foundation, a separate charitable U.S. corporation, edits and publishes its own Mensa Research Journal, in which both Mensans and non-Mensans are published on various topics surrounding the concept and measure of intelligence.
Gatherings
Mensa has many events for members, from the local to the international level. Several countries hold a large event called the Annual Gathering (AG). It is held in a different city every year, with speakers, dances, leadership workshops, children's events, games, and other activities. The American and Canadian AGs are usually held during the American Independence Day (4 July) or Canada Day (1 July) weekends respectively.
Smaller gatherings called Regional Gatherings (RGs), which are held in various cities, attract members from large areas. The largest in the United States is held in the Chicago area around Halloween, notably featuring a costume party for which many members create pun-based costumes.
In 2006, the Mensa World Gathering was held from 8–13 August in Orlando, Florida to celebrate the 60th anniversary of the founding of Mensa. An estimated 2,500 attendees from over 30 countries gathered for this celebration. The International Board of Directors had a formal meeting there.
In 2010, a joint American-Canadian Annual Gathering was held in Dearborn, Michigan, to mark the 50th anniversary of Mensa in North America, one of several times the US and Canada AGs have been combined. Other multinational gatherings are the European Mensas Annual Gathering (EMAG) and the Asian Mensa Gathering (AMG).
Since 1990, American Mensa has sponsored the annual Mensa Mind Games competition, at which the Mensa Select award is given to five board games that are "original, challenging, and well designed".
Individual local groups and their members host smaller events for members and their guests. Lunch or dinner events, lectures, tours, theatre outings, and games nights are all common.
In Europe, since 2008 international meetings have been held under the name EMAG (European Mensa Annual Gathering), starting in Cologne that year. The next meetings were in Utrecht (2009), Prague (2010), Paris (2011), Stockholm (2012), Bratislava (2013), Zürich (2014), Berlin (2015), Kraków (2016), Barcelona (2017), Belgrade (2018) and Ghent (2019). The 2020 event was postponed and took place in 2021 in Brno. Upcoming EMAGs will be held in Strasbourg (2022) and Århus (2023).
In the Asia-Pacific region, there is an Asia-Pacific Mensa Annual Gathering (AMAG), with rotating countries hosting the event. This has included Gold Coast, Australia (2017), Cebu, Philippines (2018), New Zealand (2019), and South Korea (2020).
Publications
All Mensa groups publish members-only newsletters or magazines, which include articles and columns written by members, and information about upcoming Mensa events. Examples include the American Mensa Bulletin, the British Mensa magazine, Serbian MozaIQ, the Australian TableAus, the Mexican El Mensajero, and the French Contacts. Some local or regional groups have their own newsletter, such as those in the United States, UK, Germany, and France.
Mensa International publishes a Mensa World Journal, which "contains views and information about Mensa around the world". This journal is generally included in each national magazine.
Mensa also publishes the Mensa Research Journal, which "highlights scholarly articles and recent research related to intelligence". Unlike most Mensa publications, this journal is available to non-members.
Demographics
Only some national Mensas accept child members; many offer activities, resources, and newsletters specifically geared toward gifted children and their parents. American Mensa's youngest member (Kashe Quest), British Mensa's youngest member (Adam Kirby), and several Australian Mensa members all joined at the age of two. The current youngest member of Mensa is Adam Kirby, from Mitcham, London, who was invited to join at the age of two years and four months and gained full membership at the age of two years and five months. He scored 141 on the Stanford-Binet IQ test. Elise Tan-Roberts of the UK is the youngest person ever to join Mensa, having gained full membership at the age of two years and four months. In 2018, Mehul Garg became the youngest person in a decade to score the maximum of 162 on the test.
American Mensa's oldest member is 102, and British Mensa had a member aged 103.
According to American Mensa's website (as of 2013), 38 percent of its members are baby boomers between the ages of 51 and 68, 31 percent are Gen-Xers or Millennials between the ages of 27 and 48, and more than 2,600 members are under the age of 18. There are more than 1,800 families in the United States with two or more Mensa members. In addition, the American Mensa general membership is "66 percent male, 34 percent female". The aggregate of local and national leadership is distributed equally between the sexes.
Summary
Mensa International, organization of individuals with high IQs that aims to identify, understand, and support intelligence; encourage research into intelligence; and create and seek both social and intellectual experiences for its members. The society was founded in England in 1946 by attorney Roland Berrill and scientist Lance Ware. They chose the word mensa as its name because it means table in Latin and is also reminiscent of the Latin words for mind and month, suggesting the monthly meeting of great minds around a table. Members vary widely in education, income, and occupation. Mensa membership is open to adults and children. To become a Mensan, the only qualification is to report a score at the 98th percentile (meaning a score that is greater than or equal to that achieved by 98 percent of the general population taking the test) on an approved intelligence test that has been administered and supervised by a qualified examiner. Mensa also administers such tests itself.
Membership benefits include opportunities to participate in discussion groups, social events, and annual meetings. Mensa International offers some 200 special interest groups (SIGs) devoted to a variety of scholarly disciplines and recreational pursuits. Individual Mensa chapters organize workshops and special events, publish newsletters and magazines, and conduct annual conferences.
American Mensa was founded in 1960 by Peter Sturgeon. Its national office is in Arlington, Texas. There are chapters in large cities such as New York, Chicago, and Los Angeles, and regional groups in many areas of the United States. The Mensa Education & Research Foundation (MERF) was established in 1971 to promote Mensa’s mission. It grants awards and scholarships and publishes the Mensa Research Journal.
1183) Subway
Subway, also called underground, tube, or métro, underground railway system used to transport large numbers of passengers within urban and suburban areas. Subways are usually built under city streets for ease of construction, but they may take shortcuts and sometimes must pass under rivers. Outlying sections of the system usually emerge aboveground, becoming conventional railways or elevated transit lines. Subway trains are usually made up of a number of cars operated on the multiple-unit system.
The first subway system was proposed for London by Charles Pearson, a city solicitor, as part of a city-improvement plan shortly after the opening of the Thames Tunnel in 1843. After 10 years of discussion, Parliament authorized the construction of 3.75 miles (6 km) of underground railway between Farringdon Street and Bishop’s Road, Paddington. Work on the Metropolitan Railway began in 1860 by cut-and-cover methods—that is, by making trenches along the streets, giving them brick sides, providing girders or a brick arch for the roof, and then restoring the roadway on top. On Jan. 10, 1863, the line was opened using steam locomotives that burned coke and, later, coal; despite sulfurous fumes, the line was a success from its opening, carrying 9,500,000 passengers in the first year of its existence.

In 1886 the City of London and Southwark Subway Company (later the City and South London Railway) began work on their “tube” line, using a tunneling shield developed by J.H. Greathead. The tunnels were driven at a depth sufficient to avoid interference with building foundations or public-utility works, and there was no disruption of street traffic. The original plan called for cable operation, but electric traction was substituted before the line was opened. Operation began on this first electric underground railway in 1890 with a uniform fare of twopence for any journey on the 3-mile (5-kilometre) line.

In 1900 Charles Tyson Yerkes, an American railway magnate, arrived in London, and he was subsequently responsible for the construction of more tube railways and for the electrification of the cut-and-cover lines. During World Wars I and II the tube stations performed the unplanned function of air-raid shelters.
Many other cities followed London’s lead. In Budapest, a 2.5-mile (4-kilometre) electric subway was opened in 1896, using single cars with trolley poles; it was the first subway on the European continent. Considerable savings were achieved in its construction over earlier cut-and-cover methods by using a flat roof with steel beams instead of a brick arch, and therefore, a shallower trench.
In Paris, the Métro (Chemin de Fer Métropolitain de Paris) was started in 1898, and the first 6.25 miles (10 km) were opened in 1900. The rapid progress was attributed to the wide streets overhead and the modification of the cut-and-cover method devised by the French engineer Fulgence Bienvenue. Vertical shafts were sunk at intervals along the route; and, from there, side trenches were dug and masonry foundations to support wooden shuttering were placed immediately under the road surfaces. Construction of the roof arch then proceeded with relatively little disturbance to street traffic. This method, while it is still used in Paris, has not been widely copied in subway construction elsewhere.
In the United States the first practical subway line was constructed in Boston between 1895 and 1897. It was 1.5 miles (2.4 km) long and at first used trolley streetcars, or tramcars. Later, Boston acquired conventional subway trains. New York City opened the first section of what was to become the largest system in the world on Oct. 27, 1904. In Philadelphia, a subway system was opened in 1907, and Chicago’s system opened in 1943. Moscow constructed its original system in the 1930s.
In Canada, Toronto opened a subway in 1954; a second system was constructed in Montreal during the 1960s using Paris-type rubber-tired cars. In Mexico City the first stage of a combined underground and surface metro system (designed after the Paris Métro) was opened in 1969. In South America, the Buenos Aires subway opened in 1913. In Japan, the Tokyo subway opened in 1927, the Kyōto in 1931, the Ōsaka in 1933, and the Nagoya in 1957.
Automatic trains, designed, built, and operated using aerospace and computer technology, have been developed in a few metropolitan areas, including a section of the London subway system, the Victoria Line (completed 1971). The first rapid-transit system to be designed for completely automatic operation is BART (Bay Area Rapid Transit) in the San Francisco Bay area, completed in 1976. Trains are operated by remote control, requiring only one crewman per train to stand by in case of computer failure. The Washington, D.C., Metro, with an automatic railway control system and 600-foot- (183-metre-) long underground coffered-vault stations, opened its first subway line in 1976. Air-conditioned trains with lightweight aluminum cars, smoother and faster rides due to refinements in track construction and car-support systems, and attention to the architectural appearance of and passenger safety in underground stations are other features of modern subway construction.
London Underground
The London Underground (also known simply as the Underground, or by its nickname the Tube) is a rapid transit system serving Greater London and some parts of the adjacent counties of Buckinghamshire, Essex and Hertfordshire in the United Kingdom.
The Underground has its origins in the Metropolitan Railway, the world's first underground passenger railway. Opened in January 1863, it is now part of the Circle, Hammersmith & City and Metropolitan lines. The first line to operate underground electric traction trains, the City & South London Railway in 1890, is now part of the Northern line. The network has expanded to 11 lines, and in 2020/21 was used for 296 million passenger journeys, making it the world's 12th busiest metro system. The 11 lines collectively handle up to 5 million passenger journeys a day and serve 272 stations.
The system's first tunnels were built just below the ground, using the cut-and-cover method; later, smaller, roughly circular tunnels (which gave rise to the nickname the Tube) were dug at a deeper level. The system has 272 stations and 250 miles (400 km) of track. Despite its name, only 45% of the system is underground: much of the network in the outer environs of London is on the surface. In addition, the Underground does not cover most southern parts of Greater London, and there are only 31 stations south of the River Thames.
The early tube lines, originally owned by several private companies, were brought together under the "UndergrounD" brand in the early 20th century, and eventually merged along with the sub-surface lines and bus services in 1933 to form London Transport under the control of the London Passenger Transport Board (LPTB). The current operator, London Underground Limited (LUL), is a wholly owned subsidiary of Transport for London (TfL), the statutory corporation responsible for the transport network in London. As of 2015, 92% of operational expenditure is covered by passenger fares. The Travelcard ticket was introduced in 1983 and Oyster, a contactless ticketing system, in 2003. Contactless bank card payments were introduced in 2014, making the Underground the first public transport system in the world to accept them.
The LPTB commissioned many new station buildings, posters and public artworks in a modernist style. The schematic Tube map, designed by Harry Beck in 1931, was voted a national design icon in 2006 and now includes other TfL transport systems such as the Docklands Light Railway, London Overground, TfL Rail, and Tramlink. Other famous London Underground branding includes the roundel and the Johnston typeface, created by Edward Johnston in 1916.
1184) Wax
Waxes are a diverse class of organic compounds that are lipophilic, malleable solids near ambient temperatures. They include higher alkanes and lipids, typically with melting points above about 40 °C (104 °F), melting to give low viscosity liquids. Waxes are insoluble in water but soluble in organic, nonpolar solvents. Natural waxes of different types are produced by plants and animals and occur in petroleum.
Chemistry
Waxes are organic compounds that characteristically consist of long aliphatic alkyl chains, although aromatic compounds may also be present. Natural waxes may contain unsaturated bonds and include various functional groups such as fatty acids, primary and secondary alcohols, ketones, aldehydes and fatty acid esters. Synthetic waxes often consist of homologous series of long-chain aliphatic hydrocarbons (alkanes or paraffins) that lack functional groups.
Plant and animal waxes
Waxes are synthesized by many plants and animals. Those of animal origin typically consist of wax esters derived from a variety of fatty acids and fatty alcohols. In waxes of plant origin, characteristic mixtures of unesterified hydrocarbons may predominate over esters. The composition depends not only on species, but also on the geographic location of the organism.
Animal waxes
The best-known animal wax is beeswax, used in constructing the honeycombs of beehives, but other insects also secrete waxes. A major component of beeswax is myricyl palmitate, which is an ester of triacontanol and palmitic acid. Its melting point is 62-65 °C. Spermaceti occurs in large amounts in the head oil of the sperm whale. One of its main constituents is cetyl palmitate, another ester of a fatty acid and a fatty alcohol. Lanolin is a wax obtained from wool, consisting of esters of sterols.
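The formula of a wax ester can be checked by simple atom bookkeeping: the acid and the alcohol combine with the loss of one molecule of water (R-COOH + R'-OH -> R-COO-R' + H2O). The short Python sketch below (my own illustration, not from the source) applies this to the myricyl palmitate mentioned above.

from collections import Counter

def ester_formula(acid, alcohol):
    """Combine the atom counts of the acid and the alcohol, then remove one H2O."""
    total = Counter(acid) + Counter(alcohol)
    total["H"] -= 2
    total["O"] -= 1
    return dict(total)

palmitic_acid = {"C": 16, "H": 32, "O": 2}   # CH3(CH2)14COOH
triacontanol  = {"C": 30, "H": 62, "O": 1}   # CH3(CH2)28CH2OH
print(ester_formula(palmitic_acid, triacontanol))  # {'C': 46, 'H': 92, 'O': 2}, i.e. myricyl palmitate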
Plant waxes
Plants secrete waxes into and on the surface of their cuticles as a way to control evaporation, wettability and hydration. The epicuticular waxes of plants are mixtures of substituted long-chain aliphatic hydrocarbons, containing alkanes, alkyl esters, fatty acids, primary and secondary alcohols, diols, ketones and aldehydes. From the commercial perspective, the most important plant wax is carnauba wax, a hard wax obtained from the Brazilian palm Copernicia prunifera. Containing the ester myricyl cerotate, it has many applications, such as confectionery and other food coatings, car and furniture polish, floss coating, and surfboard wax. Other more specialized vegetable waxes include jojoba oil, candelilla wax and ouricury wax.
Modified plant and animal waxes
Plant and animal based waxes or oils can undergo selective chemical modifications to produce waxes with more desirable properties than are available in the unmodified starting material. This approach has relied on green chemistry methods, including olefin metathesis and enzymatic reactions, and can be used to produce waxes from inexpensive starting materials such as vegetable oils.
Petroleum derived waxes
Although many natural waxes contain esters, paraffin waxes are hydrocarbons, mixtures of alkanes usually in a homologous series of chain lengths. These materials represent a significant fraction of petroleum. They are refined by vacuum distillation. Paraffin waxes are mixtures of saturated n- and iso-alkanes, naphthenes, and alkyl- and naphthene-substituted aromatic compounds. A typical alkane paraffin wax chemical composition comprises hydrocarbons with the general formula CnH2n+2, such as hentriacontane, C31H64. The degree of branching has an important influence on the properties. Microcrystalline wax is a less commonly produced petroleum-based wax that contains a higher percentage of isoparaffinic (branched) and naphthenic hydrocarbons.
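Since paraffin-wax hydrocarbons follow the general formula CnH2n+2, the formula and approximate molar mass of any member of the series can be computed directly from its carbon count. The short Python sketch below (my own illustration, not from the source) does this for the hentriacontane example in the text, using standard atomic weights.

ATOMIC_MASS = {"C": 12.011, "H": 1.008}   # standard atomic weights, g/mol

def alkane(n):
    """Return (formula, approximate molar mass in g/mol) for the alkane with n carbon atoms."""
    h = 2 * n + 2
    mass = n * ATOMIC_MASS["C"] + h * ATOMIC_MASS["H"]
    return "C{}H{}".format(n, h), round(mass, 1)

print(alkane(31))   # ('C31H64', 436.9) - hentriacontane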
Millions of tons of paraffin waxes are produced annually. They are used in foods (such as chewing gum and cheese wrapping), in candles and cosmetics, as non-stick and waterproofing coatings and in polishes.
Montan wax
Montan wax is a fossilized wax extracted from coal and lignite. It is very hard, reflecting the high concentration of saturated fatty acids and alcohols. Although dark brown and odorous, it can be purified and bleached to give commercially useful products.
Polyethylene and related derivatives
As of 1995, about 200 million kilograms of polyethylene waxes were consumed annually.
Polyethylene waxes are manufactured by one of three methods:
* The direct polymerization of ethylene, potentially including co-monomers as well;
* The thermal degradation of high molecular weight polyethylene resin;
* The recovery of low molecular weight fractions from high molecular weight resin production.
Each production technique generates products with slightly different properties. Key properties of low molecular weight polyethylene waxes are viscosity, density and melt point.
Polyethylene waxes produced by means of degradation or recovery from polyethylene resin streams contain very low molecular weight materials that must be removed to prevent volatilization and potential fire hazards during use. Polyethylene waxes manufactured by this method are usually stripped of low molecular weight fractions to yield a flash point >500°F (>260°C). Many polyethylene resin plants produce a low molecular weight stream often referred to as Low Polymer Wax (LPW). LPW is unrefined; it contains volatile oligomers and corrosive catalyst residues and may contain other foreign material and water. Refining of LPW to produce a polyethylene wax involves removal of the oligomers and hazardous catalyst residues. Proper refining of LPW to produce polyethylene wax is especially important when the wax is to be used in applications requiring FDA or other regulatory certification.
Uses
Waxes are mainly consumed industrially as components of complex formulations, often for coatings. The main use of polyethylene and polypropylene waxes is in the formulation of colourants for plastics. Waxes confer matting effects and wear resistance to paints. Polyethylene waxes are incorporated into inks in the form of dispersions to decrease friction. They are employed as release agents, find use as slip agents in furniture, and confer corrosion resistance.
Candles
Waxes such as paraffin wax or beeswax, and hard fats such as tallow, are used to make candles, used for lighting and decoration. Another fuel used in candle manufacturing is soy wax, which is made by hydrogenating soybean oil.
Wax products
Waxes are used as finishes and coatings for wood products. Beeswax is frequently used as a lubricant on drawer slides where wood-to-wood contact occurs.
Other uses
Sealing wax was used to close important documents in the Middle Ages. Wax tablets were used as writing surfaces. Several types of wax were traded in the Middle Ages: four named kinds (Ragusan, Montenegro, Byzantine, and Bulgarian), "ordinary" waxes from Spain, Poland, and Riga, unrefined waxes, and colored waxes (red, white, and green). Waxes are used to make wax paper, impregnating and coating paper and card to waterproof it or make it resistant to staining, or to modify its surface properties. Waxes are also used in shoe polishes, wood polishes, and automotive polishes, as mold release agents in mold making, as a coating for many cheeses, and to waterproof leather and fabric. Wax has been used since antiquity as a temporary, removable model in lost-wax casting of gold, silver and other materials.
Wax with colorful pigments added has been used as a medium in encaustic painting, and is used today in the manufacture of crayons, china markers and colored pencils. Carbon paper, used for making duplicate typewritten documents, was coated with carbon black suspended in wax, typically montan wax, but has largely been superseded by photocopiers and computer printers. Lipstick and mascara are blends of various fats and waxes colored with pigments, and both beeswax and lanolin are used in other cosmetics. Ski wax is used in skiing and snowboarding, and the sports of surfing and skateboarding often use wax to enhance performance.
Some waxes are considered food-safe and are used to coat wooden cutting boards and other items that come into contact with food. Beeswax or coloured synthetic wax is used to decorate Easter eggs in Romania, Ukraine, Poland, Lithuania and the Czech Republic. Paraffin wax is used in making chocolate covered sweets.
Wax is also used in wax bullets, which are used as simulation aids.
Specific examples:
Animal waxes
* Beeswax - produced by honey bees
* Chinese wax - produced by the scale insect Ceroplastes ceriferus
* Lanolin (wool wax) - from the sebaceous glands of sheep
* Shellac wax - from the lac insect Kerria lacca
* Spermaceti - from the head cavities and blubber of the sperm whale
Vegetable waxes
* Bayberry wax - from the surface wax of the fruits of the bayberry shrub, Myrica faya
* Candelilla wax - from the Mexican shrubs Euphorbia cerifera and Euphorbia antisyphilitica
* Carnauba wax - from the leaves of the Carnauba palm, Copernicia cerifera
* Castor wax - catalytically hydrogenated castor oil
* Esparto wax - a byproduct of making paper from esparto grass (Macrochloa tenacissima)
* Japan wax - a vegetable triglyceride (not a true wax), from the berries of Rhus and Toxicodendron species
* Jojoba oil - a liquid wax ester, from the seed of Simmondsia chinensis.
* Ouricury wax - from the Brazilian feather palm, Syagrus coronata.
* Rice bran wax - obtained from rice bran (Oryza sativa)
* Soy wax - from soybean oil
* Tallow Tree wax - from the seeds of the tallow tree Triadica sebifera.
Mineral waxes
* Ceresin waxes
* Montan wax - extracted from lignite and brown coal
* Ozocerite - found in lignite beds
* Peat waxes
Petroleum waxes
* Paraffin wax - made of long-chain alkane hydrocarbons
* Microcrystalline wax - with very fine crystalline structure
Summary
Wax, any of a class of pliable substances of animal, plant, mineral, or synthetic origin that differ from fats in being less greasy, harder, and more brittle and in containing principally compounds of high molecular weight (e.g., fatty acids, alcohols, and saturated hydrocarbons). Waxes share certain characteristic physical properties. Many of them melt at moderate temperatures (i.e., between about 35° and 100° C, or 95° and 212° F) and form hard films that can be polished to a high gloss, making them ideal for use in a wide array of polishes. They do share some of the same properties as fats. Waxes and fats, for example, are soluble in the same solvents and both leave grease spots on paper.
Notwithstanding such physical similarities, animal and plant waxes differ chemically from petroleum, or hydrocarbon, waxes and synthetic waxes. They are esters that result from a reaction between fatty acids and certain alcohols other than glycerol, either of a group called sterols (e.g., cholesterol) or an alcohol containing 12 or a larger even number of carbon atoms in a straight chain (e.g., cetyl alcohol). The fatty acids found in animal and vegetable waxes are almost always saturated. They vary from lauric to octatriacontanoic acid (C37H75COOH). Saturated alcohols from C12 to C36 have been identified in various waxes. Several dihydric (two hydroxyl groups) alcohols have been separated, but they do not form a large proportion of any wax. Also, several unidentified branched-chain fatty acids and alcohols have been found in minor quantities. Several cyclic sterols (e.g., cholesterol and analogues) make up major portions of wool wax.
Only a few vegetable waxes are produced in commercial quantities. Carnauba wax, which is very hard and is used in some high-gloss polishes, is probably the most important of these. It is obtained from the surface of the fronds of a species of palm tree native to Brazil. A similar wax, candelilla wax, is obtained commercially from the surface of the candelilla plant, which grows wild in Texas and Mexico. Sugarcane wax, which occurs on the surface of sugarcane leaves and stalks, is obtainable from the sludges of cane-juice processing. Its properties and uses are similar to those of carnauba wax, but it is normally dark in colour and contains more impurities. Other cuticle waxes occur in trace quantities in such vegetable oils as linseed, soybean, corn (maize), and sesame. They are undesirable because they may precipitate when the oil stands at room temperature, but they can be removed by cooling and filtering. Cuticle wax accounts for the beautiful gloss of polished apples.
Beeswax, the most widely distributed and important animal wax, is softer than the waxes mentioned and finds little use in gloss polishes. It is used, however, for its gliding and lubricating properties as well as in waterproofing formulations. Wool wax, the main constituent of the fat that covers the wool of sheep, is obtained as a by-product in scouring raw wool. Its purified form, called lanolin, is used as a pharmaceutical or cosmetic base because it is easily assimilated by the human skin. Sperm oil and spermaceti, both obtained from sperm whales, are liquid at ordinary temperatures and are used mainly as lubricants.
About 90 percent of the wax used for commercial purposes is recovered from petroleum by dewaxing lubricating-oil stocks. Petroleum wax is generally classified into three principal types: paraffin (see paraffin wax), microcrystalline, and petrolatum. Paraffin is widely used in candles, crayons, and industrial polishes. It is also employed for insulating components of electrical equipment and for waterproofing wood and certain other materials. Microcrystalline wax is used chiefly for coating paper for packaging, and petrolatum is employed in the manufacture of medicinal ointments and cosmetics. Synthetic wax is derived from ethylene glycol, an organic compound commercially produced from ethylene gas. It is commonly blended with petroleum waxes to manufacture a variety of products.
1185) Perfume
Perfume is a mixture of fragrant essential oils or aroma compounds (fragrances), fixatives and solvents, usually in liquid form, used to give the human body, animals, food, objects, and living-spaces an agreeable scent. The 1939 Nobel Laureate for Chemistry, Leopold Ružička stated in 1945 that "right from the earliest days of scientific chemistry up to the present time perfumes have substantially contributed to the development of organic chemistry as regards methods, systematic classification, and theory."
Ancient texts and archaeological excavations show the use of perfumes in some of the earliest human civilizations. Modern perfumery began in the late 19th century with the commercial synthesis of aroma compounds such as vanillin or coumarin, which allowed for the composition of perfumes with smells previously unattainable solely from natural aromatics.
Dilution classes
Perfume types reflect the concentration of aromatic compounds in a solvent, which in fine fragrance is typically ethanol or a mix of water and ethanol. Various sources differ considerably in the definitions of perfume types. The intensity and longevity of a perfume depend on the concentration of the aromatic compounds, or perfume oils, used: as the percentage of aromatic compounds increases, so do the intensity and longevity of the scent. Specific terms are used to describe a fragrance's approximate concentration by the percent of perfume oil in the volume of the final product (a rough classification sketch follows the list below). The most widespread terms are:
* parfum or extrait, in English known as perfume extract, pure perfume, or simply perfume: 15–40% aromatic compounds (IFRA: typically ~20%);
* esprit de parfum (ESdP): 15–30% aromatic compounds, a seldom used strength concentration in between EdP and perfume;
* eau de parfum (EdP) or parfum de toilette (PdT): 10–20% aromatic compounds (typically ~15%); sometimes called "eau de perfume" or "millésime." Parfum de toilette is a less common term, most popular in the 1980s, that is generally analogous to eau de parfum.
* eau de toilette (EdT): 5–15% aromatic compounds (typically ~10%); this is the staple for most masculine perfumes.
* eau de Cologne (EdC): 3–8% aromatic compounds (typically ~5%). This concentration is often simply called cologne; see below for more information on the confusing nature of the term.
* eau fraîche: products sold as "splashes", "mists", "veils" and other imprecise terms. Generally these products contain 3% or less aromatic compounds and are diluted with water rather than oil or alcohol.
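As a rough illustration of the bands above, the short Python sketch below (my own, not from the source) maps an approximate perfume-oil percentage to a class name; because the quoted ranges overlap, the cut-off values chosen here are judgment calls rather than industry rules.

def dilution_class(percent_oil):
    # Rough bands derived from the overlapping ranges quoted above.
    if percent_oil >= 15:
        return "parfum / extrait"
    if percent_oil >= 10:
        return "eau de parfum (EdP)"
    if percent_oil >= 5:
        return "eau de toilette (EdT)"
    if percent_oil >= 3:
        return "eau de Cologne (EdC)"
    return "eau fraiche (splash/mist)"

print(dilution_class(20))  # parfum / extrait
print(dilution_class(8))   # eau de toilette (EdT)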
There is much confusion over the term "cologne", which has three meanings. The first and oldest definition refers to a family of fresh, citrus-based fragrances distilled using extracts from citrus, floral, and woody ingredients. Supposedly these were first developed in the early 18th century in Cologne, Germany, hence the name. This type of "classical cologne" describes unisex compositions "which are basically citrus blends and do not have a perfume parent." Examples include Mäurer & Wirtz's 4711 (created in 1799), and Guerlain's Eau de Cologne impériale (1853).
In the 20th century, the term took on a second meaning. Fragrance companies began to offer lighter, less concentrated interpretations of their existing perfumes, making their products available to a wider range of customers. Guerlain, for example, offered an eau de Cologne version of its flagship perfume Shalimar. In contrast to classical colognes, this type of modern cologne is a lighter, diluted, less concentrated interpretation of a more concentrated product, typically a pure parfum. The cologne version is often the lightest concentration from a line of fragrance products.
Finally, the term "cologne" has entered the English language as a generic, overarching term to denote a fragrance typically worn by a man as opposed to a woman, regardless of its concentration. The actual product worn by a man may technically be an eau de toilette, but he may still say that he "wears cologne". A similar problem surrounds the term "perfume", which is sometimes used in a generic sense to refer to fragrances marketed to women, whether or not the fragrance is actually an extrait.
Classical colognes first appeared in Europe in the 17th century. The first fragrance labeled a "parfum" extract with a high concentration of aromatic compounds was Guerlain's Jicky in 1889. Eau de toilette appeared alongside parfum around the turn of the century. The EdP concentration and terminology are the most recent, having originally been developed to offer the radiance of an EdT with the longevity of an extrait. Parfum de toilette and EdP began to appear in the 1970s and gained popularity in the 1980s. EdP is probably the most widespread strength concentration, often the first concentration offered, and usually referred to generically as "perfume."
Sources of aromatics:
Plant sources
Plants have long been used in perfumery as a source of essential oils and aroma compounds. These aromatics are usually secondary metabolites produced by plants as protection against herbivores and infections, as well as to attract pollinators. Plants are by far the largest source of fragrant compounds used in perfumery. The sources of these compounds may be derived from various parts of a plant. A plant can offer more than one source of aromatics; for instance, the aerial portions and seeds of coriander have remarkably different odors from each other. Orange leaves, blossoms, and fruit zest are the respective sources of petitgrain, neroli, and orange oils.
* Bark: Commonly used barks include cinnamon and cascarilla. The fragrant oil in sassafras root bark is also used either directly or purified for its main constituent, safrole, which is used in the synthesis of other fragrant compounds.
* Flowers and blossoms: Undoubtedly the largest and most common source of perfume aromatics. Includes the flowers of several species of rose and jasmine, as well as osmanthus, plumeria, mimosa, tuberose, narcissus, scented geranium, cassie, and ambrette, and the blossoms of citrus and ylang-ylang trees. Although not traditionally thought of as a flower, the unopened flower buds of the clove are also commonly used. Most orchid flowers are not commercially used to produce essential oils or absolutes, except in the case of vanilla, an orchid whose flowers must be pollinated (often by hand) if they are to produce the aromatic pods.
* Fruits: Fresh fruits such as apples, strawberries, and cherries rarely yield the expected odors when extracted; if such fragrance notes are found in a perfume, they are more likely to be of synthetic origin. Notable exceptions include blackcurrant leaf, litsea cubeba, vanilla, and juniper berry. The most commonly used fruits yield their aromatics from the rind; they include citrus such as oranges, lemons, and limes. Although grapefruit rind is still used for aromatics, more and more commercially used grapefruit aromatics are artificially synthesized, since the natural aromatic contains sulfur and its degradation product smells quite unpleasant.
* Leaves and twigs: Commonly used for perfumery are lavender leaf, patchouli, sage, violets, rosemary, and citrus leaves. Sometimes leaves are valued for the "green" smell they bring to perfumes, examples of this include hay and tomato leaf.
* Resins: Valued since antiquity, resins have been widely used in incense and perfumery. Highly fragrant and antiseptic resins and resin-containing perfumes have been used by many cultures as medicines for a large variety of ailments. Commonly used resins in perfumery include labdanum, frankincense/olibanum, myrrh, balsam of Peru, benzoin. Pine and fir resins are a particularly valued source of terpenes used in the organic synthesis of many other synthetic or naturally occurring aromatic compounds. Some of what is called amber and copal in perfumery today is the resinous secretion of fossil conifers.
* Roots, rhizomes and bulbs: Commonly used terrestrial portions in perfumery include iris rhizomes, vetiver roots, and various rhizomes of the ginger family.
* Seeds: Commonly used seeds include tonka bean, carrot seed, coriander, caraway, cocoa, nutmeg, mace, cardamom, and anise.
* Woods: Highly important in providing the base notes to a perfume, wood oils and distillates are indispensable in perfumery. Commonly used woods include sandalwood, rosewood, agarwood, birch, cedar, juniper, and pine. These are used in the form of macerations or dry-distilled (rectified) forms.
Animal sources:
* Ambergris: Lumps of oxidized fatty compounds, whose precursors were secreted and expelled by the sperm whale. Ambergris should not be confused with yellow amber, which is used in jewelry. Because the harvesting of ambergris involves no harm to its animal source, it remains one of the few animalic fragrancing agents around which little controversy now exists.
* Castoreum: Obtained from the odorous sacs of the North American beaver.
* Civet: Also called civet musk, this is obtained from the odorous sacs of the civets, animals in the family Viverridae, related to the mongoose. World Animal Protection investigated African civets caught for this purpose.
* Hyraceum: Commonly known as "Africa stone", this is the petrified excrement of the rock hyrax.
* Honeycomb: From the honeycomb of the honeybee. Both beeswax and honey can be solvent extracted to produce an absolute. Beeswax is extracted with ethanol and the ethanol evaporated to produce beeswax absolute.
* Musk: Originally derived from a gland (sac or pod) located between the genitals and the umbilicus of the Himalayan male musk deer Moschus moschiferus, it has now mainly been replaced by the use of synthetic musks sometimes known as "white musk".
Other natural sources
* Lichens: Commonly used lichens include oakmoss and treemoss thalli.
* "Seaweed": Distillates are sometimes used as essential oil in perfumes. An example of a commonly used seaweed is Fucus vesiculosus, which is commonly referred to as bladder wrack. Natural seaweed fragrances are rarely used due to their higher cost and lower potency than synthetics.
Synthetic sources
Many modern perfumes contain synthesized odorants. Synthetics can provide fragrances which are not found in nature. For instance, Calone, a compound of synthetic origin, imparts a fresh ozonous metallic marine scent that is widely used in contemporary perfumes. Synthetic aromatics are often used as an alternate source of compounds that are not easily obtained from natural sources. For example, linalool and coumarin are both naturally occurring compounds that can be inexpensively synthesized from terpenes. Orchid scents (typically salicylates) are usually not obtained directly from the plant itself but are instead synthetically created to match the fragrant compounds found in various orchids.
One of the most commonly used classes of synthetic aromatics by far are the white musks. These materials are found in all forms of commercial perfumes as a neutral background to the middle notes. These musks are added in large quantities to laundry detergents in order to give washed clothes a lasting "clean" scent.
The majority of the world's synthetic aromatics are created by relatively few companies. They include:
* Givaudan
* International Flavors and Fragrances (IFF)
* Firmenich
* Takasago
* Symrise
Each of these companies patents several processes for the production of aromatic synthetics annually.
Summary
Perfume, fragrant product that results from the artful blending of certain odoriferous substances in appropriate proportions. The word is derived from the Latin per fumum, meaning “through smoke.” The art of perfumery was apparently known to the ancient Chinese, Hindus, Egyptians, Israelites, Carthaginians, Arabs, Greeks, and Romans. References to perfumery materials and even perfume formulas are found in the Bible.
Raw materials used in perfumery include natural products, of plant or animal origin, and synthetic materials. Essential oils (q.v.) are most often obtained from plant materials by steam distillation. Certain delicate oils may be obtained by solvent extraction, a process also employed to extract waxes and perfume oil, yielding—by removal of the solvent—a solid substance called a concrete. Treatment of the concrete with a second substance, usually alcohol, leaves the waxes undissolved and provides the concentrated flower oil called an absolute. In the extraction method called enfleurage, petals are placed between layers of purified animal fat, which become saturated with flower oil, and alcohol is then used to obtain the absolute. The expression method, used to recover citrus oils from fruit peels, ranges from a traditional procedure of pressing with sponges to mechanical maceration. Individual compounds used in perfumery may be isolated from the essential oils, usually by distillation, and may sometimes be reprocessed to obtain still other perfumery chemicals.
Certain animal secretions contain odoriferous substances that increase the lasting qualities of perfumes. Such substances and some of their constituents act as fixatives, preventing more volatile perfume ingredients from evaporating too rapidly. They are usually employed in the form of alcoholic solutions. The animal products include ambergris from the sperm whale, castor (also called castoreum) from the beaver, civet from the civet cat, and musk from the musk deer.
Odour characteristics ranging from floral effects to odours unknown in nature are available with the use of synthetic, aromatic materials.
Fine perfumes may contain more than 100 ingredients. Each perfume is composed of a top note, the refreshing, volatile odour perceived immediately; a middle note, or modifier, providing full, solid character; and a base note, also called an end note or basic note, which is the most persistent. Perfumes can generally be classified according to one or more identifiable dominant odours. The floral group blends such odours as jasmine, rose, lily of the valley, and gardenia. The spicy blends feature such aromas as carnation, clove, cinnamon, and nutmeg. The woody group is characterized by such odours as vetiver (derived from an aromatic grass called vetiver, or khuskhus), sandalwood, and cedarwood. The mossy family is dominated by an aroma of oak moss. The group known as the Orientals combines woody, mossy, and spicy notes with such sweet odours as vanilla or balsam and is usually accentuated by such animal odours as musk or civet. The herbal group is characterized by such odours as clover and sweet grass. The leather–tobacco group features the aromas of leather, tobacco, and the smokiness of birch tar. The aldehydic group is dominated by odours of aldehydes, usually having a fruity character. Fragrances designed for men are generally classified as citrus, spice, leather, lavender, fern, or woody.
Perfumes are usually alcoholic solutions. The solutions, generally known as perfumes but also called extraits, extracts, or handkerchief perfumes, contain about 10–25 percent perfume concentrates. The terms toilet water and cologne are commonly used interchangeably; such products contain about 2–6 percent perfume concentrate. Originally, eau de cologne was a mixture of citrus oils from such fruits as lemons and oranges, combined with such substances as lavender and neroli (orange-flower oil); toilet waters were less concentrated forms of other types of perfume. Aftershave lotions and splash colognes usually contain about 0.5–2 percent perfume oil. Recent developments include aerosol sprays and highly concentrated bath oils, sometimes called skin perfumes.
Perfumes employed to scent soaps, talcums, face powders, deodorants and antiperspirants, and other cosmetic products must be formulated to avoid being changed or becoming unstable in the new medium. They must also be formulated so as to avoid unacceptable alterations in the colour or consistency of the product.
Industrial perfumes are employed to cover up undesirable odours, as in paints and cleaning materials, or to impart a distinctive odour, as in the addition of leather odours to plastics used for furniture coverings and the addition of bread odours to wrapping papers used for breads.
1186) Hemostasis
Hemostasis or haemostasis is a process to prevent and stop bleeding, meaning to keep blood within a damaged blood vessel (the opposite of hemostasis is hemorrhage). It is the first stage of wound healing. This involves coagulation, blood changing from a liquid to a gel. Intact blood vessels are central to moderating blood's tendency to form clots. The endothelial cells of intact vessels prevent blood clotting with a heparin-like molecule and thrombomodulin and prevent platelet aggregation with nitric oxide and prostacyclin. When endothelial injury occurs, the endothelial cells stop secreting these coagulation and aggregation inhibitors and instead secrete von Willebrand factor, which initiates the maintenance of hemostasis after injury. Hemostasis has three major steps: 1) vasoconstriction, 2) temporary blockage of a break by a platelet plug, and 3) blood coagulation, or formation of a fibrin clot. These processes seal the hole until tissues are repaired.
Steps of mechanism
Hemostasis occurs when blood is present outside of the body or blood vessels. It is the body's innate response to stop bleeding and loss of blood. During hemostasis three steps occur in a rapid sequence. Vascular spasm is the first response, as the blood vessels constrict to allow less blood to be lost. In the second step, platelet plug formation, platelets stick together to form a temporary seal to cover the break in the vessel wall. The third and last step is called coagulation or blood clotting. Coagulation reinforces the platelet plug with fibrin threads that act as a "molecular glue". Platelets are a large factor in the hemostatic process. They allow for the creation of the "platelet plug" that forms almost directly after a blood vessel has been ruptured. Within seconds of a blood vessel's endothelial wall being disrupted, platelets begin to adhere to the sub-endothelium surface. It takes approximately sixty seconds until the first fibrin strands begin to intersperse among the wound. After several minutes the platelet plug is completely reinforced by fibrin. Hemostasis is maintained in the body via three mechanisms:
* Vascular spasm (vasoconstriction) - Vasoconstriction is produced by vascular smooth muscle cells, and is the blood vessel's first response to injury. The smooth muscle cells are controlled by the vascular endothelium, which releases intravascular signals to control the contracting properties. When a blood vessel is damaged, there is an immediate reflex, initiated by local sympathetic pain receptors, which helps promote vasoconstriction. The damaged vessels constrict (vasoconstrict), which reduces the amount of blood flow through the area and limits the amount of blood loss. Collagen is exposed at the site of injury, and this collagen promotes platelet adhesion to the injury site. Platelets release cytoplasmic granules which contain serotonin, ADP and thromboxane A2, all of which increase the effect of vasoconstriction. The spasm response becomes more effective as the amount of damage is increased. Vascular spasm is much more effective in smaller blood vessels.
* Platelet plug formation - Platelets adhere to damaged endothelium to form a platelet plug (primary hemostasis) and then degranulate. This process is regulated through thromboregulation. Plug formation is activated by a glycoprotein called von Willebrand factor (vWF), which is found in plasma. Platelets play one of the major roles in the hemostatic process. When platelets come across injured endothelial cells, they change shape, release granules and ultimately become 'sticky'. Platelets express certain receptors, some of which are used for the adhesion of platelets to collagen. When platelets are activated, they express glycoprotein receptors that interact with other platelets, producing aggregation and adhesion. Platelets release cytoplasmic granules such as adenosine diphosphate (ADP), serotonin and thromboxane A2. Adenosine diphosphate (ADP) attracts more platelets to the affected area, serotonin is a vasoconstrictor, and thromboxane A2 assists in platelet aggregation, vasoconstriction and degranulation. As more chemicals are released, more platelets stick and release their chemicals, creating a platelet plug and continuing the process in a positive feedback loop. Platelets alone are responsible for stopping the bleeding from the unnoticed wear and tear of our skin on a daily basis. This is referred to as primary hemostasis.
* Clot formation - Once the platelet plug has been formed by the platelets, the clotting factors (a dozen proteins that travel in the blood plasma in an inactive state) are activated in a sequence of events known as the 'coagulation cascade', which leads to the formation of fibrin from the inactive plasma protein fibrinogen. Thus, a fibrin mesh is produced all around the platelet plug to hold it in place; this step is called "secondary hemostasis". During this process some red and white blood cells are trapped in the mesh, which causes the primary hemostasis plug to become harder: the resultant plug is called a 'thrombus' or 'clot'. Therefore, a 'blood clot' contains the secondary hemostasis plug with blood cells trapped in it. Though this is often a good step for wound healing, it has the ability to cause severe health problems if the thrombus becomes detached from the vessel wall and travels through the circulatory system; if it reaches the brain, heart or lungs it could lead to stroke, heart attack, or pulmonary embolism respectively. However, without this process the healing of a wound would not be possible.
Types
Hemostasis can be achieved in various other ways if the body cannot do it naturally (or needs help) during surgery or medical treatment. When the body is under shock and stress, hemostasis is harder to achieve. Though natural hemostasis is most desired, having other means of achieving it is vital for survival in many emergency settings. Without the ability to stimulate hemostasis the risk of hemorrhaging is great. During surgical procedures, the types of hemostasis listed below can be used to control bleeding while avoiding and reducing the risk of tissue destruction. Hemostasis can be achieved by chemical agents as well as mechanical or physical agents. Which type of hemostasis is used depends on the situation.
Developmental Haemostasis refers to the differences in the haemostatic system between children and adults.
In emergency medicine
Physicians and medical practitioners continue to debate the subject of hemostasis and how to handle situations involving large injuries. If an individual sustains a large injury resulting in extreme blood loss, a hemostatic agent alone is not very effective. Medical professionals continue to debate the best ways to assist a patient in such a state; however, it is universally accepted that hemostatic agents are the primary tool for smaller bleeding injuries.
Some main types of hemostasis used in emergency medicine include:
* Chemical/topical - This is a topical agent often used in surgery settings to stop bleeding. Microfibrillar collagen is the most popular choice among surgeons because it attracts the patient's natural platelets and starts the blood clotting process when it comes in contact with the platelets. This topical agent requires the normal hemostatic pathway to be properly functional.
* Direct pressure or pressure dressing - This type of hemostasis approach is most commonly used in situations where proper medical attention is not available. Applying pressure and/or a dressing to a bleeding wound slows the process of blood loss, allowing more time to get to an emergency medical setting. Soldiers use this skill during combat when someone has been injured, because it allows blood loss to be decreased, giving the system time to start coagulation.
* Sutures and ties - Sutures are often used to close an open wound, keeping pathogens and other unwanted debris out of the injured site; they are also essential to the process of hemostasis. Sutures and ties allow skin to be joined back together, allowing platelets to start the process of hemostasis at a quicker pace. Using sutures results in a quicker recovery period because the surface area of the wound has been decreased.
* Physical agents (gelatin sponge) - Gelatin sponges have been indicated as great hemostatic devices. Once applied to a bleeding area, a gelatin sponge quickly stops or reduces the amount of bleeding present. These physical agents are mostly used in surgical settings as well as after surgery treatments. These sponges absorb blood, allow for coagulation to occur faster, and give off chemical responses that decrease the time it takes for the hemostasis pathway to start.
Disorders
The body's hemostasis system requires careful regulation in order to work properly. If the blood does not clot sufficiently, it may be due to bleeding disorders such as hemophilia or immune thrombocytopenia; this requires careful investigation. Over-active clotting can also cause problems; thrombosis, where blood clots form abnormally, can potentially cause embolisms, where blood clots break off and subsequently become lodged in a vein or artery.
Hemostasis disorders can develop for many different reasons. They may be congenital, due to a deficiency or defect in an individual's platelets or clotting factors. A number of disorders can be acquired as well, such as in HELLP syndrome, which is due to pregnancy, or Hemolytic-uremic syndrome (HUS), which is due to E. coli toxins.
(HELLP syndrome is a complication of pregnancy; the acronym stands for Hemolysis, Elevated Liver enzymes, and Low Platelet count. It usually begins during the last three months of pregnancy or shortly after childbirth. Symptoms may include feeling tired, retaining fluid, headache, nausea, upper right abdominal pain, blurry vision, nosebleeds, and seizures. Complications may include disseminated intravascular coagulation, placental abruption, and kidney failure.)
History of artificial hemostasis
The process of preventing blood loss from a vessel or organ of the body is referred to as hemostasis. The term comes from the Ancient Greek roots "heme", meaning blood, and "stasis", meaning halting; put together, they mean the "halting of blood". The origin of hemostasis dates back as far as ancient Greece; it is first referenced as being used at the Battle of Troy. It started with the realization that excessive bleeding inevitably equaled death. Vegetable and mineral styptics were used on large wounds by the Greeks and Romans until the Greek takeover of Egypt around 332 BC. At this time many more advances in the general medical field were developed through the study of Egyptian mummification practice, which led to greater knowledge of the hemostatic process. It was during this time that many of the veins and arteries running throughout the human body were identified, along with the directions in which they traveled. Doctors of this time realized that if these were plugged, blood could not continue to flow out of the body. Nevertheless, it took until the invention of the printing press in the fifteenth century for medical notes and ideas to travel westward, allowing the idea and practice of hemostasis to be expanded.
Research
There is currently a great deal of research being conducted on hemostasis. The most current research focuses on the genetic factors of hemostasis and how they can be altered to reduce the impact of genetic disorders that disrupt the natural process of hemostasis.
Von Willebrand disease is associated with a defect in the ability of the body to create the platelet plug and the fibrin mesh that ultimately stops the bleeding. New research suggests that von Willebrand disease is much more common in adolescence. This disease hinders the natural process of hemostasis, making excessive bleeding a concern in patients with the disease. Complex treatments are available, including a combination of therapies, estrogen-progesterone preparations, desmopressin, and von Willebrand factor concentrates. Current research is trying to find better ways to deal with this disease; however, much more research is needed to establish the effectiveness of current treatments and whether there are more effective ways to treat the disease.
(Von Willebrand disease (VWD) is the most common hereditary blood-clotting disorder in humans. An acquired form can sometimes result from other medical conditions. It arises from a deficiency in the quality or quantity of von Willebrand factor (VWF), a multimeric protein that is required for platelet adhesion. It is known to affect several breeds of dogs as well as humans. The three forms of VWD are hereditary, acquired, and pseudo or platelet type. The three types of hereditary VWD are VWD type 1, VWD type 2, and VWD type 3. Type 2 contains various subtypes. Platelet type VWD is also an inherited condition.)
1187) Polymer
Polymer is any of a class of natural or synthetic substances composed of very large molecules, called macromolecules, that are multiples of simpler chemical units called monomers. Polymers make up many of the materials in living organisms, including, for example, proteins, cellulose, and nucleic acids. Moreover, they constitute the basis of such minerals as diamond, quartz, and feldspar and such man-made materials as concrete, glass, paper, plastics, and rubbers.
The word polymer designates an unspecified number of monomer units. When the number of monomers is very large, the compound is sometimes called a high polymer. Polymers are not restricted to monomers of the same chemical composition or molecular weight and structure. Some natural polymers are composed of one kind of monomer. Most natural and synthetic polymers, however, are made up of two or more different types of monomers; such polymers are known as copolymers.
Natural polymers: organic and inorganic
Organic polymers play a crucial role in living things, providing basic structural materials and participating in vital life processes. For example, the solid parts of all plants are made up of polymers. These include cellulose, lignin, and various resins. Cellulose is a polysaccharide, a polymer that is composed of sugar molecules. Lignin consists of a complicated three-dimensional network of polymers. Wood resins are polymers of a simple hydrocarbon, isoprene. Another familiar isoprene polymer is rubber.
Other important natural polymers include the proteins, which are polymers of amino acids, and the nucleic acids, which are polymers of nucleotides—complex molecules composed of nitrogen-containing bases, sugars, and phosphoric acid. The nucleic acids carry genetic information in the cell. Starches, important sources of food energy derived from plants, are natural polymers composed of glucose.
Many inorganic polymers also are found in nature, including diamond and graphite. Both are composed of carbon. In diamond, carbon atoms are linked in a three-dimensional network that gives the material its hardness. In graphite, used as a lubricant and in pencil “leads,” the carbon atoms link in planes that can slide across one another.
Synthetic polymers
Synthetic polymers are produced in different types of reactions. Many simple hydrocarbons, such as ethylene and propylene, can be transformed into polymers by adding one monomer after another to the growing chain. Polyethylene, composed of repeating ethylene monomers, is an addition polymer. It may have as many as 10,000 monomers joined in long coiled chains. Polyethylene is crystalline, translucent, and thermoplastic—i.e., it softens when heated. It is used for coatings, packaging, molded parts, and the manufacture of bottles and containers. Polypropylene is also crystalline and thermoplastic but is harder than polyethylene. Its molecules may consist of from 50,000 to 200,000 monomers. This compound is used in the textile industry and to make molded objects.
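A rough sense of the molar masses implied by these chain lengths can be had by multiplying the number of repeat units by the monomer's molar mass (end groups are negligible at these lengths). The short Python sketch below (my own illustration, not from the source) applies this to the figures quoted above.

MONOMER_MASS = {               # approximate molar mass of one repeat unit, g/mol
    "ethylene (C2H4)": 28.05,
    "propylene (C3H6)": 42.08,
}

def polymer_mass(monomer, n_units):
    """Approximate molar mass (g/mol) of a chain of n_units repeat units, ignoring end groups."""
    return n_units * MONOMER_MASS[monomer]

print(polymer_mass("ethylene (C2H4)", 10_000))    # ~280,500 g/mol for a 10,000-unit polyethylene chain
print(polymer_mass("propylene (C3H6)", 50_000))   # ~2,104,000 g/mol for a 50,000-unit polypropylene chain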
Other addition polymers include polybutadiene, polyisoprene, and polychloroprene, which are all important in the manufacture of synthetic rubbers. Some polymers, such as polystyrene, are glassy and transparent at room temperature, as well as being thermoplastic. Polystyrene can be coloured any shade and is used in the manufacture of toys and other plastic objects.
If one hydrogen atom in ethylene is replaced by a chlorine atom, vinyl chloride is produced. This polymerizes to polyvinyl chloride (PVC), a colourless, hard, tough, thermoplastic material that can be manufactured in a number of forms, including foams, films, and fibres. Vinyl acetate, produced by the reaction of ethylene and acetic acid, polymerizes to amorphous, soft resins used as coatings and adhesives. It copolymerizes with vinyl chloride to produce a large family of thermoplastic materials.
Many important polymers have oxygen or nitrogen atoms, along with those of carbon, in the backbone chain. Among such macromolecular materials with oxygen atoms are polyacetals. The simplest polyacetal is polyformaldehyde. It has a high melting point and is crystalline and resistant to abrasion and the action of solvents. Acetal resins are more like metal than are any other plastics and are used in the manufacture of machine parts such as gears and bearings.
A linear polymer characterized by a repetition of ester groups along the backbone chain is called a polyester. Open-chain polyesters are colourless, crystalline, thermoplastic materials. Those with high molecular weights (10,000 to 15,000) are employed in the manufacture of films, molded objects, and fibres such as Dacron.
The polyamides include the naturally occurring proteins casein, found in milk, and zein, found in corn (maize), from which plastics, fibres, adhesives, and coatings are made. Among the synthetic polyamides are the urea-formaldehyde resins, which are thermosetting. They are used to produce molded objects and as adhesives and coatings for textiles and paper. Also important are the polyamide resins known as nylons. They are strong, resistant to heat and abrasion, noncombustible, and nontoxic, and they can be coloured. Their best-known use is as textile fibres, but they have many other applications.
Another important family of synthetic organic polymers is formed of linear repetitions of the urethane group. Polyurethanes are employed in making elastomeric fibres known as spandex and in the production of coating bases and soft and rigid foams.
A different class of polymers are the mixed organic-inorganic compounds. The most important representatives of this polymer family are the silicones. Their backbone consists of alternating silicon and oxygen atoms with organic groups attached to each of the silicon atoms. Silicones with low molecular weight are oils and greases. Higher-molecular-weight species are versatile elastic materials that remain soft and rubbery at very low temperatures. They are also relatively stable at high temperatures.
Fluorocarbon-containing polymers, known as fluoropolymers, are made up of carbon–fluorine bonds, which are highly stable and render the compound resistant to solvents. The nature of carbon–fluorine bonding further imparts a nonstick quality to fluoropolymers; this is most widely evident in the polytetrafluoroethylene (PTFE) Teflon.
Polymer chemistry
The study of such materials lies within the domain of polymer chemistry. The investigation of natural polymers overlaps considerably with biochemistry, but the synthesis of new polymers, the investigation of polymerization processes, and the characterization of the structure and properties of polymeric materials all pose unique problems for polymer chemists.
Polymer chemists have designed and synthesized polymers that vary in hardness, flexibility, softening temperature, solubility in water, and biodegradability. They have produced polymeric materials that are as strong as steel yet lighter and more resistant to corrosion. Oil, natural gas, and water pipelines are now routinely constructed of plastic pipe. In recent years, automakers have increased their use of plastic components to build lighter vehicles that consume less fuel. Other industries such as those involved in the manufacture of textiles, rubber, paper, and packaging materials are built upon polymer chemistry.
Besides producing new kinds of polymeric materials, researchers are concerned with developing special catalysts that are required by the large-scale industrial synthesis of commercial polymers. Without such catalysts, the polymerization process would be very slow in certain cases.
Summary
A polymer is a substance or material consisting of very large molecules, or macromolecules, composed of many repeating subunits. Due to their broad spectrum of properties, both synthetic and natural polymers play essential and ubiquitous roles in everyday life. Polymers range from familiar synthetic plastics such as polystyrene to natural biopolymers such as DNA and proteins that are fundamental to biological structure and function. Polymers, both natural and synthetic, are created via polymerization of many small molecules, known as monomers. Their consequently large molecular mass, relative to small molecule compounds, produces unique physical properties including toughness, high elasticity, viscoelasticity, and a tendency to form amorphous and semicrystalline structures rather than crystals.
The term "polymer" derives from the Greek word (polus, meaning "many, much") and (meros, meaning "part"). The term was coined in 1833 by Jöns Jacob Berzelius, though with a definition distinct from the modern IUPAC definition. (The International Union of Pure and Applied Chemistry (IUPAC)). The modern concept of polymers as covalently bonded macromolecular structures was proposed in 1920 by Hermann Staudinger, who spent the next decade finding experimental evidence for this hypothesis.
Polymers are studied in the fields of polymer science (which includes polymer chemistry and polymer physics), biophysics, and materials science and engineering. Historically, products arising from the linkage of repeating units by covalent chemical bonds have been the primary focus of polymer science. An important emerging area now focuses on supramolecular polymers formed by non-covalent links. The polyisoprene of latex rubber is an example of a natural polymer, and the polystyrene of Styrofoam is an example of a synthetic polymer. In biological contexts, essentially all biological macromolecules—i.e., proteins (polyamides), nucleic acids (polynucleotides), and polysaccharides—are purely polymeric, or are composed in large part of polymeric components.
1188) Sealing wax
Sealing wax is a wax material that, once melted and applied, hardens quickly (to paper, parchment, ribbons and wire, and other material), forming a bond that is difficult to separate without noticeable tampering. Wax is used to verify that something such as a document is unopened, to verify the sender's identity, for example with a signet ring, and as decoration. Sealing wax can be used to take impressions of other seals. Wax was used to seal letters closed and later, from about the 16th century, envelopes. Before sealing wax, the Romans used bitumen for this purpose.
Composition
Formulas vary, but there was a major shift after European trade with the Indies opened. In the Middle Ages sealing wax was typically made of beeswax and "Venice turpentine", a greenish-yellow resinous extract of the European larch tree. The earliest such wax was uncoloured; later the wax was coloured red with vermilion. From the 16th century it was compounded of various proportions of shellac, turpentine, resin, chalk or plaster, and colouring matter (often vermilion or red lead), but not necessarily beeswax. The proportion of chalk varied: coarser grades were used to seal wine bottles and fruit preserves, finer grades for documents. In some situations, such as large seals on public documents, beeswax was used. On occasion, sealing wax has historically been perfumed with ambergris, musk and other scents.
By 1866 many colours were available: gold (using mica), blue (using smalt or verditer), black (using lamp black), white (using lead white), yellow (using the mercuric mineral turpeth, also known as Schuetteite), green (using verdigris) and so on. Some users, such as the British Crown, assigned different colours to different types of documents. Today a range of synthetic colours are available.
Method of application
Sealing wax is available in the form of sticks, sometimes with a wick, or as granules. The stick is melted at one end (but not ignited or blackened), or the granules heated in a spoon, normally using a flame, and then placed where required, usually on the flap of an envelope. While the wax is still soft and warm, the seal (preferably at the same temperature as the wax, for the best impression) should be quickly and firmly pressed into it and released.
Modern use
At the end of the 19th century and in the first half of the 20th century, sealing wax was used in laboratories as a vacuum cement. It was gradually replaced by other materials such as plasticine, but according to Nobel laureate Patrick Blackett, "at one time it might have been hard to find in an English laboratory an apparatus which did not use red Bank of England sealing-wax as a vacuum cement."
Sealing wax remains in use today, in both traditional and new applications. Traditional sealing wax candles are produced in Canada, Spain, France, Italy and Scotland, with formulations similar to those used historically.
Since the advent of the postal system, the use of sealing wax has become more a matter of ceremony than of security. Modern mailing has required new styles of wax that allow a sealed item to travel through the post without damage to or removal of the seal. These new waxes are flexible for mailing and are referred to as glue-gun sealing wax, faux sealing wax and flexible sealing wax.
Summary
Sealing wax is a substance that was formerly in wide use for sealing letters and attaching impressions of seals to documents. In the Middle Ages it consisted of a mixture of beeswax, turpentine, and coloring matter. Lac from Indonesia eventually replaced the beeswax. The wax mixture was poured into molds. The molds were then held over the article to be sealed and heat was applied. Melted wax dropped onto the article and was pressed with a die containing the seal.
In medieval times, when the principal use of sealing wax was for attaching the impression of seals to official documents, the composition used consisted of a mixture of Venice turpentine, beeswax and colouring matter, usually vermilion. The preparation now employed contains no wax. Fine red stationery sealing wax is composed of about seven parts by weight of shellac, four of Venice turpentine, and three to four of vermilion. The resins are melted together in an earthenware pot over a moderate fire, and the colouring matter is added slowly with careful stirring. The mass, when taken from the fire, is poured into oiled tin moulds of the form of the sticks required, and when hard the sticks are polished by passing them rapidly over a charcoal fire, or through a spirit flame, which melts the superficial film. For the brightest qualities of sealing wax bleached lac is employed, and a proportion of perfuming matter—storax or balsam of Peru—is added. In the commoner qualities considerable admixtures of chalk, carbonate of magnesia, baryta white or other earthy matters are employed, and for the various colours appropriate mineral pigments. In inferior waxes ordinary resin takes the place of lac, and the dragon gum of Australia (from Xanthorrhoea hastilis) and other resins are similarly substituted. Such waxes, used for bottling, parcelling and other coarser applications, run thin when heated and are comparatively brittle, whereas fine wax should soften slowly and is tenacious and adhesive.
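As a rough illustration of the parts-by-weight recipe above, the Python sketch below scales the seven parts shellac, four parts Venice turpentine and three to four parts vermilion to an assumed 500 g batch; the batch size and the choice of 3.5 parts vermilion are illustrative assumptions, not a manufacturing specification.

    # Scale a parts-by-weight recipe to a chosen batch size (all figures illustrative)
    parts_by_weight = {"shellac": 7.0, "Venice turpentine": 4.0, "vermilion": 3.5}
    batch_grams = 500.0                      # assumed total batch size

    total_parts = sum(parts_by_weight.values())
    for ingredient, parts in parts_by_weight.items():
        grams = batch_grams * parts / total_parts
        print(f"{ingredient}: {grams:.0f} g")
    # With 500 g total: roughly 241 g shellac, 138 g turpentine, 121 g vermilion.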
1189) Talc
Talc, or talcum, is a clay mineral, composed of hydrated magnesium silicate with the chemical formula Mg3Si4O10(OH)2. Talc in powdered form, often combined with corn starch, is used as baby powder. This mineral is used as a thickening agent and lubricant; is an ingredient in ceramics, paint, and roofing material; and is a main ingredient in many cosmetics. It occurs as foliated to fibrous masses, and in an exceptionally rare crystal form. It has a perfect basal cleavage and an uneven flat fracture, and it is foliated with a two-dimensional platy form.
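As a quick check on the formula Mg3Si4O10(OH)2, the Python sketch below computes talc's approximate molar mass and the mass fraction of magnesium from rounded standard atomic weights; the atomic weights are textbook values, not figures taken from this article.

    # Molar mass of talc, Mg3Si4O10(OH)2, from rounded standard atomic weights
    atomic_mass = {"Mg": 24.305, "Si": 28.086, "O": 15.999, "H": 1.008}
    composition = {"Mg": 3, "Si": 4, "O": 12, "H": 2}   # O10 plus two O from the (OH)2 groups

    molar_mass = sum(atomic_mass[el] * n for el, n in composition.items())
    print(f"Molar mass of talc: {molar_mass:.1f} g/mol")    # about 379 g/mol

    mg_fraction = composition["Mg"] * atomic_mass["Mg"] / molar_mass
    print(f"Magnesium by mass: {mg_fraction:.1%}")          # roughly 19%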
The Mohs scale of mineral hardness, based on scratch hardness comparison, defines value 1 as the hardness of talc, the softest mineral. When scraped on a streak plate, talc produces a white streak; though this indicator is of little importance, because most silicate minerals produce a white streak. Talc is translucent to opaque, with colors ranging from whitish grey to green with a vitreous and pearly luster. Talc is not soluble in water, and is slightly soluble in dilute mineral acids.
Soapstone is a metamorphic rock composed predominantly of talc.
Formation
Talc dominantly forms from the metamorphism of magnesian minerals such as serpentine, pyroxene, amphibole, and olivine, in the presence of carbon dioxide and water. This is known as "talc carbonation" or "steatization" and produces a suite of rocks known as talc carbonates.
Talc is also found as a diagenetic mineral in sedimentary rocks where it can form from the transformation of metastable hydrated magnesium-clay precursors such as kerolite, sepiolite, or stevensite that can precipitate from marine and lake water in certain conditions.
Talc can also form, together with kyanite, from the reaction of aluminous minerals such as chlorite with quartz during high-pressure metamorphism. In this reaction, the ratio of talc to kyanite depends on aluminium content, with more aluminous rocks favoring production of kyanite. This is typically associated with high-pressure, low-temperature minerals such as phengite, garnet, and glaucophane within the lower blueschist facies. Such rocks are typically white, friable, and fibrous, and are known as whiteschist.
Talc is a trioctahedral layered mineral; its structure is similar to pyrophyllite, but with magnesium in the octahedral sites of the composite layers.
Occurrence
Talc is a common metamorphic mineral in metamorphic belts that contain ultramafic rocks, such as soapstone (a high-talc rock), and within whiteschist and blueschist metamorphic terranes. Prime examples of whiteschists include the Franciscan Metamorphic Belt of the western United States, the western European Alps especially in Italy, certain areas of the Musgrave Block, and some collisional orogens such as the Himalayas, which stretch along Pakistan, India, Nepal, and Bhutan.
Talc carbonate ultramafics are typical of many areas of the Archaean cratons, notably the komatiite belts of the Yilgarn Craton in Western Australia. Talc-carbonate ultramafics are also known from the Lachlan Fold Belt, eastern Australia, from Brazil, the Guiana Shield, and from the ophiolite belts of Turkey, Oman, and the Middle East.
China is the world's leading producer of talc and steatite, with an output of about 2.2 million tonnes (2016), which accounts for 30% of total global output. The other major producers are Brazil (12%), India (11%), the U.S. (9%), France (6%), Finland (4%), Italy, Russia, Canada, and Austria (2% each).
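The percentages above can be turned into rough tonnages. The Python sketch below assumes, as the paragraph implies, that China's roughly 2.2 million tonnes corresponds to about 30% of world output and scales the other shares accordingly; the result is an order-of-magnitude estimate, not reported production data.

    # Back-of-the-envelope conversion of production shares into approximate tonnages
    china_output_mt = 2.2     # million tonnes (2016)
    china_share = 0.30        # China's stated share of world output

    world_output_mt = china_output_mt / china_share
    print(f"Implied world output: {world_output_mt:.1f} million tonnes")   # about 7.3 Mt

    shares = {"Brazil": 0.12, "India": 0.11, "United States": 0.09,
              "France": 0.06, "Finland": 0.04}
    for country, share in shares.items():
        print(f"{country}: about {world_output_mt * share:.2f} million tonnes")
    # e.g. Brazil ~0.88 Mt, India ~0.81 Mt, U.S. ~0.66 Mt under these assumptions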
Notable economic talc occurrences include the Mount Seabrook talc mine, Western Australia, formed upon a polydeformed, layered ultramafic intrusion. The France-based Luzenac Group is the world's largest supplier of mined talc. Its largest talc mine at Trimouns near Luzenac in southern France produces 400,000 tonnes of talc per year.
Uses
The structure of talc is composed of Si2O5 sheets with magnesium sandwiched between sheets in octahedral sites.
Talc is used in many industries, including paper making, plastic, paint and coatings (e.g. for metal casting molds), rubber, food, electric cable, pharmaceuticals, cosmetics, and ceramics. A coarse grayish-green high-talc rock is soapstone or steatite, used for stoves, sinks, electrical switchboards, etc. It is often used for surfaces of laboratory table tops and electrical switchboards because of its resistance to heat, electricity and acids.
In finely ground form, talc finds use as a cosmetic (talcum powder), as a lubricant, and as a filler in paper manufacture. It is used to coat the insides of inner tubes and rubber gloves during manufacture to keep the surfaces from sticking. Talcum powder, with heavy refinement, has been used in baby powder, an astringent powder used to prevent diaper rash. The American Academy of Pediatrics recommends that parents not use baby powder because it poses a risk of respiratory problems, including breathing trouble and serious lung damage if the baby inhales it. The small size of the particles makes it difficult to keep them out of the air while applying the powder. Zinc oxide-based ointments are a much safer alternative.
It is also often used in basketball to keep a player's hands dry. Most tailor's chalk, or French chalk, is talc, as is the chalk often used for welding or metalworking.
Talc is also used as food additive or in pharmaceutical products as a glidant. In medicine, talc is used as a pleurodesis agent to prevent recurrent pleural effusion or pneumothorax. In the European Union, the additive number is E553b.
Talc may be used in the processing of white rice as a buffing agent in the polishing stage.
Due to its low shear strength, talc is one of the oldest known solid lubricants. Talc is also used to a limited extent as a friction-reducing additive in lubricating oils.
Talc is widely used in the ceramics industry in both bodies and glazes. In low-fire art-ware bodies, it imparts whiteness and increases thermal expansion to resist crazing. In stonewares, small percentages of talc are used to flux the body and therefore improve strength and vitrification. It is a source of MgO flux in high-temperature glazes (to control melting temperature). It is also employed as a matting agent in earthenware glazes and can be used to produce magnesia mattes at high temperatures.
Patents are pending on the use of magnesium silicate as a cement substitute. Its production is less energy-intensive than that of ordinary Portland cement (requiring heating to around 650 °C for talc, compared with 1500 °C for limestone to produce Portland cement), and it absorbs far more carbon dioxide as it hardens. This results in a negative carbon footprint overall, as the cement substitute removes 0.6 tonnes of CO2 per tonne used, compared with a positive footprint of 0.4 tonnes per tonne of conventional cement.
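Using the per-tonne figures quoted above, the short Python sketch below compares the overall CO2 balance of the magnesium silicate substitute with that of conventional Portland cement for an assumed 1,000-tonne job; the project size is an illustrative assumption.

    # CO2 balance comparison using the per-tonne figures from the paragraph above
    co2_removed_per_tonne_substitute = 0.6   # tonnes CO2 absorbed per tonne of substitute
    co2_emitted_per_tonne_portland = 0.4     # tonnes CO2 emitted per tonne of Portland cement
    project_tonnes = 1_000.0                 # assumed amount of cement required

    substitute_balance = -co2_removed_per_tonne_substitute * project_tonnes
    portland_balance = co2_emitted_per_tonne_portland * project_tonnes
    print(f"Substitute: {substitute_balance:+.0f} t CO2 (net removal)")
    print(f"Portland:   {portland_balance:+.0f} t CO2 (net emission)")
    print(f"Swing: {portland_balance - substitute_balance:.0f} t CO2 per {project_tonnes:.0f} t of cement")
    # -> a swing of about 1,000 t CO2 per 1,000 t of cement under these figures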
Talc is used in the production of the materials that are widely used in the building interiors such as base content paints in wall coatings. Other areas that use talc to a great extent are organic agriculture, food industry, cosmetics, and hygiene products such as baby powder and detergent powder.
Talc is sometimes used as an adulterant to illegal heroin, to expand volume and weight and thereby increase its street value. With intravenous use, it may lead to pulmonary talcosis, a granulomatous inflammation in the lungs.
Sterile talc powder
Sterile talc powder (NDC 63256-200-05) is a sclerosing agent used in the procedure of pleurodesis. This can be helpful as a cancer treatment to prevent pleural effusions (an abnormal collection of fluid in the space between the lungs and the thoracic wall). It is inserted into the space via a chest tube, causing it to close up, so fluid cannot collect there. The product can be sterilized by dry heat, ethylene oxide, or gamma irradiation.
Safety
Suspicions have been raised that talc use contributes to certain types of disease, mainly cancers of the ovaries and lungs. According to the IARC, talc containing asbestos is classified as a group 1 agent (carcinogenic to humans), talc use in the perineum is classified as group 2B (possibly carcinogenic to humans), and talc not containing asbestos is classified as group 3 (unclassifiable as to carcinogenicity in humans). Reviews by Cancer Research UK and the American Cancer Society conclude that some studies have found a link, but other studies have not.
The studies discuss pulmonary issues, lung cancer, and ovarian cancer. One of these, published in 1993, was a US National Toxicology Program report, which found that cosmetic grade talc containing no asbestos-like fibres was correlated with tumor formation in rats forced to inhale talc for 6 hours a day, five days a week over at least 113 weeks. A 1971 paper found particles of talc embedded in 75% of the ovarian tumors studied. Research published in 1995 and 2000 concluded that it was plausible that talc could cause ovarian cancer, but no conclusive evidence was shown. The Cosmetic Ingredient Review Expert Panel concluded in 2015 that talc, in the concentrations currently used in cosmetics, is safe. In 2018, Health Canada issued a warning, advising against inhaling talcum powder or using it in the female perineal area.
Industrial grade
In the United States, the Occupational Safety and Health Administration and the National Institute for Occupational Safety and Health have set occupational exposure limits for respirable talc dust at 2 mg/m3 over an eight-hour workday. At levels of 1,000 mg/m3, inhalation of talc is considered immediately dangerous to life and health.
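For a sense of scale, the Python sketch below estimates the mass of respirable dust a worker would inhale over one shift at the 2 mg/m3 limit quoted above. The breathing volume of about 10 m3 of air per eight-hour workday of light work is a commonly assumed figure and is not taken from this article.

    # Order-of-magnitude inhaled-dust estimate at the occupational exposure limit
    exposure_limit_mg_per_m3 = 2.0
    air_breathed_m3_per_shift = 10.0   # assumed ventilation volume for light work over 8 hours

    inhaled_mg = exposure_limit_mg_per_m3 * air_breathed_m3_per_shift
    print(f"Approximate dust inhaled per shift at the limit: {inhaled_mg:.0f} mg")
    # -> on the order of 20 mg of respirable dust per 8-hour shift under these assumptions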
Food grade
The United States Food and Drug Administration considers talc (magnesium silicate) generally recognized as safe (GRAS) to use as an anticaking agent in table salt in concentrations smaller than 2%.
Association with asbestos
One particular issue with commercial use of talc is its frequent co-location in underground deposits with asbestos ore. Asbestos is a general term for different types of fibrous silicate minerals, desirable in construction for their heat resistant properties. There are six varieties of asbestos; the most common variety in manufacturing, white asbestos, is in the serpentine family. Serpentine minerals are sheet silicates; although not in the serpentine family, talc is also a sheet silicate, with two sheets connected by magnesium cations. The frequent co-location of talc deposits with asbestos may result in contamination of mined talc with white asbestos, which poses serious health risks when dispersed into the air and inhaled. Stringent quality control since 1976, including separating cosmetic- and food-grade talc from "industrial"-grade talc, has largely eliminated this issue, but it remains a potential hazard requiring mitigation in the mining and processing of talc. A 2010 US FDA survey failed to find asbestos in a variety of talc-containing products. A 2018 Reuters investigation asserted that pharmaceuticals company Johnson & Johnson knew for decades that there was asbestos in its baby powder, and in 2020 the company stopped selling its baby powder in the US and Canada.
Litigation
In 2006 the International Agency for Research on Cancer classified talcum powder as a possible human carcinogen if used in the female genital area. Yet no federal agency in the US acted to remove talcum powder from the market or add warnings.
In February 2016, as the result of a lawsuit against Johnson & Johnson (J&J), a St. Louis jury awarded $72 million to the family of an Alabama woman who died from ovarian cancer. The family claimed that the use of talcum powder was responsible for her cancer.
In May 2016, a South Dakota woman was awarded $55 million as the result of another lawsuit against J&J. The woman had used Johnson & Johnson's Baby Powder for more than 35 years before being diagnosed with ovarian cancer in 2011.
In October 2016, a St. Louis jury awarded $70.1 million to a Californian woman with ovarian cancer who had used Johnson's Baby Powder for 45 years.
In August 2017, a Los Angeles jury awarded $417 million to a Californian woman, Eva Echeverria, who developed ovarian cancer as a "proximate result of the unreasonably dangerous and defective nature of talcum powder", her lawsuit against Johnson & Johnson stated. On 20 October 2017, Los Angeles Superior Court judge Maren Nelson dismissed the verdict. The judge stated that Echeverria had shown there is "an ongoing debate in the scientific and medical community about whether talc more probably than not causes ovarian cancer and thus (gives) rise to a duty to warn", but not enough to sustain the jury's imposition of liability against Johnson & Johnson, and concluded that Echeverria did not adequately establish that talc causes ovarian cancer.
In July 2018, a court in St. Louis awarded a $4.7bn claim ($4.14bn in punitive damages and $550m in compensatory damages) against J&J to 22 claimant women, concluding that the company had suppressed evidence of asbestos in its products for more than four decades.
At least 1,200 to 2,000 other talcum powder-related lawsuits are pending.
Summary
Talc is a common silicate mineral that is distinguished from almost all other minerals by its extreme softness (it has the lowest rating on the Mohs scale of hardness). Its soapy or greasy feel accounts for the name soapstone given to compact aggregates of talc and other rock-forming minerals. Dense aggregates of high-purity talc are called steatite.
Since ancient times, soapstones have been employed for carvings, ornaments, and utensils; Assyrian cylinder seals, Egyptian scarabs, and Chinese statuary are notable examples. Soapstones are resistant to most reagents and to moderate heat; thus, they are especially suitable for sinks and countertops. Talc is also used in lubricants, leather dressings, toilet and dusting powders, and certain marking pencils. It is used as a filler in ceramics, paint, paper, roofing materials, plastic, and rubber; as a carrier in insecticides; and as a mild abrasive in the polishing of cereal grains such as rice and corn.
Talc is found as a metamorphic mineral in veins, in foliated masses, and in certain rocks. It is often associated with serpentine, tremolite, forsterite, and almost always with carbonates (calcite, dolomite, or magnesite) in the lower metamorphic facies. It also occurs as an alteration product, as from tremolite or forsterite.
One of the remarkable features of talc is its simple, almost constant composition; talc is a basic magnesium silicate, Mg3Si4O10(OH)2. Unlike other silicates, even closely related ones, talc appears to be unable to accept iron or aluminum into its structure to form chemical-replacement series, even though an iron analog of talc is known, and the structurally related chlorite forms at least a partial series between iron and magnesium end-members. Talc is distinguishable from pyrophyllite chemically and optically.
1190) Trachea
The trachea, also known as the windpipe, is a cartilaginous tube that connects the larynx to the bronchi of the lungs, allowing the passage of air, and so is present in almost all air-breathing animals with lungs. The trachea extends from the larynx and branches into the two primary bronchi. At the top of the trachea the cricoid cartilage attaches it to the larynx. The trachea is formed by a number of horseshoe-shaped rings, joined together vertically by overlying ligaments, and by the trachealis muscle at their ends. The epiglottis closes the opening to the larynx during swallowing.
The trachea begins to form in the second month of embryonic development, becoming longer and more fixed in its position over time. It is lined with an epithelium of column-shaped cells that have hair-like extensions called cilia, with scattered goblet cells that produce protective mucins. The trachea can be affected by inflammation or infection, usually as a result of a viral illness that also affects other parts of the respiratory tract, such as the larynx and bronchi; this condition, called croup, can result in a barking cough. Infection with bacteria usually affects the trachea alone and can cause narrowing or even obstruction. Because the trachea is a major part of the respiratory tract, obstruction prevents air from entering the lungs, and a tracheostomy may be required. Additionally, if mechanical ventilation is needed during surgery while a person is sedated, a tube is inserted into the trachea; this procedure is called intubation.
The word trachea is used to define a very different organ in invertebrates than in vertebrates. Insects have an open respiratory system made up of spiracles, tracheae, and tracheoles to transport metabolic gases to and from tissues.
The trachea, commonly known as the windpipe, is a tube about 4 inches long and less than an inch in diameter in most people. The trachea begins just under the larynx (voice box) and runs down behind the breastbone (sternum). The trachea then divides into two smaller tubes called bronchi: one bronchus for each lung.
Structure
An adult's trachea has an inner diameter of about 1.5 to 2 centimetres (0.59 to 0.79 in) and a length of about 10 to 11 centimetres (3.9 to 4.3 in); it is wider in males than in females. It begins at the bottom of the larynx and ends at the carina, the point where the trachea branches into the left and right main bronchi. The trachea is surrounded by 16–20 rings of hyaline cartilage; these 'rings' are 4 millimetres high in the adult, incomplete and C-shaped. Ligaments connect the rings. The trachealis muscle connects the ends of the incomplete rings and runs along the back wall of the trachea. The adventitia, the outermost layer of connective tissue surrounding the hyaline cartilage, also contributes to the trachea's ability to bend and stretch with movement.
The trachea begins at the lower edge of the cricoid cartilage of the larynx and ends at the carina, the point where the trachea branches into left and right main bronchi. The trachea begins level with the sixth cervical vertebra (C6), and the carina is found at the level of the fourth thoracic vertebra (T4), although its position may change with breathing.
Nearby structures
The trachea passes by many structures of the neck and chest (thorax) along its course.
In front of the upper trachea lie connective tissue and skin. Several other structures pass over or sit on the trachea: the jugular arch, which joins the two anterior jugular veins, sits in front of the upper part of the trachea. The sternohyoid and sternothyroid muscles stretch along its length. The thyroid gland also stretches across the upper trachea, with the isthmus overlying the second to fourth rings, and the lobes stretching to the level of the fifth or sixth cartilage. The blood vessels of the thyroid rest on the trachea next to the isthmus; the superior thyroid arteries join just above it, and the inferior thyroid veins below it. In front of the lower trachea lie the manubrium of the sternum and the remnants of the thymus in adults. To the front left lie the large blood vessels: the aortic arch and its branches, the left common carotid artery and the brachiocephalic trunk, and the left brachiocephalic vein. The deep cardiac plexus and lymph nodes are also positioned in front of the lower trachea.
Behind the trachea, along its length, sits the oesophagus, followed by connective tissue and the vertebral column. To its sides run the carotid arteries and inferior thyroid arteries; and to its sides on its back surface run the recurrent laryngeal nerves in the upper trachea, and the vagus nerves in the lower trachea.
The trachealis muscle contracts during coughing, reducing the size of the lumen of the trachea.
The thyroid gland also lies on top of the trachea, and lies below the cricoid cartilage. The isthmus of the thyroid, which connects both wings, lies directly in front, whereas the wings lie on the front and stretch to the side.
Blood and lymphatic supply
The upper part of the trachea receives and drains blood through the inferior thyroid arteries and veins; the lower trachea receives blood from the bronchial arteries. Arteries that supply the trachea do so via small branches that approach it from the sides. As the branches approach the wall of the trachea, they split into inferior and superior branches, which join with the branches of the arteries above and below; these then split into branches that supply the anterior and posterior parts of the trachea. The inferior thyroid arteries arise just below the isthmus of the thyroid, which sits atop the trachea. These arteries join (anastomose) with ascending branches of the bronchial arteries, which are direct branches from the aorta, to supply blood to the trachea. The lymphatic vessels of the trachea drain into the pretracheal nodes that lie in front of the trachea and the paratracheal lymph nodes that lie beside it.
Development
In the fourth week of development of the human embryo, as the respiratory bud grows, the trachea separates from the foregut through the formation of tracheoesophageal ridges, which fuse to form the tracheoesophageal septum. This septum separates the future trachea from the oesophagus and divides the foregut tube into the laryngotracheal tube. By the start of the fifth week, the left and right main bronchi have begun to form, initially as buds at the terminal end of the trachea.
The trachea is no more than 4 mm in diameter during the first year of life, expanding to its adult diameter of approximately 2 cm by late childhood. The trachea is more circular and more vertical in children than in adults, varies more in size, and also varies more in its position relative to surrounding structures.
Microanatomy
The trachea is lined with a pseudostratified layer of column-shaped cells bearing cilia. The epithelium contains goblet cells, which are glandular, column-shaped cells that produce mucins, the main component of mucus. Mucus helps to moisten and protect the airways. It lines the ciliated cells of the trachea and traps inhaled foreign particles, which the cilia then waft upward toward the larynx and then the pharynx, where the mucus can be either swallowed into the stomach or expelled as phlegm. This self-clearing mechanism is termed mucociliary clearance.
The trachea is surrounded by 16 to 20 rings of hyaline cartilage; these 'rings' are incomplete and C-shaped. Two or more of the cartilages often unite, partially or completely, and they are sometimes bifurcated at their extremities. The rings are generally highly elastic but they may calcify with age.
Function
The trachea is one part of the respiratory tree that is a conduit for air to pass through on its way to or from the alveoli of the lungs. This transmits oxygen to the body and removes carbon dioxide.
Clinical significance
Inflammation and infection
Inflammation of the trachea is known as tracheitis, usually due to an infection. It is usually caused by viral infections, with bacterial infections occurring almost entirely in children. Most commonly, infections occur with inflammation of other parts of the respiratory tract, such as the larynx and bronchi, known as croup, however bacterial infections may also affect the trachea alone, although they are often associated with a recent viral infection. Viruses that cause croup are generally the parainfluenza viruses 1–3, with influenza viruses A and B also causing croup, but usually causing more serious infections; bacteria may also cause croup and include Staphylococcus aureus, Haemophilus influenzae, Streptococcus pneumoniae and Moraxella catarrhalis. Causes of bacterial infection of the trachea are most commonly Staphylococcus aureus and Streptococcus pneumoniae. In patients who are in hospital, additional bacteria that may cause tracheitis include Escherichia coli, Klebsiella pneumoniae, and Pseudomonas aeruginosa.
A person affected with tracheitis may start with symptoms that suggest an upper respiratory tract infection such as a cough, sore throat, or coryzal symptoms such as a runny nose. Fevers may develop and an affected child may develop difficulty breathing and sepsis. Swelling of the airway can cause narrowing of the airway, causing a hoarse breathing sound called stridor, or even cause complete blockage. Unfortunately, up to 80% of people affected by bacterial tracheitis require the use of mechanical ventilation, and treatment may include endoscopy for the purposes of acquiring microbiological specimens for culture and sensitivity, as well as removal of any dead tissue associated with the infection. Treatment in such situations usually includes antibiotics.
Narrowing
The trachea may be narrowed or compressed, usually as a result of enlarged nearby lymph nodes; cancers of the trachea or nearby structures; large thyroid goitres; or, rarely, other processes such as unusually swollen blood vessels. Scarring from tracheobronchial injury or intubation, or inflammation associated with granulomatosis with polyangiitis, may also cause a narrowing of the trachea (tracheal stenosis). Obstruction invariably causes a harsh breathing sound known as stridor. A camera inserted via the mouth down into the trachea, a procedure called bronchoscopy, may be used to investigate the cause of an obstruction. Management of obstructions depends on the cause. Obstructions resulting from malignancy may be managed with surgery, chemotherapy or radiotherapy. A stent may be inserted over the obstruction. Benign lesions, such as narrowing resulting from scarring, are likely to be surgically excised.
One cause of narrowing is tracheomalacia, which is the tendency for the trachea to collapse when there is increased external pressure, such as when airflow is increased during breathing in or out, due to decreased compliance. It can be due to congenital causes, or due to things that develop after birth, such as compression from nearby masses or swelling, or trauma. Congenital tracheomalacia can occur by itself or in association with other abnormalities such as bronchomalacia or laryngomalacia, and abnormal connections between the trachea and the oesophagus, amongst others. Congenital tracheomalacia often improves without specific intervention; when required, interventions may include beta agonists and muscarinic agonists, which enhance the tone of the smooth muscle surrounding the trachea; positive pressure ventilation, or surgery, which may include the placement of a stent, or the removal of the affected part of the trachea. In dogs, particularly miniature dogs and toy dogs, tracheomalacia, as well as bronchomalacia, can lead to tracheal collapse, which often presents with a honking goose-like cough.
Intubation
Tracheal intubation refers to the insertion of a tube down the trachea. This procedure is commonly performed during surgery to ensure a person receives enough oxygen when sedated. The tube is connected to a machine that monitors airflow, oxygenation and several other metrics. This is often one of the responsibilities of an anaesthetist during surgery.
In an emergency, or when tracheal intubation is deemed impossible, a tracheotomy is often performed to insert a tube for ventilation so that the airway can be kept open; it may also be needed for particular types of surgery. The provision of the opening via a tracheotomy is called a tracheostomy. Another procedure that can be carried out in an emergency is a cricothyrotomy.
Congenital disorders
Tracheal agenesis is a rare birth defect in which the trachea fails to develop. The defect is usually fatal though sometimes surgical intervention has been successful.
A tracheoesophageal fistula is a congenital defect in which the trachea and esophagus are abnormally connected (a fistula). This is because of abnormalities in the separation between the trachea and oesophagus during development. This occurs in approximately 1 in 3000 births, and the most common abnormality is a separation of the upper and lower ends of the oesophagus, with the upper end finishing in a closed pouch. Other abnormalities may be associated with this, including cardiac abnormalities, or VACTERL syndrome. Such fistulas may be detected before a baby is born because of excess amniotic fluid; after birth, they are often associated with pneumonitis and pneumonia because of aspiration of food contents. Congenital fistulas are often treated by surgical repair. In adults, fistulas may occur because of erosion into the trachea from nearby malignant tumours, which erode into both the trachea and the oesophagus. Initially, these often result in coughing from swallowed contents of the oesophagus that are aspirated through the trachea, often progressing to fatal pneumonia; unfortunately, there is rarely a curative treatment. A tracheo-oesophageal puncture is a surgically created hole between the trachea and the esophagus in a person who has had their larynx removed. Air travels upwards from the surgical connection to the upper oesophagus and the pharynx, creating vibrations that produce sound that can be used for speech. The purpose of the puncture is to restore a person's ability to speak after the vocal cords have been removed.
Sometimes as an anatomical variation one or more of the tracheal rings are formed as complete rings, rather than horseshoe shaped rings. These O rings are smaller than the normal C-shaped rings and can cause narrowing (stenosis) of the trachea, resulting in breathing difficulties. An operation called a slide tracheoplasty can open up the rings and rejoin them as wider rings, shortening the length of the trachea. Slide tracheoplasty is said to be the best option in treating tracheal stenosis.
Mounier-Kuhn syndrome is a rare congenital disorder of an abnormally enlarged trachea, characterised by absent elastic fibres, smooth muscle thinning, and a tendency to get recurrent respiratory tract infections.
Replacement
Since 2008, operations have experimentally replaced tracheas with ones grown from stem cells or with synthetic substitutes; however, these procedures remain experimental and there is no standardised method. Difficulty in ensuring an adequate blood supply to the replaced trachea is considered a major challenge to any replacement. Additionally, no evidence has been found to support the placement of stem cells taken from bone marrow on the trachea as a way of stimulating tissue regeneration, and such a method remains hypothetical.
In January 2021, surgeons at Mount Sinai Hospital in New York performed the first complete trachea transplantation. The 18-hour procedure included harvesting a trachea from a donor and implanting it in the patient, connecting numerous veins and arteries to provide sufficient blood flow to the organ.
Other animals
Allowing for variations in the length of the neck, the trachea in other mammals is, in general, similar to that in humans. Generally, it is also similar to the reptilian trachea.
Vertebrates
In birds, the trachea runs from the pharynx to the syrinx, from which the primary bronchi diverge. Swans have an unusually elongated trachea, part of which is coiled beneath the sternum; this may act as a resonator to amplify sound. In some birds, the tracheal rings are complete, and may even be ossified.
In amphibians, the trachea is normally extremely short, and leads directly into the lungs, without clear primary bronchi. A longer trachea is, however, found in some long-necked salamanders and in caecilians. While there are irregular cartilaginous nodules on the amphibian trachea, these do not form the rings found in amniotes.
The only vertebrates to have lungs, but no trachea, are the lungfish and the Polypterus, in which the lungs arise directly from the pharynx.
Invertebrates
The word trachea is used to define a very different organ in invertebrates than in vertebrates. Insects have an open respiratory system made up of spiracles, tracheae, and tracheoles to transport metabolic gases to and from tissues. The distribution of spiracles can vary greatly among the many orders of insects, but in general each segment of the body can have only one pair of spiracles, each of which connects to an atrium and has a relatively large tracheal tube behind it. The tracheae are invaginations of the cuticular exoskeleton that branch (anastomose) throughout the body with diameters from only a few micrometres up to 0.8 mm. Diffusion of oxygen and carbon dioxide takes place across the walls of the smallest tubes, called tracheoles, which penetrate tissues and even indent individual cells. Gas may be conducted through the respiratory system by means of active ventilation or passive diffusion. Unlike vertebrates, insects do not generally carry oxygen in their haemolymph. This is one of the factors that may limit their size.
A tracheal tube may contain ridge-like circumferential rings of taenidia in various geometries such as loops or helices. Taenidia provide strength and flexibility to the trachea. In the head, thorax, or abdomen, tracheae may also be connected to air sacs. Many insects, such as grasshoppers and bees, which actively pump the air sacs in their abdomen, are able to control the flow of air through their body. In some aquatic insects, the tracheae exchange gas through the body wall directly, in the form of a gill, or function essentially as normal, via a plastron. Note that despite being internal, the tracheae of arthropods are lined with cuticular tissue and are shed during moulting (ecdysis).
The trachea is composed of about 20 rings of tough cartilage. The back part of each ring is made of muscle and connective tissue. Moist, smooth tissue called mucosa lines the inside of the trachea. The trachea widens and lengthens slightly with each breath in, returning to its resting size with each breath out.
Trachea Conditions
* Tracheal stenosis: Inflammation in the trachea can lead to scarring and narrowing of the windpipe. Surgery or endoscopy may be needed to correct the narrowing (stenosis), if severe.
* Tracheoesophageal fistula: An abnormal channel forms to connect the trachea and the esophagus. Passage of swallowed food from the esophagus into the trachea causes serious lung problems.
* Tracheal foreign body: An object is inhaled (aspirated) and lodges in the trachea or one of its branches. A procedure called bronchoscopy is usually needed to remove a foreign body from the trachea.
* Tracheal cancer: Cancer of the trachea is quite rare. Symptoms can include coughing or difficulty breathing.
* Tracheomalacia: The trachea is soft and floppy rather than rigid, usually due to a birth defect. In adults, tracheomalacia is generally caused by injury or by smoking.
* Tracheal obstruction: A tumor or other growth can compress and narrow the trachea, causing difficulty breathing. A stent or surgery is needed to open the trachea and improve breathing.
Trachea Tests
* Flexible bronchoscopy: An endoscope (flexible tube with a lighted camera on its end) is passed through the nose or mouth into the trachea. Using bronchoscopy, a doctor can examine the trachea and its branches.
* Rigid bronchoscopy: A rigid metal tube is introduced through the mouth into the trachea. Rigid bronchoscopy is often more effective than flexible bronchoscopy, but it requires deep anesthesia.
* Computed tomography (CT scan): A CT scanner takes a series of X-rays, and a computer creates detailed images of the trachea and nearby structures.
* Magnetic resonance imaging (MRI scan): An MRI scanner uses radio waves in a magnetic field to create images of the trachea and nearby structures.
* Chest X-ray: A plain X-ray can tell if the trachea is deviated to either side of the chest. An X-ray might also identify masses or foreign bodies.
Trachea Treatments
* Tracheostomy: A small hole is cut in the front of the trachea, through an incision in the neck. Tracheostomy is usually done for people who need a long period of mechanical ventilation (breathing support).
* Tracheal dilation: During bronchoscopy, a balloon can be inflated in the trachea, opening a narrowing (stenosis). Sequentially larger rings can also be used to gradually open the trachea.
* Laser therapy: Blockages in the trachea (such as from cancer) can be destroyed with a high-energy laser.
* Tracheal stenting: After dilation of a tracheal obstruction, a stent is often placed to keep the trachea open. Silicone or metal stents may be used.
* Tracheal surgery: Surgery may be best for removing certain tumors obstructing the trachea. Surgery may also correct a tracheoesophageal fistula.
* Cryotherapy: During bronchoscopy, a tool can freeze and destroy a tumor obstructing the trachea.
1191) Larynx
The larynx, commonly called the voice box, is an organ in the top of the neck involved in breathing, producing sound, and protecting the trachea against food aspiration. The opening of the larynx into the pharynx, known as the laryngeal inlet, is about 4–5 centimetres in diameter. The larynx houses the vocal cords and manipulates pitch and volume, which is essential for phonation. It is situated just below the point where the tract of the pharynx splits into the trachea and the esophagus.
Structure
The triangle-shaped larynx consists largely of cartilages that are attached to one another, and to surrounding structures, by muscles or by fibrous and elastic tissue components. The larynx is lined by a ciliated columnar epithelium except for the vocal folds. The cavity of the larynx extends from its triangle-shaped inlet, at the epiglottis, to the circular outlet at the lower border of the cricoid cartilage, where it is continuous with the lumen of the trachea. The mucous membrane lining the larynx forms two pairs of lateral folds that project inward into its cavity. The upper folds are called the vestibular folds. They are also sometimes called the false vocal cords for the rather obvious reason that they play no part in vocalization. The lower pair of folds are known as the vocal cords, which produce sounds needed for speech and other vocalizations. The slit-like space between the left and right vocal cords, called the rima glottidis, is the narrowest part of the larynx. The vocal cords and the rima glottidis are together designated as the glottis. The laryngeal cavity above the vestibular folds is called the vestibule. The middle portion of the cavity, between the vestibular folds and the vocal cords, is the ventricle of the larynx, or laryngeal ventricle. The infraglottic cavity is the open space below the glottis.
Location
In adult humans, the larynx is found in the anterior neck at the level of the cervical vertebrae C3–C6. It connects the inferior part of the pharynx (hypopharynx) with the trachea. The laryngeal skeleton consists of nine cartilages: three single (epiglottic, thyroid and cricoid) and three paired (arytenoid, corniculate, and cuneiform). The hyoid bone is not part of the larynx, though the larynx is suspended from the hyoid. The larynx extends vertically from the tip of the epiglottis to the inferior border of the cricoid cartilage. Its interior can be divided in supraglottis, glottis and subglottis.
Cartilages
There are nine cartilages, three unpaired and three paired (six individual cartilages), that support the mammalian larynx and form its skeleton.
Unpaired cartilages:
* Thyroid cartilage: This forms the Adam's apple (also called the laryngeal prominence). It is usually larger in males than in females. The thyrohyoid membrane is a ligament associated with the thyroid cartilage that connects it with the hyoid bone. It supports the front portion of the larynx.
* Cricoid cartilage: A ring of hyaline cartilage that forms the inferior wall of the larynx. It is attached to the top of the trachea. The median cricothyroid ligament connects the cricoid cartilage to the thyroid cartilage.
* Epiglottis: A large, spoon-shaped piece of elastic cartilage. During swallowing, the pharynx and larynx rise. Elevation of the pharynx widens it to receive food and drink; elevation of the larynx causes the epiglottis to move down and form a lid over the glottis, closing it off.
Paired cartilages:
* Arytenoid cartilages: Of the paired cartilages, the arytenoid cartilages are the most important because they influence the position and tension of the vocal cords. These are triangular pieces of mostly hyaline cartilage located at the posterosuperior border of the cricoid cartilage.
* Corniculate cartilages: Horn-shaped pieces of elastic cartilage located at the apex of each arytenoid cartilage.
* Cuneiform cartilages: Club-shaped pieces of elastic cartilage located anterior to the corniculate cartilages.
Muscles
The muscles of the larynx are divided into intrinsic and extrinsic muscles. The extrinsic muscles act on the region and pass between the larynx and parts around it but have their origin elsewhere; the intrinsic muscles are confined entirely within the larynx and have their origin and insertion there.
The intrinsic muscles are divided into respiratory and the phonatory muscles (the muscles of phonation). The respiratory muscles move the vocal cords apart and serve breathing. The phonatory muscles move the vocal cords together and serve the production of voice. The main respiratory muscles are the posterior cricoarytenoid muscles. The phonatory muscles are divided into adductors (lateral cricoarytenoid muscles, arytenoid muscles) and tensors (cricothyroid muscles, thyroarytenoid muscles).
Intrinsic
The intrinsic laryngeal muscles are responsible for controlling sound production.
* Cricothyroid muscle lengthens and tenses the vocal cords.
* Posterior cricoarytenoid muscles abduct and externally rotate the arytenoid cartilages, resulting in abducted vocal cords.
* Lateral cricoarytenoid muscles adduct and internally rotate the arytenoid cartilages, increasing medial compression.
* Transverse arytenoid muscle adducts the arytenoid cartilages, resulting in adducted vocal cords.
* Oblique arytenoid muscles narrow the laryngeal inlet by constricting the distance between the arytenoid cartilages.
* Thyroarytenoid muscles narrow the laryngeal inlet, shorten the vocal cords, and lower voice pitch. The internal thyroarytenoid is the portion of the thyroarytenoid that vibrates to produce sound.
Notably the only muscle capable of separating the vocal cords for normal breathing is the posterior cricoarytenoid. If this muscle is incapacitated on both sides, the inability to pull the vocal cords apart (abduct) will cause difficulty breathing. Bilateral injury to the recurrent laryngeal nerve would cause this condition. It is also worth noting that all muscles are innervated by the recurrent laryngeal branch of the vagus except the cricothyroid muscle, which is innervated by the external laryngeal branch of the superior laryngeal nerve (a branch of the vagus).
Additionally, intrinsic laryngeal muscles present a constitutive Ca2+-buffering profile that predicts a better ability to handle calcium changes in comparison to other muscles. This profile is in agreement with their function as very fast muscles with a well-developed capacity for prolonged work. Studies suggest that mechanisms involved in the prompt sequestering of Ca2+ (sarcoplasmic reticulum Ca2+-reuptake proteins, plasma membrane pumps, and cytosolic Ca2+-buffering proteins) are particularly elevated in laryngeal muscles, indicating their importance for myofiber function and protection against disease, such as Duchenne muscular dystrophy. Furthermore, the different levels of Orai1 in rat intrinsic laryngeal muscles and extraocular muscles compared with limb muscle suggest a role for store-operated calcium entry channels in those muscles' functional properties and signaling mechanisms.
Extrinsic
The extrinsic laryngeal muscles support and position the larynx within the mid-cervical region.
Extrinsic laryngeal muscles
* Sternothyroid muscles depress the larynx. (Innervated by ansa cervicalis)
* Omohyoid muscles depress the larynx. (Ansa cervicalis)
* Sternohyoid muscles depress the larynx. (Ansa cervicalis)
* Inferior constrictor muscles. (CN X)
* Thyrohyoid muscles elevate the larynx. (C1)
* Digastric elevates the larynx. (CN V3, CN VII)
* Stylohyoid elevates the larynx. (CN VII)
* Mylohyoid elevates the larynx. (CN V3)
* Geniohyoid elevates the larynx. (C1)
* Hyoglossus elevates the larynx. (CN XII)
* Genioglossus elevates the larynx. (CN XII)
Nerve supply
The larynx is innervated by branches of the vagus nerve on each side. Sensory innervation to the glottis and laryngeal vestibule is by the internal branch of the superior laryngeal nerve. The external branch of the superior laryngeal nerve innervates the cricothyroid muscle. Motor innervation to all other muscles of the larynx and sensory innervation to the subglottis is by the recurrent laryngeal nerve. While the sensory input described above is (general) visceral sensation (diffuse, poorly localized), the vocal cords also receive general somatic sensory innervation (proprioceptive and touch) by the superior laryngeal nerve.
Injury to the external laryngeal nerve causes weakened phonation because the vocal cords cannot be tightened. Injury to one of the recurrent laryngeal nerves produces hoarseness; if both are damaged, the voice may or may not be preserved, but breathing becomes difficult.
Development
In newborn infants, the larynx is initially at the level of the C2–C3 vertebrae, and is further forward and higher relative to its position in the adult body. The larynx descends as the child grows.
Function
Sound generation
Sound is generated in the larynx, and that is where pitch and volume are manipulated. The strength of expiration from the lungs also contributes to loudness.
Manipulation of the larynx is used to generate a source sound with a particular fundamental frequency, or pitch. This source sound is altered as it travels through the vocal tract, configured differently based on the position of the tongue, lips, mouth, and pharynx. The process of altering a source sound as it passes through the filter of the vocal tract creates the many different vowel and consonant sounds of the world's languages as well as tone, certain realizations of stress and other types of linguistic prosody. The larynx also has a similar function to the lungs in creating pressure differences required for sound production; a constricted larynx can be raised or lowered affecting the volume of the oral cavity as necessary in glottalic consonants.
The vocal cords can be held close together (by adducting the arytenoid cartilages) so that they vibrate. The muscles attached to the arytenoid cartilages control the degree of opening. Vocal cord length and tension can be controlled by rocking the thyroid cartilage forward and backward on the cricoid cartilage (either directly by contracting the cricothyroids or indirectly by changing the vertical position of the larynx), by manipulating the tension of the muscles within the vocal cords, and by moving the arytenoids forward or backward. This causes the pitch produced during phonation to rise or fall. In most males the vocal cords are longer and have greater mass than most females' vocal cords, producing a lower pitch.
The vocal apparatus consists of two pairs of folds, the vestibular folds (false vocal cords) and the true vocal cords. The vestibular folds are covered by respiratory epithelium, while the vocal cords are covered by stratified squamous epithelium. The vestibular folds are not responsible for sound production, but rather for resonance. The exceptions to this are found in Tibetan chanting and Kargyraa, a style of Tuvan throat singing. Both make use of the vestibular folds to create an undertone. These false vocal cords do not contain muscle, while the true vocal cords do have skeletal muscle.
Other
The most important role of the larynx is its protecting function; the prevention of foreign objects from entering the lungs by coughing and other reflexive actions. A cough is initiated by a deep inhalation through the vocal cords, followed by the elevation of the larynx and the tight adduction (closing) of the vocal cords. The forced expiration that follows, assisted by tissue recoil and the muscles of expiration, blows the vocal cords apart, and the high pressure expels the irritating object out of the throat. Throat clearing is less violent than coughing, but is a similar increased respiratory effort countered by the tightening of the laryngeal musculature. Both coughing and throat clearing are predictable and necessary actions because they clear the respiratory passageway, but both place the vocal cords under significant strain.
Another important role of the larynx is abdominal fixation, a kind of Valsalva maneuver in which the lungs are filled with air in order to stiffen the thorax so that forces applied for lifting can be translated down to the legs. This is achieved by a deep inhalation followed by the adduction of the vocal cords. Grunting while lifting heavy objects is the result of some air escaping through the adducted vocal cords ready for phonation.
Abduction of the vocal cords is important during physical exertion. The vocal cords are separated by about 8 mm (0.31 in) during normal respiration, but this width is doubled during forced respiration.
During swallowing, elevation of the posterior portion of the tongue levers (inverts) the epiglottis over the glottis' opening to prevent swallowed material from entering the larynx which leads to the lungs, and provides a path for a food or liquid bolus to "slide" into the esophagus; the hyo-laryngeal complex is also pulled upwards to assist this process. Stimulation of the larynx by aspirated food or liquid produces a strong cough reflex to protect the lungs.
In addition, intrinsic laryngeal muscles are spared in some muscle-wasting disorders, such as Duchenne muscular dystrophy, and understanding why may facilitate the development of novel strategies for the prevention and treatment of muscle wasting in a variety of clinical scenarios. ILM have a calcium regulation system profile suggestive of a better ability to handle calcium changes in comparison to other muscles, and this may provide a mechanistic insight into their unique pathophysiological properties.
Clinical significance
Disorders
Several disorders can prevent the larynx from functioning properly. Symptoms include hoarseness, loss of voice, pain in the throat or ears, and breathing difficulties.
* Acute laryngitis is the sudden inflammation and swelling of the larynx, most often caused by the common cold or by excessive shouting; it is not usually serious. Chronic laryngitis is caused by smoking, dust, frequent yelling, or prolonged exposure to polluted air and is much more serious than acute laryngitis.
* Presbylarynx is a condition in which age-related atrophy of the soft tissues of the larynx results in a weak voice and restricted vocal range and stamina. Bowing of the anterior portion of the vocal cords is found on laryngoscopy.
* Ulcers may be caused by the prolonged presence of an endotracheal tube.
* Polyps and vocal cord nodules are small bumps caused by prolonged exposure to tobacco smoke and vocal misuse, respectively.
* Two related types of cancer of the larynx, namely squamous cell carcinoma and verrucous carcinoma, are strongly associated with repeated exposure to cigarette smoke and alcohol.
* Vocal cord paresis is weakness of one or both vocal cords that can greatly impact daily life.
* Idiopathic laryngeal spasm.
* Laryngopharyngeal reflux is a condition in which acid from the stomach irritates and burns the larynx. Similar damage can occur with gastroesophageal reflux disease (GERD).
* Laryngomalacia is a very common condition of infancy, in which the soft, immature cartilage of the upper larynx collapses inward during inhalation, causing airway obstruction.
* Laryngeal perichondritis, the inflammation of the perichondrium of laryngeal cartilages, causing airway obstruction.
* Laryngeal paralysis is a condition seen in some mammals (including dogs) in which the larynx no longer opens as wide as required for the passage of air, and impedes respiration. In mild cases it can lead to exaggerated or "raspy" breathing or panting, and in serious cases can pose a considerable need for treatment.
* In Duchenne muscular dystrophy, the intrinsic laryngeal muscles (ILM) are spared from the lack of dystrophin and may serve as a useful model for studying the mechanisms of muscle sparing in neuromuscular diseases. Dystrophic ILM show a significant increase in the expression of calcium-binding proteins, which may permit better maintenance of calcium homeostasis and so explain the absence of myonecrosis. These results further support the concept that abnormal calcium buffering is involved in these neuromuscular diseases.
Treatments
Patients who have lost the use of their larynx are typically prescribed the use of an electrolarynx device. Larynx transplants are a rare procedure. The world's first successful operation took place in 1998 at the Cleveland Clinic, and the second took place in October 2010 at the University of California Davis Medical Center in Sacramento.
Other animals
Pioneering work on the structure and evolution of the larynx was carried out in the 1920s by the British comparative anatomist Victor Negus, culminating in his monumental work The Mechanism of the Larynx (1929). Negus, however, pointed out that the descent of the larynx reflected the reshaping and descent of the human tongue into the pharynx. This process is not complete until age six to eight years. Some researchers, such as Philip Lieberman, Dennis Klatt, Bart de Boer, and Kenneth Stevens, using computer-modeling techniques, have suggested that the species-specific human tongue allows the vocal tract (the airway above the larynx) to assume the shapes necessary to produce speech sounds that enhance the robustness of human speech. Sounds such as the vowels of the words see and do ('i' and 'u' in phonetic notation) have been shown to be less subject to confusion in classic studies such as the 1950 Peterson and Barney investigation of the possibilities for computerized speech recognition.
In contrast, though other species have low larynges, their tongues remain anchored in their mouths and their vocal tracts cannot produce the range of speech sounds of humans. The ability to lower the larynx transiently in some species extends the length of their vocal tract, which, as Fitch showed, creates the acoustic illusion that they are larger. Research at Haskins Laboratories in the 1960s showed that speech allows humans to achieve a vocal communication rate that exceeds the fusion frequency of the auditory system by fusing sounds together into syllables and words. The additional speech sounds that the human tongue makes possible also allow listeners to unconsciously infer the length of the vocal tract of the person who is talking, a critical element in recovering the phonemes that make up a word.
Non-mammals
Most tetrapod species possess a larynx, but its structure is typically simpler than that found in mammals. The cartilages surrounding the larynx are apparently a remnant of the original gill arches in fish, and are a common feature, but not all are always present. For example, the thyroid cartilage is found only in mammals. Similarly, only mammals possess a true epiglottis, although a flap of non-cartilaginous mucosa is found in a similar position in many other groups. In modern amphibians, the laryngeal skeleton is considerably reduced; frogs have only the cricoid and arytenoid cartilages, while salamanders possess only the arytenoids.
Vocal folds are found only in mammals, and a few lizards. As a result, many reptiles and amphibians are essentially voiceless; frogs use ridges in the trachea to modulate sound, while birds have a separate sound-producing organ, the syrinx.
History
The ancient Greek physician Galen was the first to describe the larynx, calling it the "first and supremely most important instrument of the voice".
Summary
Larynx, also called voice box, is a hollow, tubular structure connected to the top of the windpipe (trachea); air passes through the larynx on its way to the lungs. The larynx also produces vocal sounds and prevents the passage of food and other foreign particles into the lower respiratory tracts.
The larynx is composed of an external skeleton of cartilage plates that prevents collapse of the structure. The plates are fastened together by membranes and muscle fibres. The front set of plates, called thyroid cartilage, has a central ridge and elevation commonly known as the Adam’s apple. The plates tend to be replaced by bone cells beginning from about 20 years of age onward.
The epiglottis, at the upper part of the larynx, is a flaplike projection into the throat. As food is swallowed, the whole larynx structure rises to the epiglottis so that the passageway to the respiratory tract is blocked. After the food passes into the esophagus (food tube), the larynx relaxes and resumes its natural position.
The centre portion of the larynx is reduced to slitlike openings in two sites. Both sites represent large folds in the mucous membrane lining the larynx. The first pair is known as the false vocal cords, while the second is the true vocal cords (glottis). Muscles attached directly and indirectly to the vocal cords permit the opening and closing of the folds. Speech is normally produced when air expelled from the lungs moves up the trachea and strikes the underside of the vocal cords, setting up vibrations as it passes through them; raw sound emerges from the larynx and passes to the upper cavities, which act as resonating chambers (or in some languages, such as Arabic, as shapers of sound), and then passes through the mouth for articulation by the tongue, teeth, hard and soft palates, and lips. If the larynx is removed, the esophagus can function as the source for sound, but the control of pitch and volume is lacking.
In all other forms of animal life, sounds can be produced by the glottis, but in most, the ability to form words is lacking. Reptiles can produce a hissing sound by rushing air through the glottis, which is at the back of the mouth. Frogs produce their croaking sounds by passing air back and forth over the vocal folds; a pair of vocal sacs near the mouth serve as resonating chambers. In birds the larynx is a small structure in front of the trachea; it serves only to guard the air passage.
1192) Radiator
Radiators are heat exchangers used to transfer thermal energy from one medium to another for the purpose of cooling and heating. The majority of radiators are constructed to function in cars, buildings, and electronics.
A radiator is always a source of heat to its environment, although this may be for either the purpose of heating this environment, or for cooling the fluid or coolant supplied to it, as for automotive engine cooling and HVAC dry cooling towers. Despite the name, most radiators transfer the bulk of their heat via convection instead of thermal radiation.
(Heating, ventilation, and air conditioning (HVAC) is the use of various technologies to heat, cool, purify, replace, circulate, and control the humidity of the air in an enclosed space. Its goal is to provide thermal comfort and acceptable indoor air quality. HVAC system design is a subdiscipline of mechanical engineering, based on the principles of thermodynamics, fluid mechanics, and heat transfer. "Refrigeration" is sometimes added to the field's abbreviation as HVAC&R or HVACR, or "ventilation" is dropped, as in HACR (as in the designation of HACR-rated circuit breakers).
HVAC is an important part of residential structures such as single family homes, apartment buildings, hotels, and senior living facilities; medium to large industrial and office buildings such as skyscrapers and hospitals; vehicles such as cars, trains, airplanes, ships and submarines; and in marine environments, where safe and healthy building conditions are regulated with respect to temperature and humidity, using fresh air from outdoors.
Ventilating or ventilation (the "V" in HVAC) is the process of exchanging or replacing air in any space to provide high indoor air quality which involves temperature control, oxygen replenishment, and removal of moisture, odors, smoke, heat, dust, airborne bacteria, carbon dioxide, and other gases. Ventilation removes unpleasant smells and excessive moisture, introduces outside air, keeps interior building air circulating, and prevents stagnation of the interior air. Methods for ventilating a building are divided into mechanical/forced and natural types.)
History
The Roman hypocaust is an early example of a type of radiator for building space heating. Franz San Galli, a Prussian-born Russian businessman living in St. Petersburg, is credited with inventing the heating radiator around 1855, having received a radiator patent in 1857, but American Joseph Nason developed a primitive radiator in 1841 and received a number of U.S. patents for hot water and steam heating.
Radiation and convection
Heat transfer from a radiator occurs by all the usual mechanisms: thermal radiation, convection into flowing air or liquid, and conduction into the air or liquid. A radiator may even transfer heat by phase change, for example, drying a pair of socks. In practice, the term "radiator" refers to any of a number of devices in which a liquid circulates through exposed pipes (often with fins or other means of increasing surface area). The term "convector" refers to a class of devices in which the source of heat is not directly exposed.
To increase the surface area available for heat exchange with the surroundings, a radiator will have multiple fins, in contact with the tube carrying liquid pumped through the radiator. Air (or other exterior fluid) in contact with the fins carries off heat. If air flow is obstructed by dirt or damage to the fins, that portion of the radiator is ineffective at heat transfer.
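A minimal sketch of why the fins matter, using the basic convection relation Q = h * A * dT: multiplying the exposed area multiplies the heat carried off, all else being equal. The coefficient, areas, and temperature difference below are assumed values chosen purely for illustration, not data from any particular radiator.

    # Convective heat transfer from a radiator core: Q = h * A * dT.
    # All values are hypothetical.
    h_air = 30.0     # W/(m^2*K), assumed forced-air convection coefficient
    dT    = 60.0     # K, assumed surface-to-air temperature difference

    area_bare_tubes = 0.5    # m^2, assumed area of the tubes alone
    area_with_fins  = 10.0   # m^2, assumed total area once fins are added

    q_bare   = h_air * area_bare_tubes * dT
    q_finned = h_air * area_with_fins * dT
    print(f"bare tubes: ~{q_bare / 1000:.1f} kW")
    print(f"with fins:  ~{q_finned / 1000:.1f} kW")

The same relation also shows why a patch of dirty or flattened fins removes that patch's area from A and therefore its share of the heat transfer.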
Heating
Radiators are commonly used to heat buildings on the European continent. In a radiative central heating system, hot water or sometimes steam is generated in a central boiler and circulated by pumps through radiators within the building, where this heat is transferred to the surroundings.
HVAC
Radiators are used in dry cooling towers and closed-loop cooling towers for cooling buildings using liquid-cooled chillers for HVAC while keeping the chiller coolant isolated from the surroundings.
Engine cooling
Radiators are used for cooling internal combustion engines, mainly in automobiles but also in piston-engined aircraft, railway locomotives, motorcycles, stationary generating plants and other places where heat engines are used.
To cool down the heat engine, a coolant is passed through the engine block, where it absorbs heat from the engine. The hot coolant is then fed into the inlet tank of the radiator (located either on the top of the radiator, or along one side), from which it is distributed across the radiator core through tubes to another tank on the opposite end of the radiator. As the coolant passes through the radiator tubes on its way to the opposite tank, it transfers much of its heat to the tubes which, in turn, transfer the heat to the fins that are lodged between each row of tubes. The fins then release the heat to the ambient air. Fins are used to greatly increase the contact surface of the tubes to the air, thus increasing the exchange efficiency. The cooled liquid is fed back to the engine, and the cycle repeats. Normally, the radiator does not reduce the temperature of the coolant back to ambient air temperature, but it is still sufficiently cooled to keep the engine from overheating.
This coolant is usually water-based, with the addition of glycols to prevent freezing and other additives to limit corrosion, erosion and cavitation. However, the coolant may also be an oil. The first engines used thermosiphons to circulate the coolant; today, however, all but the smallest engines use pumps.
Up to the 1980s, radiator cores were often made of copper (for fins) and brass (for tubes, headers, and side-plates), while tanks could also be made of brass or of plastic, often a polyamide. Starting in the 1970s, use of aluminium increased, eventually taking over the vast majority of vehicular radiator applications. The main inducements for aluminium are reduced weight and cost.
Since air has a lower heat capacity and density than liquid coolants, a fairly large volume flow rate (relative to the coolant's) must be blown through the radiator core to capture the heat from the coolant. Radiators often have one or more fans that blow air through the radiator. To save fan power consumption in vehicles, radiators are often behind the grille at the front end of a vehicle. Ram air can give a portion or all of the necessary cooling air flow when the coolant temperature remains below the system's designed maximum temperature, and the fan remains disengaged.
(A ram-air intake is any intake design which uses the dynamic air pressure created by vehicle motion to increase the static air pressure inside of the intake manifold on an internal combustion engine, thus allowing a greater massflow through the engine and hence increasing engine power.)
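The need for a large air flow described above can be made concrete with a back-of-envelope energy balance: the heat given up by the coolant, Q = m * cp * dT, must equal the heat absorbed by the air stream. Every operating value below (flows, specific heats, temperature changes, densities) is an assumption chosen only for illustration.

    # Energy balance across a radiator core; all operating values are hypothetical.

    # Coolant side: heat rejected Q = m_dot * cp * dT
    coolant_flow_kg_s = 1.5      # assumed coolant mass flow
    cp_coolant        = 3800.0   # J/(kg*K), rough value for a water-glycol mix
    dT_coolant        = 7.0      # K, assumed coolant temperature drop
    q_rejected = coolant_flow_kg_s * cp_coolant * dT_coolant   # watts

    # Air side: the same heat must be carried away by the air stream
    cp_air  = 1005.0   # J/(kg*K)
    rho_air = 1.2      # kg/m^3
    dT_air  = 25.0     # K, assumed air temperature rise through the core
    air_mass_flow   = q_rejected / (cp_air * dT_air)   # kg/s
    air_volume_flow = air_mass_flow / rho_air          # m^3/s

    coolant_volume_flow = coolant_flow_kg_s / 1030.0   # m^3/s, coolant density ~1030 kg/m^3
    print(f"heat rejected:          ~{q_rejected / 1000:.0f} kW")
    print(f"air volume flow:        ~{air_volume_flow:.2f} m^3/s")
    print(f"air/coolant flow ratio: ~{air_volume_flow / coolant_volume_flow:.0f} to 1")

Because air carries so little heat per unit volume, the air-side volume flow works out to several hundred times the coolant's, which is why ram air and fans are needed.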
Electronics and computers
As electronic devices become smaller, the problem of dispersing waste heat becomes more difficult. Tiny radiators known as heat sinks are used to convey heat from the electronic components into a cooling air stream. Heat sinks do not circulate a liquid; instead, they conduct heat directly away from the source. High-performance heat sinks often use copper, which conducts heat better than aluminium. Heat is transferred to the air by conduction and convection; a relatively small proportion of heat is transferred by radiation owing to the low temperature of semiconductor devices compared to their surroundings.
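Heat-sink selection is usually reasoned about with a chain of thermal resistances between the semiconductor junction and the ambient air. The sketch below uses that standard relation with made-up part values; the power, ambient temperature, and resistances are illustrative assumptions, not figures from any real datasheet.

    # Thermal-resistance chain for a heat sink:
    # T_junction = T_ambient + P * (R_junction_case + R_case_sink + R_sink_air).
    # All values are hypothetical.
    power_w         = 50.0   # W dissipated by the device
    t_ambient_c     = 35.0   # degC, air temperature inside the case
    r_junction_case = 0.5    # K/W, junction-to-case resistance
    r_case_sink     = 0.2    # K/W, thermal interface material
    r_sink_air      = 0.8    # K/W, sink to moving air (a bigger sink or faster fan lowers this)

    t_junction = t_ambient_c + power_w * (r_junction_case + r_case_sink + r_sink_air)
    print(f"estimated junction temperature: ~{t_junction:.0f} degC")

Lowering the sink-to-air resistance, by adding fin area or air flow, is exactly the job of the heat sink and its fan.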
Spacecraft
Radiators are found as components of some spacecraft. These radiators work by radiating heat energy away as light (generally infrared given the temperatures at which spacecraft try to operate) because in the vacuum of space neither convection nor conduction can work to transfer heat away. On the International Space Station, these can be seen clearly as large white panels attached to the main truss. They can be found on both manned and unmanned craft.
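Because a spacecraft radiator can reject heat only by radiation, its size can be estimated from the Stefan-Boltzmann law, Q = emissivity * sigma * A * T^4. The emissivity, panel temperature, and heat load below are illustrative assumptions, not specifications of any actual spacecraft, and heat absorbed from the environment is ignored.

    # Radiative-only heat rejection: Q = emissivity * sigma * A * T^4.
    # Illustrative values; absorbed sunlight and Earth-shine are ignored.
    sigma       = 5.67e-8   # W/(m^2*K^4), Stefan-Boltzmann constant
    emissivity  = 0.85      # assumed for a white radiator coating
    t_panel_k   = 280.0     # K, assumed panel surface temperature
    q_to_reject = 10_000.0  # W, assumed waste-heat load

    area = q_to_reject / (emissivity * sigma * t_panel_k**4)
    print(f"required radiating area: ~{area:.0f} m^2")

The strong T^4 dependence is why spacecraft radiators are run as warm as the payload allows: a small rise in panel temperature sharply reduces the required area.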
Additional Information:
How Car Cooling Systems Work
A radiator is a type of heat exchanger. It is designed to transfer heat from the hot coolant that flows through it to the air blown through it by the fan.
Most modern cars use aluminum radiators. These radiators are made by brazing thin aluminum fins to flattened aluminum tubes. The coolant flows from the inlet to the outlet through many tubes mounted in a parallel arrangement. The fins conduct the heat from the tubes and transfer it to the air flowing through the radiator.
The tubes sometimes have a type of fin inserted into them called a turbulator, which increases the turbulence of the fluid flowing through the tubes. If the fluid flowed very smoothly through the tubes, only the fluid actually touching the tubes would be cooled directly. The amount of heat transferred to the tubes from the fluid running through them depends on the difference in temperature between the tube and the fluid touching it. So if the fluid that is in contact with the tube cools down quickly, less heat will be transferred. By creating turbulence inside the tube, all of the fluid mixes together, keeping the temperature of the fluid touching the tubes up so that more heat can be extracted, and all of the fluid inside the tube is used effectively.
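The turbulator argument above comes down to Newton's law of cooling per unit tube area, q = h * (T_fluid_near_wall - T_wall): turbulent mixing both raises the effective coefficient h and keeps the fluid next to the wall near the hot bulk temperature. The coefficients and temperatures below are assumed purely to show the direction of the effect.

    # Effect of a turbulator, sketched with Newton's law of cooling:
    # q = h * (T_fluid_near_wall - T_wall). All values are hypothetical.
    t_wall_c = 70.0   # degC, assumed tube wall temperature

    # Smooth flow: lower h, and the fluid layer against the wall has already cooled
    h_smooth           = 300.0   # W/(m^2*K)
    t_near_wall_smooth = 80.0    # degC

    # With a turbulator: higher h, and mixing keeps the near-wall fluid near the hot bulk
    h_turbulent           = 1500.0  # W/(m^2*K)
    t_near_wall_turbulent = 90.0    # degC

    q_smooth    = h_smooth * (t_near_wall_smooth - t_wall_c)
    q_turbulent = h_turbulent * (t_near_wall_turbulent - t_wall_c)
    print(f"smooth tube:     ~{q_smooth / 1000:.1f} kW per m^2")
    print(f"with turbulator: ~{q_turbulent / 1000:.1f} kW per m^2")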
Radiators usually have a tank on each side, and inside one of the tanks is a transmission cooler, with an inlet and outlet through which oil from the transmission enters and leaves. The transmission cooler is like a radiator within a radiator, except that instead of exchanging heat with the air, the oil exchanges heat with the coolant in the radiator.
What are radiators made from?
Radiators are made from conductive metals, which means that heat travels through them quickly. The most common radiator materials are steel, stainless steel, aluminium and cast iron.
Read on to discover the pros and cons of each of these different radiator materials.
Mild Steel Radiators
In general, mild steel tends to be the most popular material used to make radiators and heated towel rails. This is because it is cheap and can be moulded into many different weird and wonderful shapes; our Terma ribbon vertical radiator is one example.
Mild steel also has the benefit of being available in various colours, so unique designer radiators can be crafted easily from mild steel. No other metal can compare to mild steel when it comes to the different designs and colours on offer. Because it is cheaper, a mild steel radiator could also be the right option if you are on a budget.
Something to look out for with mild steel radiators is that they can be prone to rusting, so you may find that they come with shorter guarantees. But this can be prevented! Make sure your boiler is regularly serviced and add radiator inhibitor to the system. This will ensure that your mild steel radiators last.
Stainless Steel Radiators
Stainless steel radiators are built to last. They are excellent heat conductors, and they keep warm for a long time even after they have been turned off. They also have the benefit of being able to be crafted into unique shapes for a stunning heating statement piece.
As stainless steel is a more premium metal, the prices will be higher than mild steel radiators, but they are a worthwhile and long-term investment.
A fantastic property of stainless steel is that it does not rust and is resistant to corrosion, so you can enjoy less maintenance. Because of this, you will also find that stainless steel radiators come with a very long manufacturer's warranty for peace of mind. All of our stainless-steel radiators come with a 20 to 25-year warranty!
Having said this, anti-rust properties do not mean the radiator will be 100% rust-proof, so it is still a good idea to keep your system topped up with radiator inhibitor, to be on the safe side. Also, if you tend to hang wet towels over the radiator, regularly wipe it dry to avoid any rust forming on the outside of the radiator. After all, rusting will not be covered by the warranty.
Aluminium Radiators
The newest of all radiator materials, aluminium is quickly becoming the most popular choice, and it's not hard to see why.
Aluminium is an exceptionally good heat conductor, the best of the common radiator materials, so it transfers heat into a room very quickly. The result is a much higher BTU output: if you lined up the exact same sized radiator in mild steel, stainless steel, aluminium and cast iron, the aluminium radiator would produce 2-3 times the BTU output compared with the rest. This means you can heat up a bigger space with just the one aluminium radiator, rather than having to get multiple.
Aluminium is also resistant to corrosion and rusting, so you can feel assured that they will last.
Aluminium radiators have great environmental benefits. You can enjoy their fast response times, as they heat up and cool down rapidly and so are fantastic at regulating the temperature within a room. They also have low water content, so they require less hot water from the boiler and do not need to be on for as long to reach their maximum temperature. This reduces the amount of heat lost, making them a much more energy efficient option.
Plus, at the end of their life they are easily recycled. The process of recycling aluminium takes only 6 weeks.
Because aluminium is lightweight, installation is much quicker and easier, meaning cheaper installation costs. This also makes them ideal for hanging onto walls that cannot take heavy weight. While this carries many benefits, it does mean you have to be much more careful when handling aluminium radiators, to avoid any dents or damages.
Cast Iron Radiators
The original radiator material; cast iron radiators carry a nostalgic feel that for many homeowners, you cannot beat.
And while technology progresses and new options appear on the market, cast iron radiators remain a popular, if not the only choice for many. This is because if you have your heart set on a period-style radiator, any other material will only leave you disappointed.
The way that cast iron radiators heat and cool differs from all other radiator materials. Cast iron radiators take a while to heat up, so if you’re feeling cold don’t expect this radiator to warm you up quickly. You need to be prepared and get them switched on before you start getting cold.
But don’t be alarmed thinking that your money will be wasted while they heat up: once they reach the desired temperature they can be switched off, and they will hold their heat for hours.
Cast iron radiators are very well built, they are the strongest radiator material and will last you a lifetime. But because of this, they are heavy. Very heavy. So, if you do go for a cast iron radiator, be prepared to pay more for the installation.
1193) Fan
Summary
A fan is a device for producing a current of air or other gases or vapours. Fans are used for circulating air in rooms and buildings; for cooling motors and transmissions; for cooling and drying people, materials, or products; for exhausting dust and noxious fumes; for conveying light materials; for forced draft in steam boilers; and in heating, ventilating, and air-conditioning systems.
A fan consists of a series of radial blades attached to a central rotating hub. The rotating assembly of blades and hub is known as an impeller, a rotor, or a runner; and it may or may not be enclosed in a housing. Fans may be driven by an electric motor, an internal-combustion engine, a steam turbine, a gas turbine, or other motive power.
Enclosed fans may be classified as centrifugal or axial-flow. In centrifugal fans air is led through an inlet pipe to the centre, or eye, of the impeller, which forces it radially outward into the volute, or spiral, casing from which it flows to a discharge pipe.
In an axial-flow fan, with the runner and guide vanes in a cylindrical housing, air passes through the runner essentially without changing its distance from the axis of rotation. There is no centrifugal effect. Guide, or stator, vanes serve to smooth the airflow and improve efficiency.
In general, an axial-flow fan is suitable for a relatively large rate of flow with a relatively small pressure gain, and a centrifugal fan for a small rate of flow and a large pressure gain. Actually, the pressure developed in a fan is small compared with the pressure developed in a compressor. The capacities of fans range from 100 to 500,000 cubic feet per minute (3 to 14,000 cubic metres per minute).
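As a quick check of the quoted capacity range, the conversion from cubic feet to cubic metres (1 m^3 = 35.3147 ft^3) reproduces the figures above:

    # Convert the quoted fan capacities from cubic feet per minute to m^3 per minute.
    CUBIC_FEET_PER_CUBIC_METRE = 35.3147

    for cfm in (100, 500_000):
        m3_per_min = cfm / CUBIC_FEET_PER_CUBIC_METRE
        print(f"{cfm:>7} cfm is about {m3_per_min:,.0f} m^3/min")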
Details
A fan is a powered machine used to create a flow of air. A fan consists of a rotating arrangement of vanes or blades, generally made of wood, plastic, or metal, which act on the air. The rotating assembly of blades and hub is known as an impeller, rotor, or runner. Usually, it is contained within some form of housing, or case. This may direct the airflow, or increase safety by preventing objects from contacting the fan blades. Most fans are powered by electric motors, but other sources of power may be used, including hydraulic motors, handcranks, and internal combustion engines.
Mechanically, a fan can be any revolving vane, or vanes used for producing currents of air. Fans produce air flows with high volume and low pressure (although higher than ambient pressure), as opposed to compressors which produce high pressures at a comparatively low volume. A fan blade will often rotate when exposed to an air-fluid stream, and devices that take advantage of this, such as anemometers and wind turbines, often have designs similar to that of a fan.
Typical applications include climate control and personal thermal comfort (e.g., an electric table or floor fan), vehicle engine cooling systems (e.g., in front of a radiator), machinery cooling systems (e.g., inside computers and audio power amplifiers), ventilation, fume extraction, winnowing (e.g., separating chaff of cereal grains), removing dust (e.g. sucking as in a vacuum cleaner), drying (usually in combination with a heat source) and providing draft for a fire. Fans also have several applications in the industries. Some fans directly cool the machine and process, and may be indirectly used for cooling in the case of industrial heat exchangers.
While fans are often used to cool people, they do not cool air (electric fans may warm it slightly due to the warming of their motors), but work by evaporative cooling of sweat and increased heat convection into the surrounding air, due to the airflow from the fans. Thus, fans may become ineffective at cooling the body if the surrounding air is near body temperature and contains high humidity.
History
The punkah fan was used in India about 500 BCE. It was a handheld fan made from bamboo strips or other plant fiber, that could be rotated or fanned to move air. During British rule, the word came to be used by Anglo-Indians to mean a large swinging flat fan, fixed to the ceiling and pulled by a servant called the punkawallah.
For purposes of air conditioning, the Han Dynasty craftsman and engineer Ding Huan (fl. 180 CE) invented a manually operated rotary fan with seven wheels that measured 3 m (10 ft) in diameter; in the 8th century, during the Tang Dynasty (618–907), the Chinese applied hydraulic power to rotate the fan wheels for air conditioning, while the rotary fan became even more common during the Song Dynasty (960–1279).
In the 17th century, the experiments of scientists including Otto von Guericke, Robert Hooke and Robert Boyle, established the basic principles of vacuum and airflow. The English architect Sir Christopher Wren applied an early ventilation system in the Houses of Parliament that used bellows to circulate air. Wren's design would be the catalyst for much later improvement and innovation. The first rotary fan used in Europe was for mine ventilation during the 16th century, as illustrated by Georg Agricola (1494–1555).
John Theophilus Desaguliers, a British engineer, demonstrated the successful use of a fan system to draw out stagnant air from coal mines in 1727 and soon afterward he installed a similar apparatus in Parliament. Good ventilation was particularly important in coal mines to reduce casualties from asphyxiation. The civil engineer John Smeaton, and later John Buddle installed reciprocating air pumps in the mines in the North of England. However, this arrangement was not as ideal as the machinery was liable to breaking down.
Steam
In 1849 a 6m radius steam-driven fan, designed by William Brunton, was made operational in the Gelly Gaer Colliery of South Wales. The model was exhibited at the Great Exhibition of 1851. Also in 1851 David Boswell Reid, a Scottish doctor, installed four steam-powered fans in the ceiling of St George's Hospital in Liverpool, so that the pressure produced by the fans would force the incoming air upward and through vents in the ceiling. Improvements in the technology were made by James Nasmyth, Frenchman Theophile Guibal and J. R. Waddle.
Electrical
Between 1882 and 1886 Schuyler Wheeler invented a fan powered by electricity. It was commercially marketed by the American firm Crocker & Curtis electric motor company. In 1885 a desktop electric fan was commercially available by Stout, Meadowcraft & Co. in New York.
In 1882, Philip Diehl developed the world's first electric ceiling fan. During this intense period of innovation, fans powered by alcohol, oil, or kerosene were common around the turn of the 20th century. In 1909, KDK of Japan pioneered the invention of mass-produced electric fans for home use. In the 1920s, industrial advances allowed steel fans to be mass-produced in different shapes, bringing fan prices down and allowing more homeowners to afford them. In the 1930s, the first art deco fan (the "Silver Swan") was designed by Emerson. By the 1940s, Crompton Greaves of India became the world's largest manufacturer of electric ceiling fans, mainly for sale in India, Asia, and the Middle East. By the 1950s, table and stand fans were manufactured in bright, eye-catching colors.
Window and central air conditioning in the 1960s caused many companies to discontinue production of fans, but in the mid-1970s, with an increasing awareness of the cost of electricity and the amount of energy used to heat and cool homes, turn-of-the-century styled ceiling fans became immensely popular again as both decorative and energy-efficient units.
In 1998 William Fairbank and Walter K. Boyd invented the high-volume low-speed (HVLS) ceiling fan, designed to reduce energy consumption by using long fan blades rotating at low speed to move a relatively large volume of air.
Types
Mechanical revolving blade fans are made in a wide range of designs. They are used on the floor, table, desk, or hung from the ceiling (ceiling fan). They can also be built into a window, wall, roof, chimney, etc. Most electronic systems such as computers include fans to cool the circuits inside, and in appliances such as hair dryers and portable space heaters and mounted/installed wall heaters. They are also used for moving air in air-conditioning systems, and in automotive engines, where they are driven by belts or by a direct motor. Fans used for comfort create a wind chill by increasing the heat transfer coefficient but do not lower temperatures directly. Fans used to cool electrical equipment or in engines or other machines do cool the equipment directly by forcing hot air into the cooler environment outside of the machine.
There are three main types of fans used for moving air, axial, centrifugal (also called radial) and cross flow (also called tangential). The American Society of Mechanical Engineers Performance Testing Code 11 (PTC) provides standard procedures for conducting and reporting tests on fans, including those of the centrifugal, axial, and mixed flows.
Axial-flow
Axial-flow fans have blades that force air to move parallel to the shaft about which the blades rotate. This type of fan is used in a wide variety of applications, ranging from small cooling fans for electronics to the giant fans used in cooling towers. Axial flow fans are applied in air conditioning and industrial process applications. Standard axial flow fans have diameters of 300–400 mm or 1,800–2,000 mm and work under pressures up to 800 Pa. Special types of fans are used as low-pressure compressor stages in aircraft engines. Examples of axial fans are:
* Table fan: Basic elements of a typical table fan include the fan blade, base, armature and lead wires, motor, blade guard, motor housing, oscillator gearbox, and oscillator shaft. The oscillator is a mechanism that moves the fan from side to side. The armature axle shaft comes out on both ends of the motor; one end of the shaft is attached to the blade and the other is attached to the oscillator gearbox. The motor case joins to the gearbox to contain the rotor and stator. The oscillator shaft connects the weighted base to the gearbox. A motor housing covers the oscillator mechanism. The blade guard joins to the motor case for safety.
* Domestic Extractor Fan: Wall or ceiling mounted, the domestic extractor fan is employed to remove moisture and stale air from domestic dwellings. Bathroom extractor fans typically utilize a four-inch (100 mm) impeller, whilst kitchen extractor fans typically use a six-inch (150 mm) impeller as the room itself is often bigger. Axial fans with five-inch (125 mm) impellers are also used in larger bathrooms though are much less common. Domestic axial extractor fans are not suitable for duct runs over 3 m or 4 m, depending on the number of bends in the run, as the increased air pressure in longer pipework inhibits the performance of the fan.
* Electro-mechanical fans: Among collectors, these are rated according to their condition, size, age, and number of blades. Four-blade designs are the most common. Five-blade or six-blade designs are rare. The materials from which the components are made, such as brass, are important factors in fan desirability.
* Ceiling fan: A fan suspended from the ceiling of a room is a ceiling fan. Most ceiling fans rotate at relatively low speeds and do not have blade guards. Ceiling fans can be found in both residential and industrial/commercial settings.
* In automobiles, a mechanical fan provides engine cooling and prevents the engine from overheating by blowing or drawing air through a coolant-filled radiator. The fan may be driven with a belt and pulley off the engine's crankshaft or an electric motor switched on or off by a thermostatic switch.
* Computer fan for cooling electrical components and in laptop coolers
* Fans inside audio power amplifiers help to draw heat away from the electrical components.
* Variable pitch fan: A variable-pitch fan is used where precise control of static pressure within supply ducts is required. The blades are arranged to rotate upon a control-pitch hub. The fan wheel will spin at a constant speed. The blades follow the control pitch hub. As the hub moves toward the rotor, the blades increase their angle of attack and an increase in flow results.
Centrifugal
Often called a "squirrel cage" (because of its general similarity in appearance to exercise wheels for pet rodents) or "scroll fan", the centrifugal fan has a moving component (called an impeller) that consists of a central shaft about which a set of blades that form a spiral, or ribs, are positioned. Centrifugal fans blow air at right angles to the intake of the fan and spin the air outwards to the outlet (by deflection and centrifugal force). The impeller rotates, causing air to enter the fan near the shaft and move perpendicularly from the shaft to the opening in the scroll-shaped fan casing. A centrifugal fan produces more pressure for a given air volume, and is used where this is desirable such as in leaf blowers, blowdryers, air mattress inflators, inflatable structures, climate control in air handling units and various industrial purposes. They are typically noisier than comparable axial fans (although some types of centrifugal fans are quieter such as in air handling units).
Cross-flow fan
The cross-flow or tangential fan, sometimes known as a tubular fan, was patented in 1893 by Paul Mortier, and is used extensively in heating, ventilation, and air conditioning (HVAC), especially in ductless split air conditioners. The fan is usually long relative to its diameter, so the flow remains approximately two-dimensional away from the ends. The cross-flow fan uses an impeller with forward-curved blades, placed in a housing consisting of a rear wall and a vortex wall. Unlike radial machines, the main flow moves transversely across the impeller, passing the blading twice.
The flow within a cross-flow fan may be broken up into three distinct regions: a vortex region near the fan discharge, called an eccentric vortex, the through-flow region, and a paddling region directly opposite. Both the vortex and paddling regions are dissipative, and as a result, only a portion of the impeller imparts usable work on the flow. The cross-flow fan, or transverse fan, is thus a two-stage partial admission machine. The popularity of the crossflow fan in HVAC comes from its compactness, shape, quiet operation, and ability to provide a high pressure coefficient. Effectively a rectangular fan in terms of inlet and outlet geometry, the diameter readily scales to fit the available space, and the length is adjustable to meet flow rate requirements for the particular application.
Common household tower fans are also cross-flow fans. Much of the early work focused on developing the cross-flow fan for both high- and low-flow-rate conditions and resulted in numerous patents. Key contributions were made by Coester, Ilberg and Sadeh, Porter and Markland, and Eck. One interesting phenomenon particular to the cross-flow fan is that, as the blades rotate, the local air incidence angle changes. The result is that in certain positions the blades act as compressors (pressure increase), while at other azimuthal locations the blades act as turbines (pressure decrease).
Since the flow both enters and exits the impeller radially, the crossflow fan is well suited for aircraft applications. Due to the two-dimensional nature of the flow, the fan readily integrates into a wing for use in both thrust production and boundary-layer control. A configuration in which a crossflow fan is located at the wing leading edge is the fanwing. This design creates lift by deflecting the wake downward due to the rotational direction of the fan, causing a large Magnus force, similar to a spinning leading-edge cylinder. Another configuration utilizing a crossflow fan for thrust and flow control is the propulsive wing. In this design, the crossflow fan is placed near the trailing edge of a thick wing and draws air off the wing's suction (top) surface. By doing this, the propulsive wing is nearly stall-free, even at extremely high angles of attack, producing very high lift.
A cross-flow fan is a centrifugal fan in which the air flows straight through the fan instead of at a right angle. The rotor of a cross-flow fan is covered to create a pressure differential. Cross-flow fans are built with a double circular arc rear wall and a thick vortex wall, with a radial gap that decreases in the direction of the fan's impeller rotation. The rear wall has a log-spiral profile, while the vortex stabilizer is a horizontal thin wall with a rounded edge. The resultant pressure difference allows air to flow straight through the fan, even though the fan blades counter the flow of air on one side of the rotation. Cross-flow fans give airflow along the entire width of the fan; however, they are noisier than ordinary centrifugal fans, presumably because the fan blades fight the flow of air on one side of the rotation, unlike typical centrifugal fans. Cross-flow fans are often used in ductless air conditioners, air doors, in some types of laptop coolers, in automobile ventilation systems, and for cooling in medium-sized equipment such as photocopiers.
1194) Heat pump
A heat pump is a device used to warm the interior of a building or heat domestic hot water by transferring thermal energy from a cooler space to a warmer space using the refrigeration cycle, that is, in the opposite direction to that in which heat would flow without the application of external power. Common device types include air source heat pumps, ground source heat pumps, water source heat pumps and exhaust air heat pumps. Heat pumps are also often used in district heating systems.
The efficiency of a heat pump is expressed as a coefficient of performance (COP), or seasonal coefficient of performance (SCOP). The higher the number, the more efficient a heat pump is and the less energy it consumes. When used for space heating these devices are typically much more energy efficient than simple electrical resistance heaters. Heat pumps have a smaller carbon footprint than heating systems burning fossil fuels such as natural gas, but those powered by hydrogen are also low-carbon and may become competitors.
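As a concrete illustration of COP, the ratio is simply the heat delivered divided by the electricity consumed; the energy figures below are made-up but realistic values, not numbers from the text.

    # Coefficient of performance: COP_heating = heat delivered / electricity consumed.
    # The energy figures are hypothetical.
    heat_delivered_kwh   = 12.0   # kWh of heat supplied to the building
    electricity_used_kwh = 3.5    # kWh of electricity drawn by the heat pump

    cop = heat_delivered_kwh / electricity_used_kwh
    print(f"COP is about {cop:.1f} (a simple resistance heater gives COP = 1)")

A seasonal COP (SCOP) is the same ratio taken over a whole heating season rather than at a single operating point.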
A heat pump is a device for transferring heat from a substance or space at one temperature to another substance or space at a higher temperature. It consists of a compressor, a condenser, a throttle or expansion valve, an evaporator, and a working fluid (refrigerant), such as carbon dioxide, ammonia, or a halocarbon. The compressor delivers the vaporized refrigerant under high pressure and temperature to the condenser, located in the space to be heated. There, the cooler air condenses the refrigerant and becomes heated in the process. The liquid refrigerant then enters the throttle valve and, expanding, comes out as a liquid–vapour mixture at a lower temperature and pressure; it then enters the evaporator, where the liquid is evaporated by contact with a comparatively warmer space. The vapour then passes to the compressor, and the cycle is repeated.
A heat pump is basically a heat engine run in the reverse direction. In other words, a heat pump is a device that is used to transfer heat energy to a thermal reservoir. They are often used to transfer thermal energy by absorbing heat from a cold space and releasing it to a warmer one.
Heat pumps transfer heat from a cold body to a hot body at the expense of mechanical energy supplied to it by an external agent. The cold body is cooled more and more. A heat pump generally comprises four key components which include a condenser, a compressor, an expansion valve and an evaporator. The working substance used in these components is called refrigerant.
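Because the cycle moves heat against the temperature gradient, thermodynamics caps the achievable heating COP at the reversed-Carnot limit, T_hot / (T_hot - T_cold) with temperatures in kelvin. The sketch below evaluates that bound for illustrative temperatures; real machines reach only a fraction of it.

    # Reversed-Carnot upper bound on heating COP; temperatures are illustrative.
    def carnot_heating_cop(t_hot_c, t_cold_c):
        t_hot_k  = t_hot_c + 273.15
        t_cold_k = t_cold_c + 273.15
        return t_hot_k / (t_hot_k - t_cold_k)

    # e.g. delivering heat at 35 degC (underfloor heating) from 0 degC outdoor air
    print(f"Carnot limit: ~{carnot_heating_cop(35.0, 0.0):.1f}")

The smaller the temperature lift, the higher this ceiling, which is why low-temperature heat emitters suit heat pumps well.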
Types of Heat Pump
Some of the most familiar examples of heat pumps include air conditioners, freezers, and other heating and ventilating devices.
Geothermal (ground-source) Heat Pump
A geothermal heat pump, also called a ground-source heat pump, circulates a heat-exchange fluid (water with a little antifreeze) through piping in the soil or through groundwater. Geothermal heat pumps are quite expensive to install. They can also be used to cool buildings by transferring heat from the warm interior into the soil via the ground loop or piping placed within the ground.
Water Source Heat Pump
A water-source heat pump works in much the same way as a ground-source heat pump, except that the heat is drawn from a body of water instead of the ground. The main requirement is that the body of water be large enough to withstand the cooling effect of the unit without freezing or suffering other adverse effects.
Air Source Heat Pump
Air source heat pumps move heat between two heat exchangers. Usually one of these heat exchangers is placed outside the building and fitted with fins, over which air is forced by a fan. The other heats the air or water inside the building directly; this heated air or water is then circulated by heat emitters that release the heat around the building.
Exhaust Air Heat Pump
Exhaust air heat pumps are used for extracting heat from the exhaust air of a building. However, they require mechanical ventilation. Two classes of exhaust air heat pumps exist.
* Exhaust air-air heat pumps – These are used to transfer heat to intake air.
* Exhaust air-water heat pumps – They are used to transfer heat to a heating circuit which consists of a tank of domestic hot water.
Solar-assisted Heat Pump
This type integrates two systems, a heat pump and thermal solar panels, into a single unit. The solar thermal panel acts as the low-temperature heat source, and the heat it produces is fed to the heat pump's evaporator.
Absorption Heat Pumps
Absorption heat pumps are a relatively new type intended mainly for residential systems. Also called gas-fired heat pumps, they use heat rather than electricity as their primary source of energy and can be driven by a wide variety of heat sources.
1195) Biology
Summary
Biology is the scientific study of life. It is a natural science with a broad scope but has several unifying themes that tie it together as a single, coherent field. For instance, all organisms are made up of cells that process hereditary information encoded in genes, which can be transmitted to future generations. Another major theme is evolution, which explains the unity and diversity of life.
Energy processing is also important to life as it allows organisms to move, grow, and reproduce. Finally, all organisms are able to regulate their own internal environments.
Biologists are able to study life at multiple levels of organization, from the molecular biology of a cell to the anatomy and physiology of plants and animals, and the evolution of populations. Hence, there are multiple subdisciplines within biology, each defined by the nature of their research questions and the tools that they use. Like other scientists, biologists use the scientific method to make observations, pose questions, generate hypotheses, perform experiments, and form conclusions about the world around them.
Life on Earth, which emerged more than 3.7 billion years ago, is immensely diverse. Biologists have sought to study and classify the various forms of life, from prokaryotic organisms such as archaea and bacteria to eukaryotic organisms such as protists, fungi, plants, and animals. These various organisms contribute to the biodiversity of an ecosystem, where they play specialized roles in the cycling of nutrients and energy through their biophysical environment.
Details
Biology is the study of living things and their vital processes. The field deals with all the physicochemical aspects of life. The modern tendency toward cross-disciplinary research and the unification of scientific knowledge and investigation from different fields has resulted in significant overlap of the field of biology with other scientific disciplines. Modern principles of other fields—chemistry, medicine, and physics, for example—are integrated with those of biology in areas such as biochemistry, biomedicine, and biophysics.
Biology is subdivided into separate branches for convenience of study, though all the subdivisions are interrelated by basic principles. Thus, while it is customary to separate the study of plants (botany) from that of animals (zoology), and the study of the structure of organisms (morphology) from that of function (physiology), all living things share in common certain biological phenomena—for example, various means of reproduction, cell division, and the transmission of genetic material.
Biology is often approached on the basis of levels that deal with fundamental units of life. At the level of molecular biology, for example, life is regarded as a manifestation of chemical and energy transformations that occur among the many chemical constituents that compose an organism. As a result of the development of increasingly powerful and precise laboratory instruments and techniques, it is possible to understand and define with high precision and accuracy not only the ultimate physiochemical organization (ultrastructure) of the molecules in living matter but also the way living matter reproduces at the molecular level. Especially crucial to those advances was the rise of genomics in the late 20th and early 21st centuries.
Cell biology is the study of cells—the fundamental units of structure and function in living organisms. Cells were first observed in the 17th century, when the compound microscope was invented. Before that time, the individual organism was studied as a whole in a field known as organismic biology; that area of research remains an important component of the biological sciences. Population biology deals with groups or populations of organisms that inhabit a given area or region. Included at that level are studies of the roles that specific kinds of plants and animals play in the complex and self-perpetuating interrelationships that exist between the living and the nonliving world, as well as studies of the built-in controls that maintain those relationships naturally. Those broadly based levels—molecules, cells, whole organisms, and populations—may be further subdivided for study, giving rise to specializations such as morphology, taxonomy, biophysics, biochemistry, genetics, epigenetics, and ecology. A field of biology may be especially concerned with the investigation of one kind of living thing—for example, the study of birds in ornithology, the study of fishes in ichthyology, or the study of microorganisms in microbiology.
Basic concepts of biology
Biological principles:
Homeostasis:
The concept of homeostasis—that living things maintain a constant internal environment—was first suggested in the 19th century by French physiologist Claude Bernard, who stated that “all the vital mechanisms, varied as they are, have only one object: that of preserving constant the conditions of life.”
As originally conceived by Bernard, homeostasis applied to the struggle of a single organism to survive. The concept was later extended to include any biological system from the cell to the entire biosphere, all the areas of Earth inhabited by living things.
Unity
All living organisms, regardless of their uniqueness, have certain biological, chemical, and physical characteristics in common. All, for example, are composed of basic units known as cells and of the same chemical substances, which, when analyzed, exhibit noteworthy similarities, even in such disparate organisms as bacteria and humans. Furthermore, since the action of any organism is determined by the manner in which its cells interact and since all cells interact in much the same way, the basic functioning of all organisms is also similar.
Animal cells and plant cells contain membrane-bound organelles, including a distinct nucleus. In contrast, bacterial cells do not contain membrane-bound organelles.
There is not only unity of basic living substance and functioning but also unity of origin of all living things. According to a theory proposed in 1855 by German pathologist Rudolf Virchow, “all living cells arise from pre-existing living cells.” That theory appears to be true for all living things at the present time under existing environmental conditions. If, however, life originated on Earth more than once in the past, the fact that all organisms have a sameness of basic structure, composition, and function would seem to indicate that only one original type succeeded.
A common origin of life would explain why in humans or bacteria—and in all forms of life in between—the same chemical substance, deoxyribonucleic acid (DNA), in the form of genes accounts for the ability of all living matter to replicate itself exactly and to transmit genetic information from parent to offspring. Furthermore, the mechanisms for that transmittal follow a pattern that is the same in all organisms.
Whenever a change in a gene (a mutation) occurs, there is a change of some kind in the organism that contains the gene. It is this universal phenomenon that gives rise to the differences (variations) in populations of organisms from which nature selects for survival those that are best able to cope with changing conditions in the environment.
Evolution
In his theory of natural selection, which is discussed in greater detail later, Charles Darwin suggested that “survival of the fittest” was the basis for organic evolution (the change of living things with time). Evolution itself is a biological phenomenon common to all living things, even though it has led to their differences. Evidence to support the theory of evolution has come primarily from the fossil record, from comparative studies of structure and function, from studies of embryological development, and from studies of DNA and RNA (ribonucleic acid).
Three types of natural selection can be distinguished by their effects on the distribution of phenotypes within a population. Stabilizing selection acts against phenotypes at both extremes of the distribution, favouring the multiplication of intermediate phenotypes. Directional selection acts against only one extreme of phenotypes, causing a shift in distribution toward the other extreme. Diversifying selection acts against intermediate phenotypes, creating a split in distribution toward each extreme.
Diversity
Despite the basic biological, chemical, and physical similarities found in all living things, a diversity of life exists not only among and between species but also within every natural population. The phenomenon of diversity has had a long history of study because so many of the variations that exist in nature are visible to the eye. The fact that organisms changed during prehistoric times and that new variations are constantly evolving can be verified by paleontological records as well as by breeding experiments in the laboratory. Long after Darwin assumed that variations existed, biologists discovered that they are caused by a change in the genetic material (DNA). That change can be a slight alteration in the sequence of the constituents of DNA (nucleotides), a larger change such as a structural alteration of a chromosome, or a complete change in the number of chromosomes. In any case, a change in the genetic material in the reproductive cells manifests itself as some kind of structural or chemical change in the offspring. The consequence of such a mutation depends upon the interaction of the mutant offspring with its environment.
It has been suggested that sexual reproduction became the dominant type of reproduction among organisms because of its inherent advantage of variability, which is the mechanism that enables a species to adjust to changing conditions. New variations are potentially present in genetic differences, but how preponderant a variation becomes in a gene pool depends upon the number of offspring the mutants or variants produce (differential reproduction). It is possible for a genetic novelty (new variation) to spread in time to all members of a population, especially if the novelty enhances the population’s chances for survival in the environment in which it exists. Thus, when a species is introduced into a new habitat, it either adapts to the change by natural selection or by some other evolutionary mechanism or eventually dies off. Because each new habitat means new adaptations, habitat changes have been responsible for the millions of different kinds of species and for the heterogeneity within each species.
The total number of extant animal and plant species is estimated at between roughly 5 million and 10 million; about 1.5 million of those species have been described by scientists. The use of classification as a means of producing some kind of order out of the staggering number of different types of organisms appeared as early as the book of Genesis—with references to cattle, beasts, fowl, creeping things, trees, and so on. The first scientific attempt at classification, however, is attributed to the Greek philosopher Aristotle, who tried to establish a system that would indicate the relationship of all things to each other. He arranged everything along a scale, or “ladder of nature,” with nonliving things at the bottom; plants were placed below animals, and humankind was at the top. Other schemes that have been used for grouping species include large anatomical similarities, such as wings or fins, which indicate a natural relationship, and also similarities in reproductive structures.
Taxonomy has been based on two major assumptions: one is that similar body construction can be used as a criterion for a classification grouping; the other is that, in addition to structural similarities, evolutionary and molecular relationships between organisms can be used as a means for determining classification.
Behaviour and interrelationships
The study of the relationships of living things to each other and to their environment is known as ecology. Because these interrelationships are so important to the welfare of Earth and because they can be seriously disrupted by human activities, ecology has become an important branch of biology.
Continuity
Whether an organism is a human or a bacterium, its ability to reproduce is one of the most important characteristics of life. Because life comes only from preexisting life, it is only through reproduction that successive generations can carry on the properties of a species.
The study of structure
Living things are defined in terms of the activities or functions that are missing in nonliving things. The life processes of every organism are carried out by specific materials assembled in definite structures. Thus, a living thing can be defined as a system, or structure, that reproduces, changes with its environment over a period of time, and maintains its individuality by constant and continuous metabolism.
Cells and their constituents
Biologists once depended on the light microscope to study the morphology of cells found in higher plants and animals. The functioning of cells in unicellular and in multicellular organisms was then postulated from observation of the structure; the discovery of the chloroplastids in the cell, for example, led to the investigation of the process of photosynthesis. With the invention of the electron microscope, the fine organization of the plastids could be used for further quantitative studies of the different parts of that process.
Qualitative and quantitative analyses in biology make use of a variety of techniques and approaches to identify and estimate levels of nucleic acids, proteins, carbohydrates, and other chemical constituents of cells and tissues. Many such techniques make use of antibodies or probes that bind to specific molecules within cells and that are tagged with a chemical, commonly a fluorescent dye, a radioactive isotope, or a biological stain, thereby enabling or enhancing microscopic visualization or detection of the molecules of interest.
Chemical labels are powerful means by which biologists can identify, locate, or trace substances in living matter. Some examples of widely used assays that incorporate labels include the Gram stain, which is used for the identification and characterization of bacteria; fluorescence in situ hybridization, which is used for the detection of specific genetic sequences in chromosomes; and luciferase assays, which measure bioluminescence produced from luciferin-luciferase reactions, allowing for the quantification of a wide array of molecules.
Tissues and organs
Early biologists viewed their work as a study of the organism. The organism, then considered the fundamental unit of life, is still the prime concern of some modern biologists, and understanding how organisms maintain their internal environment remains an important part of biological research. To better understand the physiology of organisms, researchers study the tissues and organs of which organisms are composed. Key to that work is the ability to maintain and grow cells in vitro (“in glass”), otherwise known as tissue culture.
Some of the first attempts at tissue culture were made in the late 19th century. In 1885, German zoologist Wilhelm Roux maintained tissue from a chick embryo in a salt solution. The first major breakthrough in tissue culture, however, came in 1907 with the growth of frog nerve cell processes by American zoologist Ross G. Harrison. Several years later, researchers Alexis Carrel and Montrose Burrows refined Harrison’s methods and introduced the term tissue culture. Using stringent laboratory techniques, workers have been able to keep cells and tissues alive under culture conditions for long periods of time. Techniques for keeping organs alive in preparation for transplants stem from such experiments.
Advances in tissue culture have enabled countless discoveries in biology. For example, many experiments have been directed toward achieving a deeper understanding of biological differentiation, particularly of the factors that control differentiation. Crucial to those studies was the development in the late 20th century of tissue culture methods that allowed for the growth of mammalian embryonic stem cells—and ultimately human embryonic stem cells—on culture plates.
1196) Mach Number
Summary
Mach number, in fluid mechanics, is the ratio of the velocity of a fluid to the velocity of sound in that fluid, named after Ernst Mach (1838–1916), an Austrian physicist and philosopher. In the case of an object moving through a fluid, such as an aircraft in flight, the Mach number is equal to the velocity of the object relative to the fluid divided by the velocity of sound in that fluid. Mach numbers less than one indicate subsonic flow; those greater than one, supersonic flow. Fluid flow, in addition, is classified as compressible or incompressible on the basis of the Mach number. For example, gas flowing with a Mach number of less than three-tenths may be considered incompressible, or of constant density, an approximation that greatly simplifies the analysis of its behaviour. For Mach numbers greater than one, shock wave patterns develop on the moving body because of compression of the surrounding fluid. Streamlining alleviates shock wave effects.
Details
Mach number (M or Ma) is a dimensionless quantity in fluid dynamics representing the ratio of flow velocity past a boundary to the local speed of sound.
M = u/c where:
M is the local Mach number,
u is the local flow velocity with respect to the boundaries (either internal, such as an object immersed in the flow, or external, like a channel), and
c is the speed of sound in the medium, which in air varies with the square root of the thermodynamic temperature.
By definition, at Mach 1, the local flow velocity u is equal to the speed of sound. At Mach 0.65, u is 65% of the speed of sound (subsonic), and, at Mach 1.35, u is 35% faster than the speed of sound (supersonic). Pilots of high-altitude aerospace vehicles use flight Mach number to express a vehicle's true airspeed, but the flow field around a vehicle varies in three dimensions, with corresponding variations in local Mach number.
The local speed of sound, and hence the Mach number, depends on the temperature of the surrounding gas. The Mach number is primarily used to determine the approximation with which a flow can be treated as an incompressible flow. The medium can be a gas or a liquid. The boundary can be traveling in the medium, or it can be stationary while the medium flows along it, or they can both be moving, with different velocities: what matters is their relative velocity with respect to each other. The boundary can be the boundary of an object immersed in the medium, or of a channel such as a nozzle, diffuser or wind tunnel channeling the medium. As the Mach number is defined as the ratio of two speeds, it is a dimensionless number. If M < 0.2–0.3 and the flow is quasi-steady and isothermal, compressibility effects will be small and simplified incompressible flow equations can be used.
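To make the definition concrete, here is a minimal Python sketch of M = u/c together with the incompressibility rule of thumb just described; the speeds used are hypothetical examples, not values from the text.

```python
# Minimal sketch of the Mach number definition M = u/c and the M < 0.2-0.3 rule of thumb.
# The example speeds below are hypothetical.

def mach_number(u: float, c: float) -> float:
    """Mach number: local flow speed u divided by the local speed of sound c (same units)."""
    return u / c

def roughly_incompressible(m: float, threshold: float = 0.3) -> bool:
    """True if compressibility effects can usually be neglected, assuming quasi-steady,
    isothermal flow as stated above (threshold conventionally 0.2-0.3)."""
    return m < threshold

m = mach_number(85.0, 340.3)                     # e.g. a flow of 85 m/s in sea-level air
print(round(m, 2), roughly_incompressible(m))    # 0.25 True
```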
Etymology
The Mach number is named after the Austrian physicist and philosopher Ernst Mach; the designation was proposed by aeronautical engineer Jakob Ackeret in 1929. Because the Mach number is a dimensionless quantity rather than a unit of measure, the number comes after the name: the second Mach number is Mach 2, not 2 Mach (or Machs). This is somewhat reminiscent of the early modern ocean sounding unit mark (a synonym for fathom), which was also unit-first and may have influenced the use of the term Mach. In the decade preceding faster-than-sound human flight, aeronautical engineers referred to the speed of sound as Mach's number, never Mach 1.
Overview
In the atmosphere, the speed of sound depends only on how temperature varies with altitude and can be calculated from it, since the isolated effects of density and pressure on the speed of sound cancel each other. The speed of sound increases with height in two regions, the stratosphere and the thermosphere, because of heating effects in those regions.
Mach number is a measure of the compressibility characteristics of fluid flow: the fluid (air) behaves under the influence of compressibility in a similar manner at a given Mach number, regardless of other variables. As modeled in the International Standard Atmosphere, dry air at mean sea level, standard temperature of 15 °C (59 °F), the speed of sound is 340.3 meters per second (1,116.5 ft/s; 761.23 mph; 661.49 kn). The speed of sound is not a constant; in a gas, it increases proportionally to the square root of the absolute temperature, and since atmospheric temperature generally decreases with increasing altitude between sea level and 11,000 meters (36,089 ft), the speed of sound also decreases. For example, the standard atmosphere model lapses temperature to −56.5 °C (−69.7 °F) at 11,000 meters (36,089 ft) altitude, with a corresponding speed of sound (Mach 1) of 295.0 meters per second (967.8 ft/s; 659.9 mph; 573.4 kn), 86.7% of the sea level value.
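The sea-level and 11,000-metre figures above can be reproduced with the ideal-gas relation for the speed of sound. The sketch below assumes dry air with a ratio of specific heats of 1.4 and a specific gas constant of 287.05 J/(kg·K); these standard values are not stated in the text.

```python
import math

# Sketch of the speed of sound in dry air, modelled as an ideal gas:
# c = sqrt(gamma * R * T), so c grows with the square root of absolute temperature.
GAMMA = 1.4        # ratio of specific heats for air (assumed standard value)
R_AIR = 287.05     # specific gas constant for dry air, J/(kg*K) (assumed standard value)

def speed_of_sound(temperature_k: float) -> float:
    """Speed of sound in air, in m/s, at the given absolute temperature."""
    return math.sqrt(GAMMA * R_AIR * temperature_k)

print(round(speed_of_sound(288.15), 1))   # ~340.3 m/s at 15 C (ISA sea level)
print(round(speed_of_sound(216.65), 1))   # ~295.1 m/s at -56.5 C (11,000 m), about 86.7% of sea level
```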
Classification of Mach regimes
While the terms subsonic and supersonic, in the purest sense, refer to speeds below and above the local speed of sound respectively, aerodynamicists often use the same terms to talk about particular ranges of Mach values. This occurs because of the presence of a transonic regime around flight (free stream) M = 1 where approximations of the Navier-Stokes equations used for subsonic design no longer apply; the simplest explanation is that the flow around an airframe locally begins to exceed M = 1 even though the free stream Mach number is below this value.
Meanwhile, the supersonic regime is usually used to talk about the set of Mach numbers for which linearised theory may be used, where for example the (air) flow is not chemically reacting, and where heat-transfer between air and vehicle may be reasonably neglected in calculations.
Generally, NASA defines high hypersonic as any Mach number from 10 to 25, and re-entry speeds as anything greater than Mach 25. Aircraft operating in this regime include the Space Shuttle and various space planes in development.
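As a rough illustration of how these labels are applied, the sketch below maps a free-stream Mach number to a regime name. Only the 10-to-25 and greater-than-25 boundaries come from the NASA figures quoted above; the lower boundaries (0.8, 1.2, 5) are conventional round numbers assumed here for illustration.

```python
def mach_regime(m: float) -> str:
    """Classify a free-stream Mach number into a commonly used regime name.
    Boundaries below Mach 10 are assumed conventional values, not taken from the text."""
    if m < 0.8:
        return "subsonic"
    if m < 1.2:
        return "transonic"
    if m < 5.0:
        return "supersonic"
    if m < 10.0:
        return "hypersonic"
    if m <= 25.0:
        return "high hypersonic"
    return "re-entry"

print(mach_regime(0.3), mach_regime(2.0), mach_regime(27.0))   # subsonic supersonic re-entry
```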
1197) Quark
Summary
A quark is a type of elementary particle and a fundamental constituent of matter. Quarks combine to form composite particles called hadrons, the most stable of which are protons and neutrons, the components of atomic nuclei. All commonly observable matter is composed of up quarks, down quarks and electrons. Owing to a phenomenon known as color confinement, quarks are never found in isolation; they can be found only within hadrons, which include baryons (such as protons and neutrons) and mesons, or in quark–gluon plasmas. For this reason, much of what is known about quarks has been drawn from observations of hadrons.
Quarks have various intrinsic properties, including electric charge, mass, color charge, and spin. They are the only elementary particles in the Standard Model of particle physics to experience all four fundamental interactions, also known as fundamental forces (electromagnetism, gravitation, strong interaction, and weak interaction), as well as the only known particles whose electric charges are not integer multiples of the elementary charge.
There are six types, known as flavors, of quarks: up, down, charm, strange, top, and bottom. Up and down quarks have the lowest masses of all quarks. The heavier quarks rapidly change into up and down quarks through a process of particle decay: the transformation from a higher mass state to a lower mass state. Because of this, up and down quarks are generally stable and the most common in the universe, whereas strange, charm, bottom, and top quarks can only be produced in high energy collisions (such as those involving cosmic rays and in particle accelerators). For every quark flavor there is a corresponding type of antiparticle, known as an antiquark, that differs from the quark only in that some of its properties (such as the electric charge) have equal magnitude but opposite sign.
The quark model was independently proposed by physicists Murray Gell-Mann and George Zweig in 1964. Quarks were introduced as parts of an ordering scheme for hadrons, and there was little evidence for their physical existence until deep inelastic scattering experiments at the Stanford Linear Accelerator Center in 1968. Accelerator experiments have provided evidence for all six flavors. The top quark, first observed at Fermilab in 1995, was the last to be discovered.
Details
A quark is any member of a group of elementary subatomic particles that interact by means of the strong force and are believed to be among the fundamental constituents of matter. Quarks associate with one another via the strong force to make up protons and neutrons, in much the same way that the latter particles combine in various proportions to make up atomic nuclei. There are six types, or flavours, of quarks that differ from one another in their mass and charge characteristics. These six quark flavours can be grouped in three pairs: up and down, charm and strange, and top and bottom. Quarks appear to be true elementary particles; that is, they have no apparent structure and cannot be resolved into something smaller. In addition, however, quarks always seem to occur in combination with other quarks or with antiquarks, their antiparticles, to form all hadrons—the so-called strongly interacting particles that encompass both baryons and mesons.
Quark "flavours"
Throughout the 1960s theoretical physicists, trying to account for the ever-growing number of subatomic particles observed in experiments, considered the possibility that protons and neutrons were composed of smaller units of matter. In 1961 two physicists, Murray Gell-Mann of the United States and Yuval Neʾeman of Israel, proposed a particle classification scheme called the Eightfold Way, based on the mathematical symmetry group SU(3), which described strongly interacting particles in terms of building blocks. In 1964 Gell-Mann introduced the concept of quarks as a physical basis for the scheme, having adopted the fanciful term from a passage in James Joyce’s novel Finnegans Wake. (The American physicist George Zweig developed a similar theory independently that same year and called his fundamental particles “aces.”) Gell-Mann’s model provided a simple picture in which all mesons are shown as consisting of a quark and an antiquark and all baryons as composed of three quarks. It postulated the existence of three types of quarks, distinguished by unique “flavours.” These three quark types are now commonly designated as “up” (u), “down” (d), and “strange” (s). Each carries a fractional value of the electron charge (i.e., a charge less than that of the electron, e). The up quark (charge 2/3 e) and down quark (charge −1/3 e) make up protons and neutrons and are thus the ones observed in ordinary matter. Strange quarks (charge −1/3 e) occur as components of K mesons and various other extremely short-lived subatomic particles that were first observed in cosmic rays but that play no part in ordinary matter.
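These fractional charges can be checked against the particles they build: a proton (two up quarks and one down quark) should carry a charge of +1 e, and a neutron (one up and two downs) a charge of 0. A minimal Python sketch of that bookkeeping, using only the charges quoted above:

```python
from fractions import Fraction

# Quark electric charges in units of the elementary charge e, as quoted above.
QUARK_CHARGE = {
    "u": Fraction(2, 3),    # up
    "d": Fraction(-1, 3),   # down
    "s": Fraction(-1, 3),   # strange
}

def baryon_charge(quarks: str) -> Fraction:
    """Total charge (in units of e) of a baryon built from the given three quarks."""
    return sum(QUARK_CHARGE[q] for q in quarks)

print(baryon_charge("uud"))   # 1  -> the proton
print(baryon_charge("udd"))   # 0  -> the neutron
```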
Quark “colours”
The interpretation of quarks as actual physical entities initially posed two major problems. First, quarks had to have half-integer spin (intrinsic angular momentum) values for the model to work, but at the same time they seemed to violate the Pauli exclusion principle, which governs the behaviour of all particles (called fermions) having odd half-integer spin. In many of the baryon configurations constructed of quarks, sometimes two or even three identical quarks had to be set in the same quantum state—an arrangement prohibited by the exclusion principle. Second, quarks appeared to defy being freed from the particles they made up. Although the forces binding quarks were strong, it seemed improbable that they were powerful enough to withstand bombardment by high-energy particle beams from accelerators.
These problems were resolved by the introduction of the concept of colour, as formulated in quantum chromodynamics (QCD). In this theory of strong interactions, whose breakthrough ideas were published in 1973, colour has nothing to do with the colours of the everyday world but rather represents a property of quarks that is the source of the strong force. The colours red, green, and blue are ascribed to quarks, and their opposites, antired, antigreen, and antiblue, are ascribed to antiquarks. According to QCD, all combinations of quarks must contain mixtures of these imaginary colours that cancel out one another, with the resulting particle having no net colour. A baryon, for example, always consists of a combination of one red, one green, and one blue quark and so never violates the exclusion principle. The property of colour in the strong force plays a role analogous to that of electric charge in the electromagnetic force, and just as charge implies the exchange of photons between charged particles, so does colour involve the exchange of massless particles called gluons among quarks. Just as photons carry electromagnetic force, gluons transmit the forces that bind quarks together. Quarks change their colour as they emit and absorb gluons, and the exchange of gluons maintains proper quark colour distribution.
Binding forces and “massive” quarks
The binding forces carried by the gluons tend to be weak when quarks are close together. Within a proton (or other hadron), at distances of less than {10}^{-15} metre, quarks behave as though they were nearly free. This condition is called asymptotic freedom. When one begins to draw the quarks apart, however, as when attempting to knock them out of a proton, the effect of the force grows stronger. This is because, as explained by QCD, gluons have the ability to create other gluons as they move between quarks. Thus, if a quark starts to speed away from its companions after being struck by an accelerated particle, the gluons utilize energy that they draw from the quark’s motion to produce more gluons. The larger the number of gluons exchanged among quarks, the stronger the effective binding forces become. Supplying additional energy to extract the quark only results in the conversion of that energy into new quarks and antiquarks with which the first quark combines. This phenomenon is observed at high-energy particle accelerators in the production of “jets” of new particles that can be associated with a single quark.
The discovery in the 1970s of the “charm” (c) and “bottom” (b) quarks and their associated antiquarks, achieved through the creation of mesons, strongly suggested that quarks occur in pairs. This speculation led to efforts to find a sixth type of quark called “top” (t), after its proposed flavour. According to theory, the top quark carries a charge of 2/3 e; its partner, the bottom quark, has a charge of −1/3 e. In 1995 two independent groups of scientists at the Fermi National Accelerator Laboratory reported that they had found the top quark. Their results gave the top quark a mass of 173.8 ± 5.2 gigaelectron volts (GeV; {10}^{9} eV). (The next heaviest quark, the bottom, has a mass of about 4.2 GeV.) It has yet to be explained why the top quark is so much more massive than the other elementary particles, but its existence completes the Standard Model, the prevailing theoretical scheme of nature’s fundamental building blocks.
1198) Electric Charge
Summary
Electric charge is the physical property of matter that causes it to experience a force when placed in an electromagnetic field. Electric charge can be positive or negative (commonly carried by protons and electrons respectively). Like charges repel each other and unlike charges attract each other. An object with an absence of net charge is referred to as neutral. Early knowledge of how charged substances interact is now called classical electrodynamics, and is still accurate for problems that do not require consideration of quantum effects.
Electric charge is a conserved property; the net charge of an isolated system, the amount of positive charge minus the amount of negative charge, cannot change. Electric charge is carried by subatomic particles. In ordinary matter, negative charge is carried by electrons, and positive charge is carried by the protons in the nuclei of atoms. If there are more electrons than protons in a piece of matter, it will have a negative charge, if there are fewer it will have a positive charge, and if there are equal numbers it will be neutral. Charge is quantized; it comes in integer multiples of individual small units called the elementary charge, e, about 1.602 × {10}^{-19} coulombs, which is the smallest charge which can exist freely (particles called quarks have smaller charges, multiples of 1/3 e, but they are only found in combination, and always combine to form particles with integer charge). The proton has a charge of +e, and the electron has a charge of -e.
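Charge quantization can be made concrete with a short sketch: an object's net charge is its proton-electron imbalance multiplied by e. This is a minimal illustration; the particle counts used below are hypothetical.

```python
# Sketch of charge quantization: net charge = (number of protons - number of electrons) * e.
# The particle counts are hypothetical examples.
E = 1.602e-19   # elementary charge in coulombs (approximate value quoted above)

def net_charge(protons: int, electrons: int) -> float:
    """Net charge in coulombs of a piece of matter with the given particle counts."""
    return (protons - electrons) * E

print(net_charge(10, 10))   # 0.0             -> equal numbers: electrically neutral
print(net_charge(10, 11))   # about -1.602e-19 C -> one excess electron: negative charge
```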
Electric charges produce electric fields. A moving charge also produces a magnetic field. The interaction of electric charges with an electromagnetic field (combination of electric and magnetic fields) is the source of the electromagnetic (or Lorentz) force, which is one of the four fundamental forces in physics. The study of photon-mediated interactions among charged particles is called quantum electrodynamics.
The SI derived unit of electric charge is the coulomb (C) named after French physicist Charles-Augustin de Coulomb. In electrical engineering it is also common to use the ampere-hour (Ah). In physics and chemistry it is common to use the elementary charge (e) as a unit. Chemistry also uses the Faraday constant as the charge on a mole of electrons. The lowercase symbol q often denotes charge.
Details
Electric charge is the basic property of matter carried by some elementary particles that governs how the particles are affected by an electric or magnetic field. Electric charge, which can be positive or negative, occurs in discrete natural units and is neither created nor destroyed.
Electric charges are of two general types: positive and negative. Two objects that have an excess of one type of charge exert a force of repulsion on each other when relatively close together. Two objects that have excess opposite charges, one positively charged and the other negatively charged, attract each other when relatively near.
Many fundamental, or subatomic, particles of matter have the property of electric charge. For example, electrons have negative charge and protons have positive charge, but neutrons have zero charge. The negative charge of each electron is found by experiment to have the same magnitude, which is also equal to that of the positive charge of each proton. Charge thus exists in natural units equal to the charge of an electron or a proton, a fundamental physical constant. A direct and convincing measurement of an electron’s charge, as a natural unit of electric charge, was first made (1909) in the Millikan oil-drop experiment. Atoms of matter are electrically neutral because their nuclei contain the same number of protons as there are electrons surrounding the nuclei. Electric current and charged objects involve the separation of some of the negative charge of neutral atoms. Current in metal wires consists of a drift of electrons of which one or two from each atom are more loosely bound than the rest. Some of the atoms in the surface layer of a glass rod positively charged by rubbing it with a silk cloth have lost electrons, leaving a net positive charge because of the unneutralized protons of their nuclei. A negatively charged object has an excess of electrons on its surface.
Electric charge is conserved: in any isolated system, in any chemical or nuclear reaction, the net electric charge is constant. The algebraic sum of the fundamental charges remains the same.
The unit of electric charge in the metre–kilogram–second and SI systems is the coulomb, defined as the amount of electric charge that flows through a cross section of a conductor in an electric circuit during each second when the current has a value of one ampere. One coulomb consists of 6.24 × {10}^{18} natural units of electric charge, such as individual electrons or protons. Correspondingly, the electron itself has a negative charge of 1.602176634 × {10}^{-19} coulomb.
An electrochemical unit of charge, the faraday, is useful in describing electrolysis reactions, such as in metallic electroplating. One faraday equals 96485.332123 coulombs, the charge of a mole of electrons (that is, an Avogadro’s number, 6.02214076 × {10}^{23}, of electrons).
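The relationships among these units can be verified directly; the sketch below simply multiplies and inverts the SI values quoted above.

```python
# Sketch relating the charge units quoted above.
ELEMENTARY_CHARGE = 1.602176634e-19   # coulombs per electron or proton (exact SI value)
AVOGADRO = 6.02214076e23              # electrons per mole (exact SI value)

charges_per_coulomb = 1 / ELEMENTARY_CHARGE    # natural units of charge in one coulomb
faraday = AVOGADRO * ELEMENTARY_CHARGE         # charge of one mole of electrons

print(f"{charges_per_coulomb:.2e}")   # 6.24e+18, the figure quoted above
print(f"{faraday:.3f}")               # ~96485.332 coulombs, one faraday
```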
Gist
Electric charge is a basic property of matter carried by some elementary particles. Electric charge, which can be positive or negative, occurs in discrete natural units and is neither created nor destroyed. Most electric charge is carried by the electrons and protons within an atom. Electrons are said to carry negative charge, while protons are said to carry positive charge, although these labels are essentially arbitrary conventions. Charged particles such as protons and electrons create electric fields that radiate outward in all directions and exert a force, the Coulomb force, on other charges.
In physics, charge, also known as electric charge, electrical charge, or electrostatic charge and symbolized q, is a characteristic of a unit of matter that expresses the extent to which it has more or fewer electrons than protons. In atoms, the electron carries a negative elementary or unit charge; the proton carries a positive charge. The two types of charge are equal and opposite. The electric charge is a fundamental conserved property of some subatomic particles, which determines their electromagnetic interaction. Electrically charged matter is influenced by, and produces, electromagnetic fields. The interaction between a moving charge and an electromagnetic field is the source of the electromagnetic force, which is one of the four fundamental forces. The electric charge of a macroscopic object is the sum of the electric charges of the particles that make it up. This charge is often small, because matter is made of atoms, and atoms typically have equal numbers of protons and electrons, in which case their charges cancel out, yielding a net charge of zero, thus making the atom neutral.
1199) Oat
The oat (Avena sativa), sometimes called the common oat, is a species of cereal grain grown for its seed, which is known by the same name (usually in the plural, unlike other cereals and pseudocereals). While oats are suitable for human consumption as oatmeal and rolled oats, one of the most common uses is as livestock feed.
The oat (Avena sativa) is a domesticated cereal grass (family Poaceae) grown primarily for its edible starchy grains. Oats are widely cultivated in the temperate regions of the world and are second only to rye in their ability to survive in poor soils. Although oats are used chiefly as livestock feed, some are processed for human consumption, especially as breakfast foods. The plants provide good hay and, under proper conditions, furnish excellent grazing and make good silage (stalk feed preserved by fermentation).
Oats are annual plants and often reach 1.5 metres (5 feet) in height. The long leaves have rounded sheaths at the base and a membranous ligule (small appendage where the leaf joins the stem). The flowering and fruiting structure, or inflorescence, of the plant is made up of numerous branches bearing florets that produce the caryopsis, or one-seeded fruit. Common oats are grown in cool temperate regions; red oats, more heat tolerant, are grown mainly in warmer climates. With sufficient moisture, the crop will grow on soils that are sandy, low in fertility, or highly acidic. The plants are relatively free from diseases and pests, though they are susceptible to rust and anthracnose on their stems and leaves.
Rolled oats, flattened kernels with the hulls removed, are used mostly for oatmeal; other breakfast foods are made from the groats, which are unflattened kernels with husks removed. Oat flour is not generally considered suitable for bread but is used to make cookies and puddings. The grains are high in carbohydrates and contain about 13 percent protein and 7.5 percent fat. They are a source of calcium, iron, vitamin B1, and niacin.
As a livestock feed, the grain is used both in pure form and in mixtures, though the demand for oats has been somewhat reduced by competition from hybrid corn (maize) and alfalfa. The straw is used for animal feed and bedding. In industry oat hulls are a source of furfural, a chemical used in various types of solvents.
Oat (Avena sativa) is a type of cereal grain. People often eat the plant's whole seeds (oats), outer seed layers (oat bran), and leaves and stems (oat straw).
Oats might reduce cholesterol and blood sugar levels, and help control appetite by making you feel full. Oat bran might work by keeping the gut from absorbing substances that can lead to heart disease, high cholesterol, and diabetes. Oats seem to reduce swelling when applied to the skin.
Oat bran and whole oats are used for heart disease, high cholesterol, and diabetes. They are also used for high blood pressure, cancer, dry skin, and many other conditions, but there is no good scientific evidence to support these other uses.
Uses & Effectiveness
Likely Effective for:
* Heart disease. Oats contain soluble fiber. Eating a diet high in fiber, such as oats providing about 3.6 grams of soluble fiber daily, reduces the risk for heart disease.
* High cholesterol. Eating oats, oat bran, and other soluble fibers can somewhat reduce total and low-density lipoprotein (LDL or "bad") cholesterol when consumed as part of a diet low in saturated fat.
Possibly Effective for:
* Diabetes. Eating a diet rich in whole grains, including oats and oat bran, might help prevent diabetes. It might also help improve blood sugar control and lower cholesterol levels in people with diabetes.
* Stomach cancer. Eating high-fiber foods, such as oats and oat bran, seems to lower the risk of stomach cancer.
Possibly Ineffective for:
* Colon cancer, rectal cancer. Regularly eating oat bran or oats doesn't seem to lower the risk of colon or rectal cancer.
* High blood pressure. Eating oats as oatmeal or oat cereal doesn't seem to reduce blood pressure.
There is interest in using oats for a number of other purposes, but there isn't enough reliable information to say whether it might be helpful.
Side Effects
* When taken by mouth: Oat bran and whole oats are likely safe for most people when eaten in foods. Oats can cause gas and bloating. To minimize side effects, start with a low dose and increase slowly to the desired amount. Your body will get used to oat bran and the side effects will likely go away.
* When applied to the skin: Lotion containing oat extract is possibly safe to use on the skin. Putting oat-containing products on the skin can cause some people to have a rash.
Special Precautions and Warnings
* Pregnancy and breast-feeding: Oat bran and whole oats are likely safe when eaten in foods during pregnancy and breast-feeding.
* Celiac disease: People with celiac disease must not eat gluten. Many people with celiac disease are told to avoid eating oats because they might be contaminated with wheat, rye, or barley, which contain gluten. But in people who haven't had any symptoms for at least 6 months, eating moderate amounts of pure, non-contaminated oats seems to be safe.
* Disorders of the digestive tract, including the esophagus, stomach, and intestines: Avoid eating oat products. Digestive problems that slow the passage of food through the gut could allow oats to block the intestine.
Interactions
Moderate Interaction (be cautious with this combination)
* Insulin interacts with OATS: Oats might reduce the amount of insulin needed for blood sugar control. Taking oats along with insulin might cause your blood sugar to drop too low. Monitor your blood sugar closely. The dose of insulin might need to be changed.
* Medications for diabetes (antidiabetes drugs) interact with OATS: Oats might lower blood sugar levels. Taking oats along with diabetes medications might cause blood sugar to drop too low. Monitor your blood sugar closely.
Dosing
Oats are commonly eaten in foods. For health benefits, adults should eat whole oats providing at least 3.6 grams of soluble fiber daily. Speak with a healthcare provider to find out what dose might be best for a specific condition.
1200) DNA
Summary
Deoxyribonucleic acid (DNA) is a molecule composed of two polynucleotide chains that coil around each other to form a double helix carrying genetic instructions for the development, functioning, growth and reproduction of all known organisms and many viruses. DNA and ribonucleic acid (RNA) are nucleic acids. Alongside proteins, lipids and complex carbohydrates (polysaccharides), nucleic acids are one of the four major types of macromolecules that are essential for all known forms of life.
The two DNA strands are known as polynucleotides as they are composed of simpler monomeric units called nucleotides. Each nucleotide is composed of one of four nitrogen-containing nucleobases (cytosine [C], guanine [G], adenine [A] or thymine [T]), a sugar called deoxyribose, and a phosphate group. The nucleotides are joined to one another in a chain by covalent bonds (known as phosphodiester linkages) between the sugar of one nucleotide and the phosphate of the next, resulting in an alternating sugar-phosphate backbone. The nitrogenous bases of the two separate polynucleotide strands are bound together, according to base pairing rules (A with T and C with G), with hydrogen bonds to make double-stranded DNA. The complementary nitrogenous bases are divided into two groups, pyrimidines and purines. In DNA, the pyrimidines are thymine and cytosine; the purines are adenine and guanine.
Both strands of double-stranded DNA store the same biological information. This information is replicated when the two strands separate. A large part of DNA (more than 98% for humans) is non-coding, meaning that these sections do not serve as patterns for protein sequences. The two strands of DNA run in opposite directions to each other and are thus antiparallel. Attached to each sugar is one of four types of nucleobases (or bases). It is the sequence of these four nucleobases along the backbone that encodes genetic information. RNA strands are created using DNA strands as a template in a process called transcription, where DNA bases are exchanged for their corresponding bases except in the case of thymine (T), for which RNA substitutes uracil (U). Under the genetic code, these RNA strands specify the sequence of amino acids within proteins in a process called translation.
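The base-pairing and transcription rules just described are easy to express in code. The following minimal Python sketch is illustrative only: the sequence and function names are invented, not taken from the text.

```python
# A minimal sketch of the base-pairing (A-T, C-G) and transcription (U replaces T) rules above.
# The sequence used here is hypothetical, purely for illustration.

DNA_COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}   # DNA base pairing
RNA_COMPLEMENT = {"A": "U", "T": "A", "C": "G", "G": "C"}   # transcription: U in place of T

def complement_strand(dna: str) -> str:
    """Return the base-paired partner of a DNA strand (strand direction ignored for simplicity)."""
    return "".join(DNA_COMPLEMENT[b] for b in dna)

def transcribe(template_strand: str) -> str:
    """Return the mRNA transcribed from a DNA template strand."""
    return "".join(RNA_COMPLEMENT[b] for b in template_strand)

template = "TACGGATTC"                  # hypothetical template strand
print(complement_strand(template))      # ATGCCTAAG  (the coding strand)
print(transcribe(template))             # AUGCCUAAG  (the mRNA: coding strand with U for T)
```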
Within eukaryotic cells, DNA is organized into long structures called chromosomes. Before typical cell division, these chromosomes are duplicated in the process of DNA replication, providing a complete set of chromosomes for each daughter cell. Eukaryotic organisms (animals, plants, fungi and protists) store most of their DNA inside the cell nucleus as nuclear DNA, and some in the mitochondria as mitochondrial DNA or in chloroplasts as chloroplast DNA. In contrast, prokaryotes (bacteria and archaea) store their DNA only in the cytoplasm, in circular chromosomes. Within eukaryotic chromosomes, chromatin proteins, such as histones, compact and organize DNA. These compacting structures guide the interactions between DNA and other proteins, helping control which parts of the DNA are transcribed.
Details
DNA, abbreviation of deoxyribonucleic acid, is an organic chemical of complex molecular structure that is found in all prokaryotic and eukaryotic cells and in many viruses. DNA codes genetic information for the transmission of inherited traits.
A brief treatment of DNA follows.
The chemical DNA was first discovered in 1869, but its role in genetic inheritance was not demonstrated until 1943. In 1953 James Watson and Francis Crick, aided by the work of biophysicists Rosalind Franklin and Maurice Wilkins, determined that the structure of DNA is a double-helix polymer, a spiral consisting of two DNA strands wound around each other. The breakthrough led to significant advances in scientists’ understanding of DNA replication and hereditary control of cellular activities.
Each strand of a DNA molecule is composed of a long chain of monomer nucleotides. The nucleotides of DNA consist of a deoxyribose sugar molecule to which is attached a phosphate group and one of four nitrogenous bases: two purines (adenine and guanine) and two pyrimidines (cytosine and thymine). The nucleotides are joined together by covalent bonds between the phosphate of one nucleotide and the sugar of the next, forming a phosphate-sugar backbone from which the nitrogenous bases protrude. One strand is held to another by hydrogen bonds between the bases; the sequencing of this bonding is specific—i.e., adenine bonds only with thymine, and cytosine only with guanine.
The configuration of the DNA molecule is highly stable, allowing it to act as a template for the replication of new DNA molecules, as well as for the production (transcription) of the related RNA (ribonucleic acid) molecule. A segment of DNA that codes for the cell’s synthesis of a specific protein is called a gene.
DNA replicates by separating into two single strands, each of which serves as a template for a new strand. The new strands are copied by the same principle of hydrogen-bond pairing between bases that exists in the double helix. Two new double-stranded molecules of DNA are produced, each containing one of the original strands and one new strand. This “semiconservative” replication is the key to the stable inheritance of genetic traits.
Within a cell, DNA is organized into dense protein-DNA complexes called chromosomes. In eukaryotes, the chromosomes are located in the nucleus, although DNA also is found in mitochondria and chloroplasts. In prokaryotes, which do not have a membrane-bound nucleus, the DNA is found as a single circular chromosome in the cytoplasm. Some prokaryotes, such as bacteria, and a few eukaryotes have extrachromosomal DNA known as plasmids, which are autonomous, self-replicating genetic material. Plasmids have been used extensively in recombinant DNA technology to study gene expression.
The genetic material of viruses may be single- or double-stranded DNA or RNA. Retroviruses carry their genetic material as single-stranded RNA and produce the enzyme reverse transcriptase, which can generate DNA from the RNA strand. Four-stranded DNA complexes known as G-quadruplexes have been observed in guanine-rich areas of the human genome.
What is DNA?
DNA, or deoxyribonucleic acid, is the hereditary material in humans and almost all other organisms. Nearly every cell in a person’s body has the same DNA. Most DNA is located in the cell nucleus (where it is called nuclear DNA), but a small amount of DNA can also be found in the mitochondria (where it is called mitochondrial DNA or mtDNA). Mitochondria are structures within cells that convert the energy from food into a form that cells can use.
The information in DNA is stored as a code made up of four chemical bases: adenine (A), guanine (G), cytosine (C), and thymine (T). Human DNA consists of about 3 billion bases, and more than 99 percent of those bases are the same in all people. The order, or sequence, of these bases determines the information available for building and maintaining an organism, similar to the way in which letters of the alphabet appear in a certain order to form words and sentences.
DNA bases pair up with each other, A with T and C with G, to form units called base pairs. Each base is also attached to a sugar molecule and a phosphate molecule. Together, a base, sugar, and phosphate are called a nucleotide. Nucleotides are arranged in two long strands that form a spiral called a double helix. The structure of the double helix is somewhat like a ladder, with the base pairs forming the ladder’s rungs and the sugar and phosphate molecules forming the vertical sidepieces of the ladder.
An important property of DNA is that it can replicate, or make copies of itself. Each strand of DNA in the double helix can serve as a pattern for duplicating the sequence of bases. This is critical when cells divide because each new cell needs to have an exact copy of the DNA present in the old cell.
1201) RNA
Summary
Ribonucleic acid (RNA) is a molecule similar to DNA. Unlike DNA, RNA is single-stranded. An RNA strand has a backbone made of alternating sugar (ribose) and phosphate groups. ... Different types of RNA exist in the cell: messenger RNA (mRNA), ribosomal RNA (rRNA), and transfer RNA (tRNA).
Ribonucleic acid (RNA) is a polymeric molecule essential in various biological roles in coding, decoding, regulation and expression of genes. RNA and deoxyribonucleic acid (DNA) are nucleic acids. Along with lipids, proteins, and carbohydrates, nucleic acids constitute one of the four major macromolecules essential for all known forms of life. Like DNA, RNA is assembled as a chain of nucleotides, but unlike DNA, RNA is found in nature as a single strand folded onto itself, rather than a paired double strand. Cellular organisms use messenger RNA (mRNA) to convey genetic information (using the nitrogenous bases of guanine, uracil, adenine, and cytosine, denoted by the letters G, U, A, and C) that directs synthesis of specific proteins. Many viruses encode their genetic information using an RNA genome.
Some RNA molecules play an active role within cells by catalyzing biological reactions, controlling gene expression, or sensing and communicating responses to cellular signals. One of these active processes is protein synthesis, a universal function in which RNA molecules direct the synthesis of proteins on ribosomes. This process uses transfer RNA (tRNA) molecules to deliver amino acids to the ribosome, where ribosomal RNA (rRNA) then links amino acids together to form coded proteins.
Details
RNA, abbreviation of ribonucleic acid, is a complex compound of high molecular weight that functions in cellular protein synthesis and replaces DNA (deoxyribonucleic acid) as a carrier of genetic codes in some viruses. RNA consists of ribose nucleotides (nitrogenous bases appended to a ribose sugar) attached by phosphodiester bonds, forming strands of varying lengths. The nitrogenous bases in RNA are adenine, guanine, cytosine, and uracil, which replaces thymine in DNA.
The ribose sugar of RNA is a cyclical structure consisting of five carbons and one oxygen. The presence of a chemically reactive hydroxyl (−OH) group attached to the second carbon group in the ribose sugar molecule makes RNA prone to hydrolysis. This chemical lability of RNA, compared with DNA, which does not have a reactive −OH group in the same position on the sugar moiety (deoxyribose), is thought to be one reason why DNA evolved to be the preferred carrier of genetic information in most organisms. The structure of an RNA molecule, a transfer RNA, was first described by R.W. Holley in 1965.
RNA structure
RNA typically is a single-stranded biopolymer. However, the presence of self-complementary sequences in the RNA strand leads to intrachain base-pairing and folding of the ribonucleotide chain into complex structural forms consisting of bulges and helices. The three-dimensional structure of RNA is critical to its stability and function, allowing the ribose sugar and the nitrogenous bases to be modified in numerous different ways by cellular enzymes that attach chemical groups (e.g., methyl groups) to the chain. Such modifications enable the formation of chemical bonds between distant regions in the RNA strand, leading to complex contortions in the RNA chain, which further stabilizes the RNA structure. Molecules lacking such structural modifications and stabilization may be readily destroyed. As an example, an initiator transfer RNA (tRNA) molecule that lacks a methyl group modification at position 58 of the tRNA chain is rendered unstable and hence nonfunctional; the nonfunctional chain is destroyed by cellular tRNA quality control mechanisms.
RNAs can also form complexes with proteins, producing assemblies known as ribonucleoproteins (RNPs). The RNA portion of at least one cellular RNP has been shown to act as a biological catalyst, a function previously ascribed only to proteins.
Types and functions of RNA
Of the many types of RNA, the three most well-known and most commonly studied are messenger RNA (mRNA), transfer RNA (tRNA), and ribosomal RNA (rRNA), which are present in all organisms. These and other types of RNAs primarily carry out biochemical reactions, similar to enzymes. Some, however, also have complex regulatory functions in cells. Owing to their involvement in many regulatory processes, to their abundance, and to their diverse functions, RNAs play important roles in both normal cellular processes and diseases.
In protein synthesis, mRNA carries genetic codes from the DNA in the nucleus to ribosomes, the sites of protein translation in the cytoplasm. Ribosomes are composed of rRNA and protein. The ribosomal subunits are assembled in the nucleolus; once fully assembled, they move to the cytoplasm, where, as key regulators of translation, they “read” the code carried by mRNA. A sequence of three nitrogenous bases in mRNA specifies incorporation of a specific amino acid in the sequence that makes up the protein. Molecules of tRNA (sometimes also called soluble, or activator, RNA), which contain fewer than 100 nucleotides, bring the specified amino acids to the ribosomes, where they are linked to form proteins.
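The “three bases per amino acid” rule can be illustrated with a toy translation routine. The sketch below includes only a handful of entries from the standard genetic code, and the mRNA string is hypothetical.

```python
# A minimal sketch of reading codons (three-base groups) from an mRNA, as described above.
# Only a few entries of the standard genetic code are shown; the mRNA string is hypothetical.

CODON_TABLE = {
    "AUG": "Met",   # methionine; also the usual start codon
    "UUU": "Phe",   # phenylalanine
    "GCA": "Ala",   # alanine
    "UAA": "STOP",  # a stop codon
}

def translate(mrna: str) -> list[str]:
    """Read the mRNA three bases at a time, look up each codon, and stop at a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "?")   # '?' marks codons not in this toy table
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

print(translate("AUGUUUGCAUAA"))   # ['Met', 'Phe', 'Ala']
```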
In addition to mRNA, tRNA, and rRNA, RNAs can be broadly divided into coding (cRNA) and noncoding RNA (ncRNA). There are two types of ncRNAs, housekeeping ncRNAs (tRNA and rRNA) and regulatory ncRNAs, which are further classified according to their size. Long ncRNAs (lncRNA) have at least 200 nucleotides, while small ncRNAs have fewer than 200 nucleotides. Small ncRNAs are subdivided into micro RNA (miRNA), small nucleolar RNA (snoRNA), small nuclear RNA (snRNA), small-interfering RNA (siRNA), and PIWI-interacting RNA (piRNA).
The miRNAs are of particular importance. They are about 22 nucleotides long and function in gene regulation in most eukaryotes. They can inhibit (silence) gene expression by binding to target mRNA and inhibiting translation, thereby preventing functional proteins from being produced. Many miRNAs play significant roles in cancer and other diseases. For example, tumour suppressor and oncogenic (cancer-initiating) miRNAs can regulate unique target genes, leading to tumorigenesis and tumour progression.
Also of functional significance are the piRNAs, which are about 26 to 31 nucleotides long and exist in most animals. They regulate the expression of transposons (jumping genes) by keeping the genes from being transcribed in the germ cells (sperm and eggs). Most piRNA are complementary to different transposons and can specifically target those transposons.
Circular RNA (circRNA) differs from other RNA types in that its 5′ and 3′ ends are bonded together, creating a loop. The circRNAs are generated from many protein-encoding genes, and some can serve as templates for protein synthesis, similar to mRNA. They can also bind miRNA, acting as “sponges” that prevent miRNA molecules from binding to their targets. In addition, circRNAs play an important role in regulating the transcription and alternative splicing of the genes from which they were derived.
RNA in disease
Important connections have been discovered between RNA and human disease. For example, as described previously, some miRNAs are capable of regulating cancer-associated genes in ways that facilitate tumour development. In addition, the dysregulation of miRNA metabolism has been linked to various neurodegenerative diseases, including Alzheimer disease. In the case of other RNA types, tRNAs can bind to specialized proteins known as caspases, which are involved in apoptosis (programmed cell death). By binding to caspase proteins, tRNAs inhibit apoptosis; the ability of cells to escape programmed death signaling is a hallmark of cancer. Noncoding RNAs known as tRNA-derived fragments (tRFs) are also suspected to play a role in cancer. The emergence of techniques such as RNA sequencing has led to the identification of novel classes of tumour-specific RNA transcripts, such as MALAT1 (metastasis associated lung adenocarcinoma transcript 1), increased levels of which have been found in various cancerous tissues and are associated with the proliferation and metastasis (spread) of tumour cells.
A class of RNAs containing repeat sequences is known to sequester RNA-binding proteins (RBPs), resulting in the formation of foci or aggregates in neural tissues. These aggregates play a role in the development of neurological diseases such as amyotrophic lateral sclerosis (ALS) and myotonic dystrophy. The loss of function, dysregulation, and mutation of various RBPs has been implicated in a host of human diseases.
The discovery of additional links between RNA and disease is expected. Increased understanding of RNA and its functions, combined with the continued development of sequencing technologies and efforts to screen RNA and RBPs as therapeutic targets, are likely to facilitate such discoveries.