604) Carpentry
Carpentry, the art and trade of cutting, working, and joining timber. The term includes both structural timberwork in framing and items such as doors, windows, and staircases.
In the past, when buildings were often wholly constructed of timber framing, the carpenter played a considerable part in building construction; along with the mason he was the principal building worker. The scope of the carpenter’s work has altered, however, with the passage of time. Increasing use of concrete and steel construction, especially for floors and roofs, means that the carpenter plays a smaller part in making the framework of buildings, except for houses and small structures. On the other hand, in the construction of temporary formwork and shuttering for concrete building, the carpenter’s work has greatly increased.
Because wood is widely distributed throughout the world, it has been used as a building material for centuries; many of the tools and techniques of carpentry, perfected after the Middle Ages, have changed little since that time. On the other hand, world supplies of wood are shrinking, and the increasing cost of obtaining, finishing, and distributing timber has brought continuing revision in traditional practices. Further, because much traditional construction wastes wood, engineering calculation has supplanted empirical and rule-of-thumb methods. The development of laminated timbers such as plywood, and the practice of prefabrication have simplified and lowered the cost of carpentry.
The framing of houses generally proceeds in one of two ways: in platform (or Western) framing floors are framed separately, story by story; in balloon framing the vertical members (studs) extend the full height of the building from foundation plate to rafter plate. The timber used in the framing is put to various uses. The studs usually measure 1.5 × 3.5 inches (4 × 9 cm; known as a “2 × 4”) and are spaced at regular intervals of 16 inches (41 cm). They are anchored to a horizontal foundation plate at the bottom and a plate at the top, both 2 × 4 timber. Frequently stiffening braces are built between studs at midpoint and are known as noggings. Window and door openings are boxed in with horizontal 2 × 4 lumber called headers at the top and sills at the bottom.
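The regular 16-inch spacing makes material estimates straightforward. As a rough illustration (my own arithmetic, not part of the original article), here is a minimal Python sketch for counting the studs in a straight wall run:

```python
import math

def studs_for_wall(wall_length_in: float, spacing_in: float = 16.0) -> int:
    """Estimate studs for a straight wall run: one stud per spacing
    interval plus a closing stud at the far end. Ignores extras for
    corners, window/door openings, and doubled end studs."""
    return math.floor(wall_length_in / spacing_in) + 1

# A 12-foot (144-inch) wall at 16 inches on centre:
print(studs_for_wall(144))  # -> 10
```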
Floors are framed by anchoring 1.5 × 11-inch (4 × 28-centimetre) lumber called joists on the foundation for the first floor and on the plates of upper floors. They are set on edge and placed in parallel rows across the width of the house. Crisscross bracings that help them stay parallel are called herringbone struts. In later stages, a subfloor of planks or plywood is laid across the joists, and on top of this is placed the finished floor—narrower hardwood planks that fit together with tongue-and-groove edges or any variety of covering.
The traditional pitched roof is made from inclined studs or rafters that meet at the peak. For wide roof spans extra support is provided by adding a horizontal cross brace, making the rafters look like the letter A, with a V-shaped diagonal support on the cross bar. Such supports are called trusses. The principal timbers used for framing and most carpentry in general are in the conifer, or softwood, group and include various species of pine, fir, spruce, and cedar. The most commonly used timber species in the United States are Canadian spruces and Douglas fir, British Columbian pine, and western red cedar. Cedar is useful for roofing and siding shingles as well as framing, since it has a natural resistance to weathering and needs no special preservation treatment.
A carpenter’s work may also extend to interior jobs, requiring some of the skills of a joiner. These jobs include making door frames, cabinets, countertops, and assorted molding and trim. Much of the skill involves joining wood inconspicuously for the sake of appearance, as opposed to the joining of unseen structural pieces.
The standard hand tools used by a carpenter are hammers, pliers, screwdrivers, and awls for driving and extracting nails, setting screws, and punching guide holes, respectively. Planes are hand-held blades used to reduce and smooth wood surfaces, and chisels are blades that can be hit with a mallet to cut out forms in wood. The crosscut saw cuts across wood grain, and the rip saw cuts with the grain. Tenon and dovetail saws are used to make precise cuts for the indicated joints, and a keyhole saw cuts out holes. The level shows whether a surface is perfectly horizontal or vertical, and the try square tests the right angle between adjacent surfaces. These instruments are complemented by the use of power tools.
605) Industry
Industry, a group of productive enterprises or organizations that produce or supply goods, services, or sources of income. In economics, industries are customarily classified as primary, secondary, and tertiary; secondary industries are further classified as heavy and light.
Primary Industry
This sector of a nation’s economy includes agriculture, forestry, fishing, mining, quarrying, and the extraction of minerals. It may be divided into two categories: genetic industry, including the production of raw materials that may be increased by human intervention in the production process; and extractive industry, including the production of exhaustible raw materials that cannot be augmented through cultivation.
The genetic industries include agriculture, forestry, and livestock management and fishing—all of which are subject to scientific and technological improvement of renewable resources. The extractive industries include the mining of mineral ores, the quarrying of stone, and the extraction of mineral fuels.
Secondary Industry
This sector, also called manufacturing industry, (1) takes the raw materials supplied by primary industries and processes them into consumer goods, or (2) further processes goods that other secondary industries have transformed into products, or (3) builds capital goods used to manufacture consumer and nonconsumer goods. Secondary industry also includes energy-producing industries (e.g., hydroelectric industries) as well as the construction industry.
Secondary industry may be divided into heavy, or large-scale, and light, or small-scale, industry. Large-scale industry generally requires heavy capital investment in plants and machinery, serves a large and diverse market including other manufacturing industries, has a complex industrial organization and frequently a skilled specialized labour force, and generates a large volume of output. Examples would include petroleum refining, steel and iron manufacturing, motor vehicle and heavy machinery manufacture, cement production, nonferrous metal refining, meat-packing, and hydroelectric power generation.
Light, or small-scale, industry may be characterized by the nondurability of manufactured products and a smaller capital investment in plants and equipment, and it may involve nonstandard products, such as customized or craft work. The labour force may be either low skilled, as in textile work and clothing manufacture, food processing, and plastics manufacture, or highly skilled, as in electronics and computer hardware manufacture, precision instrument manufacture, gemstone cutting, and craft work.
Tertiary Industry
This sector, also called service industry, includes industries that, while producing no tangible goods, provide services or intangible gains or generate wealth. In free market and mixed economies this sector generally has a mix of private and government enterprise.
The industries of this sector include banking, finance, insurance, investment, and real estate services; wholesale, retail, and resale trade; transportation, information, and communications services; professional, consulting, legal, and personal services; tourism, hotels, restaurants, and entertainment; repair and maintenance services; education and teaching; and health, social welfare, administrative, police, security, and defense services.
606) Emu
Emu, flightless bird of Australia and second largest living bird: the emu is more than 1.5 metres (5 feet) tall and may weigh more than 45 kg (100 pounds). The emu is the sole living member of the family Dromaiidae (or Dromiceiidae) of the order Casuariiformes, which also includes the cassowaries.
The common emu, Dromaius (or Dromiceius) novaehollandiae, the only survivor of several forms exterminated by settlers, is stout-bodied and long-legged, like its relative the cassowary. Both males and females are brownish, with dark gray head and neck. Emus can dash away at nearly 50 km (30 miles) per hour; if cornered they kick with their big three-toed feet. Emus mate for life; the male incubates from 7 to 10 dark green eggs, 13 cm (5 inches) long, in a ground nest for about 60 days. The striped young soon run with the adults. In small flocks emus forage for fruits and insects but may also damage crops. The peculiar structure of the trachea of the emu is correlated with the loud booming note of the bird during the breeding season. Three subspecies are recognized, inhabiting northern, southeastern, and southwestern Australia; a fourth, now extinct, lived on Tasmania.
607) Nuclear Fusion
Fusion reactors have been getting a lot of press recently because they offer some major advantages over other power sources. They will use abundant sources of fuel, they will not leak radiation above normal background levels and they will produce less radioactive waste than current fission reactors.
Nobody has put the technology into practice yet, but working reactors aren't actually that far off. Fusion reactors are now in experimental stages at several laboratories in the United States and around the world.
A consortium from the United States, Russia, Europe and Japan has proposed to build a fusion reactor called the International Thermonuclear Experimental Reactor (ITER) in Cadarache, France, to demonstrate the feasibility of using sustained fusion reactions for making electricity. In this article, we'll learn about nuclear fusion and see how the ITER reactor will work.
Current nuclear reactors use nuclear fission to generate power. In nuclear fission, you get energy from splitting one atom into two atoms. In a conventional nuclear reactor, high-energy neutrons split heavy atoms of uranium, yielding large amounts of energy, radiation and radioactive wastes that last for long periods of time.
In nuclear fusion, you get energy when two atoms join together to form one. In a fusion reactor, hydrogen atoms come together to form helium atoms, neutrons and vast amounts of energy. It's the same type of reaction that powers hydrogen bombs and the sun. This would be a cleaner, safer, more efficient and more abundant source of power than nuclear fission.
There are several types of fusion reactions. Most involve the isotopes of hydrogen called deuterium and tritium:
• Proton-proton chain - This sequence is the predominant fusion reaction scheme used by stars such as the sun. Two pairs of protons fuse, each pair forming a deuterium atom. Each deuterium atom combines with a proton to form a helium-3 atom. Two helium-3 atoms combine to form beryllium-6, which is unstable and decays into a helium-4 atom and two protons. These reactions produce high-energy particles (protons, electrons, neutrinos, positrons) and radiation (light, gamma rays).
• Deuterium-deuterium reactions - Two deuterium atoms combine to form a helium-3 atom and a neutron.
• Deuterium-tritium reactions - One atom of deuterium and one atom of tritium combine to form a helium-4 atom and a neutron. Most of the energy released is in the form of the high-energy neutron.
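The 17.6 MeV figure can be checked from the mass defect and E = mc². Here is a minimal Python sketch using standard reference masses (atomic masses work here because the electron counts on both sides balance); the constants are textbook values, not taken from this article:

```python
# Mass defect for D + T -> He-4 + n, converted to energy via E = mc^2.
# Atomic masses in unified atomic mass units (u); 1 u = 931.494 MeV/c^2.
M_D, M_T = 2.014102, 3.016049     # deuterium, tritium
M_HE4, M_N = 4.002602, 1.008665   # helium-4, free neutron
U_TO_MEV = 931.494

defect = (M_D + M_T) - (M_HE4 + M_N)  # mass that "disappears" in the reaction
print(round(defect * U_TO_MEV, 1))    # -> 17.6 (MeV per fusion event)
```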
Conceptually, harnessing nuclear fusion in a reactor is a no-brainer. But it has been extremely difficult for scientists to come up with a controllable, non-destructive way of doing it. To understand why, we need to look at the necessary conditions for nuclear fusion.
ISOTOPES
Isotopes are atoms of the same element that have the same number of protons and electrons but a different number of neutrons. Some common isotopes in fusion are:
• Protium is a hydrogen isotope with one proton and no neutrons. It is the most common form of hydrogen and the most common element in the universe.
• Deuterium is a hydrogen isotope with one proton and one neutron. It is not radioactive and can be extracted from seawater.
• Tritium is a hydrogen isotope with one proton and two neutrons. It is radioactive, with a half-life of about 12.3 years. Tritium does not occur naturally but can be made by bombarding lithium with neutrons.
• Helium-3 is a helium isotope with two protons and one neutron.
• Helium-4 is the most common, naturally occurring form of helium, with two protons and two neutrons.
When hydrogen atoms fuse, the nuclei must come together. However, the protons in each nucleus will tend to repel each other because they have the same charge (positive). If you've ever tried to place two magnets together and felt them push apart from each other, you've experienced this principle first-hand.
To achieve fusion, you need to create special conditions to overcome this tendency. Here are the conditions that make fusion possible:
High temperature - The high temperature gives the hydrogen atoms enough energy to overcome the electrical repulsion between the protons.
• Fusion requires temperatures of about 100 million kelvins (approximately six times hotter than the sun's core).
• At these temperatures, hydrogen is a plasma, not a gas. Plasma is a high-energy state of matter in which all the electrons are stripped from atoms and move freely about.
• The sun achieves these temperatures by its large mass and the force of gravity compressing this mass in the core. We must use energy from microwaves, lasers and ion particles to achieve these temperatures.
High pressure - Pressure squeezes the hydrogen atoms together. They must be within 1 × 10⁻¹⁵ meters of each other to fuse (see the worked example after this list).
• The sun uses its mass and the force of gravity to squeeze hydrogen atoms together in its core.
• We must squeeze hydrogen atoms together by using intense magnetic fields, powerful lasers or ion beams.
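To see why such extreme conditions are needed, compare the electrical repulsion energy of two protons at the quoted 1 × 10⁻¹⁵ m fusing distance with the average thermal energy at 100 million K. A rough classical estimate in Python (in reality quantum tunnelling lets nuclei fuse somewhat below the full barrier):

```python
# Coulomb barrier between two protons vs. average thermal energy at 100 MK.
E = 1.602e-19    # elementary charge, C
K_E = 8.988e9    # Coulomb constant, N*m^2/C^2
K_B = 1.381e-23  # Boltzmann constant, J/K

barrier = K_E * E**2 / 1e-15   # ~2.3e-13 J, i.e. ~1.4 MeV
thermal = K_B * 100e6          # ~1.4e-15 J, i.e. ~8.6 keV

print(round(barrier / thermal))  # -> ~170; only the fastest nuclei in the
# plasma (helped by quantum tunnelling) get close enough to fuse.
```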
With current technology, we can only achieve the temperatures and pressures necessary to make deuterium-tritium fusion possible. Deuterium-deuterium fusion requires higher temperatures that may be achievable in the future. Ultimately, deuterium-deuterium fusion will be better because it is easier to extract deuterium from seawater than to make tritium from lithium. Also, deuterium is not radioactive, and deuterium-deuterium reactions will yield more energy.
There are two ways to achieve the temperatures and pressures necessary for hydrogen fusion to take place:
• Magnetic confinement uses magnetic and electric fields to heat and squeeze the hydrogen plasma. The ITER project in France is using this method.
• Inertial confinement uses laser beams or ion beams to squeeze and heat the hydrogen plasma. Scientists are studying this experimental approach at the National Ignition Facility at Lawrence Livermore National Laboratory in the United States.
Let's look at magnetic confinement first. Here's how it would work:
Microwaves, electricity and neutral particle beams from accelerators heat a stream of hydrogen gas. This heating turns the gas into plasma. This plasma gets squeezed by super-conducting magnets, thereby allowing fusion to occur. The most efficient shape for the magnetically confined plasma is a donut shape (toroid).
A reactor of this shape is called a tokamak. The ITER tokamak will be a self-contained reactor whose parts are in various cassettes. These cassettes can be easily inserted and removed without having to tear down the entire reactor for maintenance. The tokamak will have a plasma toroid with a 2-meter inner radius and a 6.2-meter outer radius.
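From the quoted radii, the plasma volume can be estimated by treating the toroid as a circular torus, V = 2π²Rr² (major radius R, minor radius r), reading the quoted figures as the torus's major and minor radii. This is only a rough sketch: ITER's real plasma cross-section is an elongated D shape, so the actual design volume (roughly 840 m³) is larger than the circular-torus figure.

```python
import math

R = 6.2  # major radius, m (the quoted 6.2-meter outer radius)
r = 2.0  # minor radius, m (the quoted 2-meter inner radius)

volume = 2 * math.pi**2 * R * r**2  # volume of an ideal circular torus
print(round(volume))                # -> ~490 m^3
```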
Let's take a closer look at the ITER fusion reactor to see how magnetic confinement works.
"Tokamak" is a Russian acronym for "toroidal chamber with axial magnetic field."
The main parts of the ITER tokamak reactor are:
• Vacuum vessel - holds the plasma and keeps the reaction chamber in a vacuum
• Neutral beam injector (ion cyclotron system) - injects particle beams from the accelerator into the plasma to help heat the plasma to critical temperature
• Magnetic field coils (poloidal, toroidal) - super-conducting magnets that confine, shape and contain the plasma using magnetic fields
• Transformers/Central solenoid - supply electricity to the magnetic field coils
• Cooling equipment (cryostat, cryopump) - cool the magnets
• Blanket modules - made of lithium; absorb heat and high-energy neutrons from the fusion reaction
• Divertors - exhaust the helium products of the fusion reaction
Here's how the process will work:
1. The fusion reactor will heat a stream of deuterium and tritium fuel to form high-temperature plasma. It will squeeze the plasma so that fusion can take place. The power needed to start the fusion reaction will be about 70 megawatts, but the power yield from the reaction will be about 500 megawatts (see the quick calculation after this list). The fusion reaction will last from 300 to 500 seconds. (Eventually, there will be a sustained fusion reaction.)
2. The lithium blankets outside the plasma reaction chamber will absorb high-energy neutrons from the fusion reaction to make more tritium fuel. The blankets will also get heated by the neutrons.
3. The heat will be transferred by a water-cooling loop to a heat exchanger to make steam.
4. The steam will drive electrical turbines to produce electricity.
5. The steam will be condensed back into water to absorb more heat from the reactor in the heat exchanger.
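A quick back-of-envelope on the numbers in step 1 (plain arithmetic on the figures quoted above, not an official ITER specification):

```python
P_IN, P_OUT = 70.0, 500.0   # megawatts, from step 1
PULSE = (300, 500)          # pulse duration range, seconds

print(round(P_OUT / P_IN, 1))      # -> 7.1, roughly a sevenfold power gain
print([P_OUT * t for t in PULSE])  # -> [150000.0, 250000.0] megajoules,
# i.e. each pulse would release roughly 150-250 GJ of fusion energy.
```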
Initially, the ITER tokamak will test the feasibility of a sustained fusion reactor and eventually will become a test fusion power plant.
The main application for fusion is in making electricity. Nuclear fusion can provide a safe, clean energy source for future generations with several advantages over current fission reactors:
• Abundant fuel supply - Deuterium can be readily extracted from seawater, and excess tritium can be made in the fusion reactor itself from lithium, which is readily available in the Earth's crust. Uranium for fission is rare, and it must be mined and then enriched for use in reactors.
• Safe - A fusion reactor holds only a small amount of fuel at any moment, compared with a fission reactor, so an uncontrolled, runaway release of energy cannot occur. Most fusion reactor designs would release less radiation than the natural background radiation we live with in our daily lives.
• Clean - No combustion occurs in nuclear power (fission or fusion), so there is no air pollution.
• Less nuclear waste - Fusion reactors will not produce high-level nuclear wastes like their fission counterparts, so disposal will be less of a problem. In addition, the wastes will not be of weapons-grade nuclear materials as is the case in fission reactors.
NASA is currently looking into developing small-scale fusion reactors for powering deep-space rockets. Fusion propulsion would boast an unlimited fuel supply (hydrogen), would be more efficient and would ultimately lead to faster rockets.
Nuclear Fusion
With its high energy yields, low nuclear waste production, and lack of air pollution, fusion, the same source that powers stars, could provide an alternative to conventional energy sources. But what drives this process?
What is fusion?
Fusion occurs when two light atoms bond together, or fuse, to make a heavier one. The total mass of the new atom is less than that of the two that formed it; the "missing" mass is given off as energy, as described by Albert Einstein's famous "E=mc^2" equation.
In order for the nuclei of two atoms to overcome the aversion to one another caused by their having the same charge, high temperatures and pressures are required. Temperatures must reach approximately six times those found in the core of the sun. At this heat, the hydrogen is no longer a gas but a plasma, an extremely high-energy state of matter in which electrons are stripped from their atoms.
Fusion is the dominant source of energy for stars in the universe. It is also a potential energy source on Earth. When set off in an intentionally uncontrolled chain reaction, it drives the hydrogen bomb. Fusion is also being considered as a possibility to power crafts through space.
Fusion differs from fission, which splits atoms and results in substantial radioactive waste, which is hazardous.
Cooking up energy
There are several "recipes" for cooking up fusion, which rely on different atomic combinations.
Deuterium-Tritium fusion: The most promising combination for power on Earth today is the fusion of a deuterium atom with a tritium one. The process, which requires temperatures of approximately 72 million degrees F (39 million degrees Celsius), produces 17.6 million electron volts of energy.
Deuterium is a promising ingredient because it is an isotope of hydrogen whose nucleus contains a single proton and a single neutron. In turn, hydrogen is a key part of water, which covers the Earth. A gallon of seawater (3.8 liters) could produce as much energy as 300 gallons (1,136 liters) of gasoline. Another hydrogen isotope, tritium, contains one proton and two neutrons. It is more challenging to locate in large quantities, due to its half-life of about 12.3 years (half of any quantity decays in that time). Rather than attempting to find it naturally, the most reliable method is to bombard lithium, an element found in Earth's crust, with neutrons to create the element.
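The seawater-versus-gasoline comparison can be sanity-checked with a back-of-envelope estimate. The Python sketch below assumes three standard ballpark figures that are not in the article: deuterium makes up about 1 in 6,400 hydrogen atoms in seawater, a complete deuterium burn releases roughly 7 MeV per deuterium nucleus, and a gallon of gasoline holds about 120 MJ:

```python
AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13

water_g = 3785.0                            # one US gallon of water, grams
h_atoms = 2 * (water_g / 18.0) * AVOGADRO   # two hydrogens per H2O molecule
d_atoms = h_atoms * 1.56e-4                 # natural deuterium abundance

fusion_j = d_atoms * 7.0 * MEV_TO_J         # ~7 MeV per deuteron, full burn
gasoline_j = 120e6                          # one gallon of gasoline, joules

print(round(fusion_j / gasoline_j))  # -> ~370, the same order of magnitude
#                                         as the article's "300 gallons"
```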
Deuterium-deuterium fusion: Theoretically more promising than deuterium-tritium because of the ease of obtaining the two deuterium atoms, this method is also more challenging because it requires temperatures too high to be feasible at present. However, the process yields more energy than deuterium-tritium fusion.
Stars, with their immense heat and mass, rely on different fusion combinations to power them.
Proton-proton fusion: The dominant driver for stars like the sun with core temperatures under 27 million degrees F (15 million degrees C), proton-proton fusion begins with two protons and ultimately yields high-energy particles and radiation, such as positrons, neutrinos, and gamma rays.
Carbon cycle: Stars with higher core temperatures fuse hydrogen by the carbon (CNO) cycle, in which carbon nuclei act as catalysts rather than being consumed.
Triple alpha process: Stars such as red giants near the end of their lives, with temperatures exceeding 180 million degrees F (100 million degrees C), fuse helium atoms together rather than hydrogen.
608) Taste bud
Taste bud, small organ located on the tongue in terrestrial vertebrates that functions in the perception of taste. In fish, taste buds occur on the lips, the flanks, and the caudal (tail) fins of some species and on the barbels of catfish.
Taste receptor cells, with which incoming chemicals from food and other sources interact, occur on the tongue in groups of 50–150. Each of these groups forms a taste bud, which is grouped together with other taste buds into taste papillae. The taste buds are embedded in the epithelium of the tongue and make contact with the outside environment through a taste pore. Slender processes (microvilli) extend from the outer ends of the receptor cells through the taste pore, where the processes are covered by the mucus that lines the oral cavity. At their inner ends the taste receptor cells synapse, or connect, with afferent sensory neurons, nerve cells that conduct information to the brain. Each receptor cell synapses with several afferent sensory neurons, and each afferent neuron branches to several taste papillae, where each branch makes contact with many receptor cells. The afferent sensory neurons occur in three different nerves running to the brain—the facial nerve, the glossopharyngeal nerve, and the vagus nerve. Taste receptor cells of vertebrates are continually renewed throughout the life of the organism.
On average, the human tongue has 2,000–8,000 taste buds, implying that there are hundreds of thousands of receptor cells. However, the number of taste buds varies widely. For example, per square centimetre on the tip of the tongue, some people may have only a few individual taste buds, whereas others may have more than one thousand; this variability contributes to differences in the taste sensations experienced by different people. Taste sensations produced within an individual taste bud also vary, since each taste bud typically contains receptor cells that respond to distinct chemical stimuli—as opposed to the same chemical stimulus. As a result, the sensation of different tastes (i.e., salty, sweet, sour, bitter, or umami) is diverse not only within a single taste bud but also throughout the surface of the tongue.
The taste receptor cells of other animals can often be characterized in ways similar to those of humans, because all animals have the same basic needs in selecting food.
609) Alzheimer's disease
Overview
Alzheimer's disease is a progressive disorder that causes brain cells to waste away (degenerate) and die. Alzheimer's disease is the most common cause of dementia — a continuous decline in thinking, behavioral and social skills that disrupts a person's ability to function independently.
The early signs of the disease may be forgetting recent events or conversations. As the disease progresses, a person with Alzheimer's disease will develop severe memory impairment and lose the ability to carry out everyday tasks.
Current Alzheimer's disease medications may temporarily improve symptoms or slow the rate of decline. These treatments can sometimes help people with Alzheimer's disease maximize function and maintain independence for a time. Different programs and services can help support people with Alzheimer's disease and their caregivers.
There is no treatment that cures Alzheimer's disease or alters the disease process in the brain. In advanced stages of the disease, complications from severe loss of brain function — such as dehydration, malnutrition or infection — result in death.
Symptoms
Memory loss is the key symptom of Alzheimer's disease. An early sign of the disease is usually difficulty remembering recent events or conversations. As the disease progresses, memory impairments worsen and other symptoms develop.
At first, a person with Alzheimer's disease may be aware of having difficulty with remembering things and organizing thoughts. A family member or friend may be more likely to notice how the symptoms worsen.
Brain changes associated with Alzheimer's disease lead to growing trouble with:
Memory
Everyone has occasional memory lapses. It's normal to lose track of where you put your keys or forget the name of an acquaintance. But the memory loss associated with Alzheimer's disease persists and worsens, affecting the ability to function at work or at home.
People with Alzheimer's may:
• Repeat statements and questions over and over
• Forget conversations, appointments or events, and not remember them later
• Routinely misplace possessions, often putting them in illogical locations
• Get lost in familiar places
• Eventually forget the names of family members and everyday objects
• Have trouble finding the right words to identify objects, express thoughts or take part in conversations
Thinking and reasoning
Alzheimer's disease causes difficulty concentrating and thinking, especially about abstract concepts such as numbers.
Multitasking is especially difficult, and it may be challenging to manage finances, balance checkbooks and pay bills on time. These difficulties may progress to an inability to recognize and deal with numbers.
Making judgments and decisions
The ability to make reasonable decisions and judgments in everyday situations will decline. For example, a person may make poor or uncharacteristic choices in social interactions or wear clothes that are inappropriate for the weather. It may be more difficult to respond effectively to everyday problems, such as food burning on the stove or unexpected driving situations.
Planning and performing familiar tasks
Once-routine activities that require sequential steps, such as planning and cooking a meal or playing a favorite game, become a struggle as the disease progresses. Eventually, people with advanced Alzheimer's may forget how to perform basic tasks such as dressing and bathing.
Changes in personality and behavior
Brain changes that occur in Alzheimer's disease can affect moods and behaviors. Problems may include the following:
• Depression
• Apathy
• Social withdrawal
• Mood swings
• Distrust in others
• Irritability and aggressiveness
• Changes in sleeping habits
• Wandering
• Loss of inhibitions
• Delusions, such as believing something has been stolen
Preserved skills
Many important skills are preserved for longer periods even while symptoms worsen. Preserved skills may include reading or listening to books, telling stories and reminiscing, singing, listening to music, dancing, drawing, or doing crafts.
These skills may be preserved longer because they are controlled by parts of the brain affected later in the course of the disease.
When to see a doctor
A number of conditions, including treatable conditions, can result in memory loss or other dementia symptoms. If you are concerned about your memory or other thinking skills, talk to your doctor for a thorough assessment and diagnosis.
If you are concerned about thinking skills you observe in a family member or friend, talk about your concerns and ask about going together to a doctor's appointment.
Causes
Scientists believe that for most people, Alzheimer's disease is caused by a combination of genetic, lifestyle and environmental factors that affect the brain over time.
Less than 1 percent of the time, Alzheimer's is caused by specific genetic changes that virtually guarantee a person will develop the disease. These rare occurrences usually result in disease onset in middle age.
The exact causes of Alzheimer's disease aren't fully understood, but at its core are problems with brain proteins that fail to function normally, disrupt the work of brain cells (neurons) and unleash a series of toxic events. Neurons are damaged, lose connections to each other and eventually die.
The damage most often starts in the region of the brain that controls memory, but the process begins years before the first symptoms. The loss of neurons spreads in a somewhat predictable pattern to other regions of the brain. By the late stage of the disease, the brain has shrunk significantly.
Researchers are focused on the role of two proteins:
• Plaques. Beta-amyloid is a leftover fragment of a larger protein. When these fragments cluster together, they appear to have a toxic effect on neurons and to disrupt cell-to-cell communication. These clusters form larger deposits called amyloid plaques, which also include other cellular debris.
• Tangles. Tau proteins play a part in a neuron's internal support and transport system to carry nutrients and other essential materials. In Alzheimer's disease, tau proteins change shape and organize themselves into structures called neurofibrillary tangles. The tangles disrupt the transport system and are toxic to cells.
Risk factors
Age
Increasing age is the greatest known risk factor for Alzheimer's disease. Alzheimer's is not a part of normal aging, but as you grow older the likelihood of developing Alzheimer's disease increases.
One study, for example, found that annually there were two new diagnoses per 1,000 people ages 65 to 74, 11 new diagnoses per 1,000 people ages 75 to 84, and 37 new diagnoses per 1,000 people age 85 and older.
Family history and genetics
Your risk of developing Alzheimer's is somewhat higher if a first-degree relative — your parent or sibling — has the disease. Most genetic mechanisms of Alzheimer's among families remain largely unexplained, and the genetic factors are likely complex.
One better understood genetic factor is a form of the apolipoprotein E gene (APOE). A variation of the gene, APOE e4, increases the risk of Alzheimer's disease, but not everyone with this variation of the gene develops the disease.
Scientists have identified rare changes (mutations) in three genes that virtually guarantee a person who inherits one of them will develop Alzheimer's. But these mutations account for less than 1 percent of people with Alzheimer's disease.
Down syndrome
Many people with Down syndrome develop Alzheimer's disease. This is likely related to having three copies of chromosome 21 — and subsequently three copies of the gene for the protein that leads to the creation of beta-amyloid. Signs and symptoms of Alzheimer's tend to appear 10 to 20 years earlier in people with Down syndrome than they do for the general population.
Gender
There appears to be little difference in risk between men and women, but, overall, there are more women with the disease because they generally live longer than men.
Mild cognitive impairment
Mild cognitive impairment (MCI) is a decline in memory or other thinking skills that is greater than what would be expected for a person's age, but the decline doesn't prevent a person from functioning in social or work environments.
People who have MCI have a significant risk of developing dementia. When the primary MCI deficit is memory, the condition is more likely to progress to dementia due to Alzheimer's disease. A diagnosis of MCI enables the person to focus on healthy lifestyle changes, develop strategies to compensate for memory loss and schedule regular doctor appointments to monitor symptoms.
Past head trauma
People who've had a severe head trauma have a greater risk of Alzheimer's disease.
Poor sleep patterns
Research has shown that poor sleep patterns, such as difficulty falling asleep or staying asleep, are associated with an increased risk of Alzheimer's disease.
Lifestyle and heart health
Research has shown that the same risk factors associated with heart disease may also increase the risk of Alzheimer's disease. These include:
• Lack of exercise
• Obesity
• Smoking or exposure to secondhand smoke
• High blood pressure
• High cholesterol
• Poorly controlled type 2 diabetes
These factors can all be modified. Therefore, changing lifestyle habits can to some degree alter your risk. For example, regular exercise and a healthy low-fat diet rich in fruits and vegetables are associated with a decreased risk of developing Alzheimer's disease.
Lifelong learning and social engagement
Studies have found an association between lifelong involvement in mentally and socially stimulating activities and a reduced risk of Alzheimer's disease. Low education levels — less than a high school education — appear to be a risk factor for Alzheimer's disease.
Complications
Memory and language loss, impaired judgment, and other cognitive changes caused by Alzheimer's can complicate treatment for other health conditions. A person with Alzheimer's disease may not be able to:
• Communicate that he or she is experiencing pain — for example, from a dental problem
• Report symptoms of another illness
• Follow a prescribed treatment plan
• Notice or describe medication side effects
As Alzheimer's disease progresses to its last stages, brain changes begin to affect physical functions, such as swallowing, balance, and bowel and bladder control. These effects can increase vulnerability to additional health problems such as:
• Inhaling food or liquid into the lungs (aspiration)
• Pneumonia and other infections
• Falls
• Fractures
• Bedsores
• Malnutrition or dehydration
Prevention
Alzheimer's disease is not a preventable condition. However, a number of lifestyle risk factors for Alzheimer's can be modified. Evidence suggests that changes in diet, exercise and habits — steps to reduce the risk of cardiovascular disease — may also lower your risk of developing Alzheimer's disease and other disorders that cause dementia. Heart-healthy lifestyle choices that may reduce the risk of Alzheimer's include the following:
• Exercise regularly
• Eat a diet of fresh produce, healthy oils and foods low in saturated fat
• Follow treatment guidelines to manage high blood pressure, diabetes and high cholesterol
• If you smoke, ask your doctor for help to quit smoking
Studies have shown that preserved thinking skills later in life and a reduced risk of Alzheimer's disease are associated with participating in social events, reading, dancing, playing board games, creating art, playing an instrument, and other activities that require mental and social engagement.
610) Screen Printing
The History of Screen Printing
Screen printing evolved from the ancient art of stencilling; the modern method emerged in the late 1800s and, with successive modifications, has grown into an industry.
In the 19th century it remained a simple process using fabrics like organdy stretched over wooden frames as a means to hold stencils in place during printing. Only in the twentieth century did the process become mechanised, usually for printing flat posters or packaging and fabrics.
Initially, although it was not a well-known process, screen printing bridged the gap between hand-fed production and automated printing, which was far more expensive. It quickly transitioned from handcraft to mass production, particularly in the US, and in doing so opened up a completely new area of print capabilities and transformed the advertising industry.
Today it has become a very sophisticated process, using advanced fabrics and inks combined with computer technology. Often screen printing is used as a substitute for other processes such as offset litho. As a printing technique it can print an image onto almost any surface, such as paper, card, wood, glass, plastic, leather or any fabric. The iPhone, the solar cell, and the hydrogen fuel cell are all screen-printed products – and they would not exist without this printing process.
Production
Screen printing is a technique that uses a woven mesh screen to support an ink-blocking stencil, which defines the desired image.
The open areas of mesh left by the stencil transfer ink or other printable materials, which can be pressed through the mesh as a sharp-edged image onto a substrate (the item that will receive the image).
A squeegee is moved across the screen stencil, forcing ink through the mesh openings to wet the substrate during the squeegee stroke. As the screen rebounds away from the substrate, the ink remains. Basically it is the process of using a mesh-based stencil to apply ink onto a substrate, whether it is t-shirts, posters, stickers, vinyl, wood, or other materials.
Screen printing is also sometimes known as silkscreen printing. One colour is printed at a time, so several screens can be used to produce a multicoloured image or design.
When & Where is Screen Printing Used?
Screen printing is most commonly associated with T-shirts, lanyards, balloons, bags and merchandise; however, this process is also used when applying latex to promotional printed scratch cards or for a decorative print process called spot UV.
Spot UV uses the same process of squeegees and screens; the clear coating is then exposed to UV lamps to cure and dry it.
Primarily used to enhance logos or depict certain text, spot UV is a great way to take your print to the next level.
611) Heat transfer
Heat transfer, any or all of several kinds of phenomena, considered as mechanisms, that convey energy and entropy from one location to another. The specific mechanisms are usually referred to as convection, thermal radiation, and conduction. Conduction involves transfer of energy and entropy between adjacent molecules, usually a slow process. Convection involves movement of a heated fluid, such as air, usually a fairly rapid process. Radiation refers to the transmission of energy as electromagnetic radiation from its emission at a heated surface to its absorption on another surface, a process requiring no medium to convey the energy.
Transfer of heat, whether in heating a building or a kettle of water or in a natural condition such as a thunderstorm, usually involves all these processes.
Thermal conduction
Thermal conduction, transfer of energy (heat) arising from temperature differences between adjacent parts of a body.
Thermal conductivity is attributed to the exchange of energy between adjacent molecules and electrons in the conducting medium. The rate of heat flow in a rod of material is proportional to the cross-sectional area of the rod and to the temperature difference between the ends and inversely proportional to the length; that is, the rate H equals the ratio of the cross section A of the rod to its length l, multiplied by the temperature difference (T₂ − T₁) and by the thermal conductivity of the material, designated by the constant k. This empirical relation is expressed as H = −kA(T₂ − T₁)/l. The minus sign arises because heat always flows from higher to lower temperature. A substance of large thermal conductivity k is a good heat conductor, whereas one with small thermal conductivity is a poor heat conductor or good thermal insulator. Typical values are 0.093 kilocalories/second-metre-°C for copper (a good thermal conductor) and 0.00003 kilocalories/second-metre-°C for wood (a poor thermal conductor).
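As a worked example of the formula (a sketch using the copper conductivity quoted above and illustrative dimensions of my own choosing):

```python
k = 0.093    # thermal conductivity of copper, kcal/(s*m*degC), from the text
A = 1e-4     # cross-sectional area, m^2 (a 1 cm^2 rod)
l = 1.0      # rod length, m
dT = 100.0   # temperature difference between the ends, degC

H = k * A * dT / l  # steady rate of heat flow along the rod
print(H)            # -> 0.00093 kcal/s
print(H * 4184)     # -> ~3.9 watts (1 kcal = 4184 J)
```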
Convection
Convection, process by which heat is transferred by movement of a heated fluid such as air or water.
Natural convection results from the tendency of most fluids to expand when heated—i.e., to become less dense and to rise as a result of the increased buoyancy. Circulation caused by this effect accounts for the uniform heating of water in a kettle or air in a heated room: the heated molecules move faster and spread farther apart, so the warmed fluid rises; as it cools, the molecules draw closer together again, the density increases, and the fluid sinks.
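The density change behind natural convection can be estimated from the ideal gas law, ρ = PM/RT: at constant pressure, density falls in proportion to absolute temperature. A minimal Python sketch for a parcel of air heated from 20 °C to 50 °C (standard physical constants; the temperatures are illustrative):

```python
P = 101325.0  # atmospheric pressure, Pa
M = 0.02896   # molar mass of dry air, kg/mol
R = 8.314     # gas constant, J/(mol*K)
g = 9.81      # gravitational acceleration, m/s^2

def air_density(temp_k):
    return P * M / (R * temp_k)  # ideal gas law: rho = P*M/(R*T)

rho_cool = air_density(293.15)   # surrounding air at 20 degC, ~1.20 kg/m^3
rho_warm = air_density(323.15)   # heated parcel at 50 degC, ~1.09 kg/m^3

# Net buoyant acceleration on the warm parcel (Archimedes' principle):
print(g * (rho_cool - rho_warm) / rho_warm)  # -> ~1.0 m/s^2 upward
```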
Forced convection involves the transport of fluid by methods other than that resulting from variation of density with temperature. Movement of air by a fan or of water by a pump are examples of forced convection.
Atmospheric convection currents can be set up by local heating effects such as solar radiation (heating and rising) or contact with cold surface masses (cooling and sinking). Such convection currents primarily move vertically and account for many atmospheric phenomena, such as clouds and thunderstorms.
Thermal radiation
Thermal radiation, process by which energy, in the form of electromagnetic radiation, is emitted by a heated surface in all directions and travels directly to its point of absorption at the speed of light; thermal radiation does not require an intervening medium to carry it.
Thermal radiation ranges in wavelength from the longest infrared rays through the visible-light spectrum to the shortest ultraviolet rays. The intensity and distribution of radiant energy within this range are governed by the temperature of the emitting surface. The total radiant heat energy emitted by a surface is proportional to the fourth power of its absolute temperature (the Stefan–Boltzmann law).
The rate at which a body radiates (or absorbs) thermal radiation depends upon the nature of the surface as well. Objects that are good emitters are also good absorbers (Kirchhoff’s radiation law). A blackened surface is an excellent emitter as well as an excellent absorber. If the same surface is silvered, it becomes a poor emitter and a poor absorber. A blackbody is one that absorbs all the radiant energy that falls on it. Such a perfect absorber would also be a perfect emitter.
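A worked example of the Stefan–Boltzmann law with an emissivity factor for non-black surfaces, P = εσAT⁴ (the 500 K surface and the 0.95/0.05 emissivities are illustrative values of my own, not from the article):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 * K^4)

def radiated_power(area_m2, temp_k, emissivity=1.0):
    """Thermal power radiated by a surface; emissivity 1.0 is a blackbody."""
    return emissivity * SIGMA * area_m2 * temp_k**4

# A 1 m^2 surface at 500 K: blackened finish vs. silvered finish.
print(radiated_power(1.0, 500, emissivity=0.95))  # -> ~3370 W (good emitter)
print(radiated_power(1.0, 500, emissivity=0.05))  # -> ~177 W (poor emitter)
```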
The heating of the Earth by the Sun is an example of transfer of energy by radiation. The heating of a room by an open-hearth fireplace is another example. The flames, coals, and hot bricks radiate heat directly to the objects in the room with little of this heat being absorbed by the intervening air. Most of the air that is drawn from the room and heated in the fireplace does not reenter the room in a current of convection but is carried up the chimney together with the products of combustion.
612) Baltic states
Baltic states, northeastern region of Europe containing the countries of Estonia, Latvia, and Lithuania, on the eastern shores of the Baltic Sea.
The Baltic states are bounded on the west and north by the Baltic Sea, which gives the region its name, on the east by Russia, on the southeast by Belarus, and on the southwest by Poland and an exclave of Russia. The underlying geology is sandstone, shale, and limestone, evidenced by hilly uplands that alternate with low-lying plains and bear mute testimony to the impact of the glacial era. In fact, glacial deposits in the form of eskers, moraines, and drumlins occur in profusion and tend to disrupt the drainage pattern, which results in frequent flooding. The Baltic region is dotted with more than 7,000 lakes and countless peat bogs, swamps, and marshes. A multitude of rivers, notably the Neman (Lithuanian: Nemunas) and Western Dvina (Latvian: Daugava), empty northwestward into the Baltic Sea.
The climate is cool and damp, with greater rainfall in the interior uplands than along the coast. Temperatures are moderate in comparison with other areas of the East European Plain, such as in neighbouring Russia. Despite its extensive agriculture, the Baltic region remains more than one-third forested. Trees that adapt to the often poorly drained soil are common, such as birches and conifers. Among the animals that inhabit the region are elk, boar, roe deer, wolves, hares, and badgers.
The Latvian and Lithuanian peoples speak languages belonging to the Baltic branch of the Indo-European linguistic family and are commonly known as Balts. The Estonian (and Livonian) peoples, who are considered Finnic peoples, speak languages of the Finno-Ugric family and constitute the core of the southern branch of the Baltic Finns. Culturally, the Estonians were strongly influenced by the Germans, and traces of the original Finnish culture have been preserved only in folklore. The Latvians also were considerably Germanized, and the majority of both the Estonians and the Latvians belong to the Lutheran church. However, most Lithuanians, associated historically with Poland, are Roman Catholic.
The vast majority of ethnic Estonians, Latvians, and Lithuanians live within the borders of their respective states. In all three countries virtually everyone among the titular nationalities speaks the native tongue as their first language, which is remarkable in light of the massive Russian immigration to the Baltic states during the second half of the 20th century. Initially, attempts to Russify the Baltic peoples were overt, but later they were moderated as Russian immigration soared and the sheer weight of the immigrant numbers simply served to promote this objective in less-blatant ways. Independence from the Soviet Union in 1991 allowed the Baltic states to place controls on immigration, and, in the decade following, the Russian presence in Baltic life diminished. At the beginning of the 21st century, the titular nationalities of Lithuania and Estonia accounted for about four-fifths and two-thirds of the countries’ populations, respectively, while ethnic Latvians made up just less than three-fifths of their nation’s population. Around this time, Poles eclipsed Russians as the largest minority in Lithuania. Urban dwellers constitute more than two-thirds of the region’s population, with the largest cities being Vilnius and Kaunas in southeastern Lithuania, the Latvian capital of Riga, and Tallinn on the northwestern coast of Estonia. Life expectancy in the Baltic states is comparatively low by European standards, as are the rates of natural increase, which were negative in all three countries at the beginning of the 21st century, owing in part to an aging population. Overall population fell in each of the Baltic states in the years following independence, primarily because of the return emigration of Russians to Russia, as well as other out-migration to western Europe and North America. In some cases, Russians took on the nationalities of their adopted Baltic countries and were thus counted among the ethnic majorities.
After the breakup of the Soviet Union, the Baltic states struggled to make a transition to a market economy from the system of Soviet national planning that had been in place since the end of World War II. A highly productive region for the former U.S.S.R., the Baltic states catered to economies of scale in output and regional specialization in industry, manufacturing goods such as electric motors, machine tools, and radio receivers; Latvia, for example, was a leading producer of Soviet radio receivers. Throughout the 1990s privatization accelerated, national currencies were reintroduced, and non-Russian foreign investment increased.
Agriculture remains important to the Baltic economy, with potatoes, cereal grains, and fodder crops produced and dairy cattle and pigs raised. Timbering and fisheries enjoy modest success. The Baltic region is not rich in natural resources. Though Estonia is an important producer of oil shale, a large share of mineral and energy resources is imported. Low energy supplies, inflationary prices, and an economic collapse in Russia contributed to an energy crisis in the Baltics in the 1990s. Industry in the Baltic states is prominent, especially the production of food and beverages, textiles, wood products, and electronics and the traditional stalwarts of machine building and metal fabricating. The three states have the highest productivity of the former constituent republics of the Soviet Union.
Shortly after attaining independence, Estonia, Latvia, and Lithuania abandoned the Russian ruble in favour of new domestic currencies (the kroon, lats, and litas, respectively), which, as they strengthened, greatly improved foreign trade. The main trading partners outside the region are Russia, Germany, Finland, and Sweden. The financial stability of the Baltic nations was an important prerequisite to their entering the European Union and the North Atlantic Treaty Organization in 2004. Each of the Baltic states was preparing to adopt the euro as its common currency by the end of the decade.
Population (2020): 6,030,542. Area: 175,228 square kilometres (67,656 square miles).
613) Mont Blanc Tunnel
Mont Blanc Tunnel, major Alpine automotive tunnel connecting France and Italy. It is 7.3 miles (11.7 km) long and is driven under the highest mountain in the Alps. The tunnel is notable for its solution of a difficult ventilation problem and for being the first large rock tunnel to be excavated full-face—i.e., with the entire diameter of the tunnel bore drilled and blasted. Otherwise it was conventionally driven from two headings, the Italian and French crews beginning work in 1958 and 1959, respectively, and meeting in August 1962. Many difficulties, including an avalanche that swept the Italian camp, were overcome, and, when the tunnel opened in 1965, it was the longest vehicular tunnel in the world. It fulfilled a 150-year-old dream and is of great economic importance, providing a significantly shortened year-round automotive route between the two countries. In March 1999, however, a two-day fire killed 39 people and caused extensive damage to the tunnel, forcing it to close. It reopened to car traffic in March 2002 and to trucks and buses in the following months. Protestors, citing environmental and safety concerns, opposed the tunnel’s reopening, especially its use by heavy trucks.
The Mont Blanc Tunnel is a highway tunnel in Europe, under the Mont Blanc mountain in the Alps. It links Chamonix, Haute-Savoie, France with Courmayeur, Aosta Valley, Italy, via European route E25, in particular the motorway from Geneva (A40 of France) to Turin (A5 of Italy). The passageway is one of the major trans-Alpine transport routes, particularly for Italy, which relies on this tunnel for transporting as much as one-third of its freight to northern Europe. It reduces the route from France to Turin by 50 kilometres (30 miles) and to Milan by 100 km (60 mi). Northeast of Mont Blanc's summit, the tunnel is about 15 km (10 mi) southwest of the tripoint with Switzerland, near Mont Dolent.
The agreement between France and Italy on building a tunnel was signed in 1949. Two operating companies were founded, each responsible for one half of the tunnel: the French ‘Autoroutes et tunnel du Mont-Blanc’ (ATMB), founded on 30 April 1958, and the Italian ‘Società italiana per azioni per il Traforo del Monte Bianco’ (SITMB), founded on 1 September 1957. Drilling began in 1959 and was completed in 1962; the tunnel was opened to traffic on 19 July 1965.
The tunnel is 11.611 km (7.215 mi) in length, 8.6 m (28 ft) in width, and 4.35 m (14.3 ft) in height. The passageway is not horizontal, but in a slightly inverted "V", which assists ventilation. The tunnel consists of a single gallery with a two-lane dual direction road. At the time of its construction, it was three times longer than any existing highway tunnel.
The tunnel passes almost exactly under the summit of the Aiguille du Midi. At this spot, it lies 2,480 metres (8,140 ft) beneath the surface, making it the world's second deepest operational tunnel after the Gotthard Base Tunnel.
The Mont Blanc Tunnel was originally managed by the two building companies. Following a fire in 1999 in which 39 people died, which showed how lack of coordination could hamper the safety of the tunnel, all the operations are managed by a single entity: MBT-EEIG, controlled by both ATMB and SITMB together, through a 50–50 shares distribution.
An alternative route for road traffic between France and Italy is the Fréjus Road Tunnel. Road traffic grew steadily until 1994, even with the opening of the Fréjus tunnel. Since then, the combined traffic volume of the two tunnels has remained roughly constant.
614) Wembley Stadium
Wembley Stadium, stadium in the borough of Brent in northwestern London, England, built as a replacement for an older structure of the same name on the same site. The new Wembley was the largest stadium in Great Britain at the time of its opening in 2007, with a seating capacity of 90,000. It is owned by a subsidiary of the Football Association and is used for football (soccer), rugby, and other sports and also for musical events.
The original Wembley Stadium, built to house the British Empire Exhibition of 1924–25, was completed in advance of the exhibition in 1923. It served as the principal venue of the London 1948 Olympic Games and remained in use until 2000. Construction of the new stadium began in 2002. The English firm Foster + Partners and the American stadium specialists HOK Sport Venue Event (now known as Populous) were the architects. Excavations to lower the elevation of the pitch (playing field) uncovered the foundations of Watkin’s Tower, a building project of the 1890s that would have been the world’s tallest structure had it been completed. The new stadium officially opened in March 2007.
Wembley Stadium is almost round in shape, with a circumference of 3,280 feet (1 km). The most striking architectural feature is a giant arch that is the principal support of the roof. The arch is 436 feet (133 metres) in height and is tilted 22° from the perpendicular. The movable stadium roof does not close completely but can shelter all the seats.
Wembley Stadium has hosted the Football Association Cup Final every year since the year of its completion. It is also the home of England’s national football team. During the London 2012 Olympic Games, the stadium was a venue for football, including the final (gold medal) match. American (gridiron) football is played at the stadium in the National Football League International Series.
615) Computer memory
Computer memory, device that is used to store data or programs (sequences of instructions) on a temporary or permanent basis for use in an electronic digital computer. Computers represent information in binary code, written as sequences of 0s and 1s. Each binary digit (or “bit”) may be stored by any physical system that can be in either of two stable states, to represent 0 and 1. Such a system is called bistable. This could be an on-off switch, an electrical capacitor that can store or lose a charge, a magnet with its polarity up or down, or a surface that can have a pit or not. Today capacitors and transistors, functioning as tiny electrical switches, are used for temporary storage, and either disks or tape with a magnetic coating, or plastic discs with patterns of pits are used for long-term storage.
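As a minimal illustration (a Python sketch; the character "A" is just an arbitrary example value), the pattern of 0s and 1s behind a stored value can be made visible directly:

```python
# A minimal sketch of binary representation: whatever a computer stores
# is ultimately a pattern of two-state (0/1) cells.
value = ord("A")             # the character "A" is stored as the number 65
bits = format(value, "08b")  # the 8-bit pattern that represents it
print(bits)                  # 01000001
print(int(bits, 2))          # 65 -- the same pattern read back as a number
```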
Computer memory is divided into main (or primary) memory and auxiliary (or secondary) memory. Main memory holds instructions and data when a program is executing, while auxiliary memory holds data and programs not currently in use and provides long-term storage.
Main Memory
The earliest memory devices were electro-mechanical switches, or relays, and electron tubes. In the late 1940s the first stored-program computers used ultrasonic waves in tubes of mercury or charges in special electron tubes as main memory. The latter were the first random-access memory (RAM). RAM contains storage cells that can be accessed directly for read and write operations, as opposed to serial-access memory, such as magnetic tape, in which cells must be accessed in sequence until the required one is located.
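The difference is easy to picture in code. The following Python sketch (a list's indices standing in for storage cells) contrasts direct access with the sequential scan a tape would need:

```python
# A minimal sketch contrasting random access with serial access;
# a Python list stands in for a row of storage cells.
cells = ["a", "b", "c", "d", "e"]

# Random access (RAM): any cell is reached directly, in one step.
print(cells[3])                       # d

# Serial access (tape): cells are visited in order until the
# required one is located.
def serial_read(cells, target_index):
    for i, value in enumerate(cells):
        if i == target_index:
            return value

print(serial_read(cells, 3))          # d, but only after visiting cells 0-3
```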
Magnetic drum memory
Magnetic drums, which had fixed read/write heads for each of many tracks on the outside surface of a rotating cylinder coated with a ferromagnetic material, were used for both main and auxiliary memory in the 1950s, although their data access was serial.
Magnetic core memory
About 1952 the first relatively cheap RAM was developed: magnetic core memory, an arrangement of tiny ferrite cores on a wire grid through which current could be directed to change individual core alignments. Because of the inherent advantage of RAM, core memory was the principal form of main memory until superseded by semiconductor memory in the late 1960s.
Semiconductor memory
There are two basic kinds of semiconductor memory. Static RAM (SRAM) consists of flip-flops, bistable circuits composed of four to six transistors each. Once a flip-flop stores a bit, it keeps that value until the opposite value is stored in it. SRAM gives fast access to data, but it is physically relatively large. It is used primarily for small amounts of memory called registers in a computer’s central processing unit (CPU) and for fast “cache” memory. Dynamic RAM (DRAM) stores each bit in an electrical capacitor rather than in a flip-flop, using a transistor as a switch to charge or discharge the capacitor. Because it has fewer electrical components, a DRAM storage cell is smaller than an SRAM cell. However, access to its value is slower and, because capacitors gradually leak charge, stored values must be refreshed approximately 50 times per second. Nonetheless, DRAM is generally used for main memory because the same size chip can hold several times as much DRAM as SRAM.
Storage cells in RAM have addresses. It is common to organize RAM into “words” of 8 to 64 bits, or 1 to 8 bytes (8 bits = 1 byte). The size of a word is generally the number of bits that can be transferred at a time between main memory and the CPU. Every word, and usually every byte, has an address. A memory chip must have additional decoding circuits that select the set of storage cells that are at a particular address and either store a value at that address or fetch what is stored there. The main memory of a modern computer consists of a number of memory chips, each of which might hold many megabytes (millions of bytes), and still further addressing circuitry selects the appropriate chip for each address. In addition, DRAM requires circuits to detect its stored values and refresh them periodically.
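As a rough sketch of this organization, the following Python model (all sizes are illustrative, not those of any real chip) stores and fetches 4-byte words in a byte-addressed memory:

```python
# A minimal sketch of byte-addressed memory organized into 4-byte words,
# mirroring the address decoding described above (sizes are illustrative).
memory = bytearray(64)           # 64 bytes of "RAM", addresses 0..63
WORD = 4                         # a 4-byte (32-bit) word

def store_word(address, value):
    # Decode the address and store the word as 4 consecutive bytes.
    memory[address:address + WORD] = value.to_bytes(WORD, "little")

def fetch_word(address):
    return int.from_bytes(memory[address:address + WORD], "little")

store_word(8, 123456)
print(fetch_word(8))             # 123456
print(memory[8])                 # the individual bytes are addressable too
```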
Main memories take longer to access data than CPUs take to operate on them. For instance, DRAM memory access typically takes 20 to 80 nanoseconds (billionths of a second), but CPU arithmetic operations may take only a nanosecond or less. There are several ways in which this disparity is handled. CPUs have a small number of registers, very fast SRAM that hold current instructions and the data on which they operate. Cache memory is a larger amount (up to several megabytes) of fast SRAM on the CPU chip. Data and instructions from main memory are transferred to the cache, and since programs frequently exhibit “locality of reference”—that is, they execute the same instruction sequence for a while in a repetitive loop and operate on sets of related data—memory references can be made to the fast cache once values are copied into it from main memory.
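A toy model makes the payoff of locality concrete. The sketch below (a direct-mapped cache with illustrative sizes, not any real CPU's design) counts hits and misses while a loop re-reads the same few addresses:

```python
# A minimal sketch of a direct-mapped cache in front of a slow "main
# memory" (here just a dict), illustrating locality of reference.
main_memory = {addr: addr * 2 for addr in range(1024)}   # pretend DRAM
cache = {}                       # slot -> (address, value)
SLOTS = 16
hits = misses = 0

def read(addr):
    global hits, misses
    slot = addr % SLOTS          # each address maps to one fixed slot
    if slot in cache and cache[slot][0] == addr:
        hits += 1                # fast path: value already cached
        return cache[slot][1]
    misses += 1                  # slow path: fetch from main memory
    cache[slot] = (addr, main_memory[addr])
    return cache[slot][1]

for _ in range(100):             # a loop re-reading the same four addresses
    for addr in (0, 1, 2, 3):
        read(addr)
print(hits, misses)              # 396 4 -- only the first pass misses
```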
Much of the DRAM access time goes into decoding the address to select the appropriate storage cells. The locality of reference property means that a sequence of memory addresses will frequently be used, and fast DRAM is designed to speed access to subsequent addresses after the first one. Synchronous DRAM (SDRAM) and EDO (extended data output) are two such types of fast memory.
Nonvolatile semiconductor memories, unlike SRAM and DRAM, do not lose their contents when power is turned off. Some nonvolatile memories, such as read-only memory (ROM), are not rewritable once manufactured or written. Each memory cell of a ROM chip has either a transistor for a 1 bit or none for a 0 bit. ROMs are used for programs that are essential parts of a computer’s operation, such as the bootstrap program that starts a computer and loads its operating system or the BIOS (basic input/output system) that addresses external devices in a personal computer (PC).
EPROM (erasable programmable ROM), EAROM (electrically alterable ROM), and flash memory are types of nonvolatile memories that are rewritable, though the rewriting is far more time-consuming than reading. They are thus used as special-purpose memories where writing is seldom necessary—if used for the BIOS, for example, they may be changed to correct errors or update features.
Auxiliary Memory
Auxiliary memory units are part of a computer's peripheral equipment. They trade slower access rates for greater storage capacity and data stability. Auxiliary memory holds programs and data for future use, and, because it is nonvolatile (like ROM), it is used to store inactive programs and to archive data. Early forms of auxiliary storage included punched paper tape, punched cards, and magnetic drums. Since the 1980s, the most common forms have been magnetic disks, magnetic tapes, and optical discs.
Magnetic disk drives
Magnetic disks are coated with a magnetic material such as iron oxide. There are two types: hard disks made of rigid aluminum or glass, and removable diskettes made of flexible plastic. In 1956 the first magnetic hard drive (HD) was invented at IBM; consisting of 50 24-inch (61-cm) disks, it had a storage capacity of 5 megabytes. By the 1990s the standard HD diameter for PCs had shrunk to 3.5 inches (about 8.9 cm), with storage capacities in excess of 100 gigabytes (billions of bytes); the standard HD size for portable PCs (“laptops”) was 2.5 inches (about 6.4 cm). Since the invention of the floppy disk drive (FDD) at IBM by Alan Shugart in 1967, diskettes have shrunk from 8 inches (about 20 cm) to the later standard of 3.5 inches (about 8.9 cm). FDDs have low capacity—generally less than two megabytes—and have become obsolete since the introduction of optical disc drives in the 1990s.
Hard drives generally have several disks, or platters, with an electromagnetic read/write head for each surface; the entire assembly is called a comb. A microprocessor in the drive controls the motion of the heads and also contains RAM to store data for transfer to and from the disks. The heads move across the disk surface as it spins up to 15,000 revolutions per minute; the drives are hermetically sealed, permitting the heads to float on a thin film of air very close to the disk’s surface. A small current is applied to the head to magnetize tiny spots on the disk surface for storage; similarly, magnetized spots on the disk generate currents in the head as it moves by, enabling data to be read. FDDs function similarly, but the removable diskettes spin at only a few hundred revolutions per minute.
Data are stored in close concentric tracks that require very precise control of the read/write heads. Refinements in controlling the heads have enabled smaller and closer packing of tracks—up to 20,000 tracks per inch (8,000 tracks per cm) by the start of the 21st century—which has resulted in the storage capacity of these devices growing nearly 30 percent per year since the 1980s. RAID (redundant array of inexpensive disks) combines multiple disk drives to store data redundantly for greater reliability and faster access; such arrays are used in high-performance computer network servers.
Magnetic tape
Magnetic tape, similar to the tape used in tape recorders, has also been used for auxiliary storage, primarily for archiving data. Tape is cheap, but access time is far slower than that of a magnetic disk because it is sequential-access memory—i.e., data must be sequentially read and written as a tape is unwound, rather than retrieved directly from the desired point on the tape. Servers may also use large collections of tapes or optical discs, with robotic devices to select and load them, rather like old-fashioned jukeboxes.
Optical discs
Another form of largely read-only memory is the optical compact disc, developed from videodisc technology during the early 1980s. Data are recorded as tiny pits in a single spiral track on plastic discs that range from 3 to 12 inches (7.6 to 30 cm) in diameter, though a diameter of 4.8 inches (12 cm) is most common. The pits are produced by a laser or by a stamping machine and are read by a low-power laser and a photocell that generates an electrical signal from the varying light reflected from the pattern of pits. Optical discs are removable and have a far greater memory capacity than diskettes; the largest ones can store many gigabytes of information.
A common optical disc is the CD-ROM (compact disc read-only memory). It holds about 700 megabytes of data, recorded with an error-correcting code that can correct bursts of errors caused by dust or imperfections. CD-ROMs are used to distribute software, encyclopaedias, and multimedia text with audio and images. CD-R (CD-recordable), or WORM (write-once read-many), is a variation of CD-ROM on which a user may record information but not subsequently change it. CD-RW (CD-rewritable) discs can be re-recorded. DVDs (digital video, or versatile, discs), developed for recording movies, store data more densely than CD-ROMs, with more powerful error correction. Though the same size as CDs, DVDs typically hold 5 to 17 gigabytes—several hours of video or several million text pages.
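The code actually used on CDs is a cross-interleaved Reed-Solomon code; as a far simpler illustration of the principle, the Python sketch below uses a triple-repetition code, which recovers from a single flipped bit by majority vote:

```python
# A toy error-correcting code (triple repetition with majority vote).
# CDs use a far stronger cross-interleaved Reed-Solomon code; this only
# illustrates the principle of recovering data despite bit errors.
def encode(bits):
    return [b for b in bits for _ in range(3)]      # write each bit 3 times

def decode(coded):
    out = []
    for i in range(0, len(coded), 3):
        triple = coded[i:i + 3]
        out.append(1 if sum(triple) >= 2 else 0)    # majority vote
    return out

coded = encode([1, 0, 1, 1])
coded[4] ^= 1                    # a "dust speck" flips one stored bit
print(decode(coded))             # [1, 0, 1, 1] -- the error is corrected
```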
Magneto-optical discs
Magneto-optical discs are a hybrid storage medium. In reading, spots with different directions of magnetization give different polarization in the reflected light of a low-power laser beam. In writing, every spot on the disc is first heated by a strong laser beam and then cooled under a magnetic field, magnetizing every spot in one direction to store all 0s. The writing process then reverses the direction of the magnetic field to store 1s where desired.
Memory Hierarchy
Although the main/auxiliary memory distinction is broadly useful, memory organization in a computer forms a hierarchy of levels, arranged from very small, fast, and expensive registers in the CPU to small, fast cache memory; larger DRAM; very large hard disks; and slow and inexpensive nonvolatile backup storage. Memory usage by modern computer operating systems spans these levels with virtual memory, a system that provides programs with large address spaces (addressable memory), which may exceed the actual RAM in the computer. Virtual memory gives each program a portion of main memory and stores the rest of its code and data on a hard disk, automatically copying blocks of addresses to and from main memory as needed. The speed of modern hard disks together with the same locality of reference property that lets caches work well makes virtual memory feasible.
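A highly simplified sketch of demand paging follows (Python; the page and RAM sizes are illustrative, and a real operating system's page replacement policy is far more sophisticated):

```python
# A minimal sketch of demand paging: a program sees a large virtual
# address space while only a few pages live in "RAM" at a time.
PAGE = 256                       # bytes per page (illustrative)
RAM_PAGES = 4                    # physical frames available
disk = {}                        # page number -> contents (backing store)
ram = {}                         # page number -> contents (resident set)

def access(virtual_addr):
    page, offset = divmod(virtual_addr, PAGE)
    if page not in ram:                          # page fault
        if len(ram) >= RAM_PAGES:                # evict some resident page
            victim, contents = ram.popitem()
            disk[victim] = contents              # write it back to disk
        ram[page] = disk.pop(page, bytearray(PAGE))  # load (or create) it
    return ram[page][offset]

print(access(10 * PAGE + 5))     # touching page 10 faults it in, returns 0
print(sorted(ram))               # [10] -- only the touched page is resident
```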
616) Computer graphics
Computer graphics, production of images on computers for use in any medium. Images used in the graphic design of printed material are frequently produced on computers, as are the still and moving images seen in comic strips and animations. The realistic images viewed and manipulated in electronic games and computer simulations could not be created or supported without the enhanced capabilities of modern computer graphics. Computer graphics also are essential to scientific visualization, a discipline that uses images and colours to model complex phenomena such as air currents and electric fields, and to computer-aided engineering and design, in which objects are drawn and analyzed in computer programs. Even the windows-based graphical user interface, now a common means of interacting with innumerable computer programs, is a product of computer graphics.
Image Display
Images have high information content, both in terms of information theory (i.e., the number of bits required to represent images) and in terms of semantics (i.e., the meaning that images can convey to the viewer). Because of the importance of images in any domain in which complex information is displayed or manipulated, and also because of the high expectations that consumers have of image quality, computer graphics have always placed heavy demands on computer hardware and software.
In the 1960s early computer graphics systems used vector graphics to construct images out of straight line segments, which were combined for display on specialized computer video monitors. Vector graphics is economical in its use of memory, as an entire line segment is specified simply by the coordinates of its endpoints. However, it is inappropriate for highly realistic images, since most images have at least some curved edges, and using all straight lines to draw curved objects results in a noticeable “stair-step” effect.
In the late 1970s and ’80s raster graphics, derived from television technology, became more common, though still limited to expensive graphics workstation computers. Raster graphics represents images by bitmaps stored in computer memory and displayed on a screen composed of tiny pixels. Each pixel is represented by one or more memory bits. One bit per pixel suffices for black-and-white images, while four bits per pixel specify a 16-step gray-scale image. Eight bits per pixel specify an image with 256 colour levels; so-called “true colour” requires 24 bits per pixel (specifying more than 16 million colours). At that bit depth, a full-screen image requires several megabytes (millions of bytes; 8 bits = 1 byte) of memory. Since the 1990s, raster graphics has become ubiquitous, and personal computers are now commonly equipped with dedicated video memory for holding high-resolution bitmaps.
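The memory arithmetic is straightforward, as this short Python sketch of the bit-depth figures quoted above shows:

```python
# A minimal sketch of bitmap memory arithmetic: bytes needed for one
# full-screen frame at several bit depths (resolution is illustrative).
width, height = 1920, 1080       # a common full-screen resolution
for bits_per_pixel, label in [(1, "black and white"),
                              (8, "256 colours"),
                              (24, "true colour")]:
    total_bytes = width * height * bits_per_pixel // 8
    print(f"{label}: {total_bytes / 1_000_000:.1f} MB")
# true colour: 1920 * 1080 * 3 bytes = about 6.2 MB per frame
```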
3-D Rendering
Although used for display, bitmaps are not appropriate for most computational tasks, which need a three-dimensional representation of the objects composing the image. One standard benchmark for the rendering of computer models into graphical images is the Utah Teapot, created at the University of Utah in 1975. Represented skeletally as a wire-frame image, the Utah Teapot is composed of many small polygons. However, even with hundreds of polygons, the image is not smooth. Smoother representations can be provided by Bezier curves, which have the further advantage of requiring less computer memory. Bezier curves are described by cubic equations; a cubic curve is determined by four points or, equivalently, by two points and the curve’s slopes at those points. Two cubic curves can be smoothly joined by giving them the same slope at the junction. Bezier curves, and related curves known as B-splines, were introduced in computer-aided design programs for the modeling of automobile bodies.
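A cubic Bezier curve is simple to evaluate directly from its four control points, as in this minimal Python sketch (the control points are arbitrary examples):

```python
# A minimal sketch of evaluating a cubic Bezier curve from its four
# control points, as described above.
def bezier(p0, p1, p2, p3, t):
    # B(t) = (1-t)^3 p0 + 3(1-t)^2 t p1 + 3(1-t) t^2 p2 + t^3 p3
    u = 1 - t
    return tuple(u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

# The curve starts at p0, ends at p3, and its slope at each end is set
# by the neighbouring control point -- which is what makes it easy to
# join two curves smoothly by matching slopes at the junction.
pts = [bezier((0, 0), (1, 2), (3, 2), (4, 0), i / 10) for i in range(11)]
print(pts[0], pts[5], pts[10])   # (0, 0) ... (2.0, 1.5) ... (4, 0)
```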
Rendering offers a number of other computational challenges in the pursuit of realism. Objects must be transformed as they rotate or move relative to the observer’s viewpoint. As the viewpoint changes, solid objects must obscure those behind them, and their front surfaces must obscure their rear ones. This technique of “hidden surface elimination” may be done by extending the pixel attributes to include the “depth” of each pixel in a scene, as determined by the object of which it is a part. Algorithms can then compute which surfaces in a scene are visible and which ones are hidden by others. In computers equipped with specialized graphics cards for electronic games, computer simulations, and other interactive computer applications, these algorithms are executed so quickly that there is no perceptible lag - that is, rendering is achieved in “real time.”
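One standard realization of this idea is the depth (z) buffer, which reduces hidden surface elimination to a single comparison per pixel, as in this minimal Python sketch:

```python
# A minimal sketch of hidden-surface elimination with a depth buffer:
# a pixel is only overwritten when the new surface is closer to the viewer.
W, H = 4, 3
frame = [["." for _ in range(W)] for _ in range(H)]   # displayed colours
depth = [[float("inf")] * W for _ in range(H)]        # per-pixel depth

def plot(x, y, z, colour):
    if z < depth[y][x]:          # nearer than whatever is drawn there
        depth[y][x] = z
        frame[y][x] = colour

plot(1, 1, z=5.0, colour="A")    # a far surface
plot(1, 1, z=2.0, colour="B")    # a nearer surface obscures it
plot(1, 1, z=9.0, colour="C")    # a farther one is rejected
print(frame[1][1])               # B
```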
Shading And Texturing
Visual appearance includes more than just shape and colour; texture and surface finish (e.g., matte, satin, glossy) also must be accurately modeled. The effects that these attributes have on an object’s appearance depend in turn on the illumination, which may be diffuse, from a single source, or both. There are several approaches to rendering the interaction of light with surfaces. The simplest shading techniques are flat, Gouraud, and Phong. In flat shading, no textures are used and only one colour tone is used for the entire object, with different amounts of white or black added to each face of the object to simulate shading. The resulting model appears flat and unrealistic. In Gouraud shading, textures may be used (such as wood, stone, stucco, and so forth); each edge of the object is given a colour that factors in lighting, and the computer interpolates (calculates intermediate values) to create a smooth gradient over each face. This results in a much more realistic image. Modern computer graphics systems can render Gouraud images in real time. In Phong shading each pixel takes into account any texture and all light sources. It generally gives more realistic results but is somewhat slower.
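The interpolation step that distinguishes Gouraud shading from flat shading is ordinary linear blending, sketched below in Python (the two vertex colours are chosen arbitrarily):

```python
# A minimal sketch of the interpolation in Gouraud shading: lit colours
# are computed at the vertices, then blended across the face.
def lerp(a, b, t):
    return tuple(x + (y - x) * t for x, y in zip(a, b))

left_vertex = (255, 0, 0)        # colour after lighting at one edge
right_vertex = (0, 0, 255)       # colour after lighting at the other

# One scanline across the face: each pixel gets an intermediate colour,
# giving a smooth gradient instead of a single flat tone.
for i in range(5):
    print(lerp(left_vertex, right_vertex, i / 4))
```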
The shading techniques described thus far do not model specular reflection from glossy surfaces or model transparent and translucent objects. This can be done by ray tracing, a rendering technique that uses basic optical laws of reflection and refraction. Ray tracing follows an imaginary light ray from the viewpoint through each point in a scene. When the ray encounters an object, it is traced as it is reflected or refracted. Ray tracing is a recursive procedure; each reflected or refracted ray is again traced in the same fashion until it vanishes into the background or makes an insignificant contribution. Ray tracing may take a long time—minutes or even hours can be consumed in creating a complex scene.
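The recursive structure is the heart of the method. Below is a heavily simplified Python sketch (one sphere, one overhead light, grey levels only; every scene value is an illustrative assumption, nothing like a production renderer):

```python
# A minimal sketch of recursive ray tracing with mirror reflection.
import math

def intersect_sphere(origin, direction, center, radius):
    # Nearest positive t with |origin + t*direction - center| = radius,
    # or None; direction is assumed to be unit length.
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * x for d, x in zip(direction, oc))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 1e-6 else None

def trace(origin, direction, spheres, depth=0):
    # Returns a grey level; the recursive call follows the reflected ray.
    if depth > 3:                       # "insignificant contribution" cutoff
        return 0.0
    hits = [(intersect_sphere(origin, direction, c, r), c, r, refl)
            for c, r, refl in spheres]
    hits = [h for h in hits if h[0] is not None]
    if not hits:
        return 0.1                      # background intensity
    t, center, radius, refl = min(hits)
    point = [o + t * d for o, d in zip(origin, direction)]
    normal = [(p - c) / radius for p, c in zip(point, center)]
    diffuse = max(0.0, normal[1])       # single overhead light
    d_dot_n = sum(d * n for d, n in zip(direction, normal))
    reflected = [d - 2 * d_dot_n * n for d, n in zip(direction, normal)]
    return (1 - refl) * diffuse + refl * trace(point, reflected,
                                               spheres, depth + 1)

# One partly reflective sphere in front of a camera looking down +z.
scene = [((0.0, 0.0, 3.0), 1.0, 0.5)]
print(trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), scene))
```

Each bounce spawns another call to trace, and the depth limit plays the role of the "insignificant contribution" cutoff described above.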
In reality, objects are illuminated not only directly by a light source such as the Sun or a lamp but also more diffusely by reflected light from other objects. This type of lighting is re-created in computer graphics by radiosity techniques, which model light as energy rather than rays and which look at the effects of all the elements in a scene on the appearance of each object. For example, a brightly coloured object will cast a slight glow of the same colour on surrounding surfaces. Like ray tracing, radiosity applies basic optical principles to achieve realism - and like ray tracing, it is computationally expensive.
Processors And Programs
One way to reduce the time required for accurate rendering is to use parallel processing, so that in ray tracing, for example, multiple rays can be traced at once. Another technique, pipelined parallelism, takes advantage of the fact that graphics processing can be broken into stages—constructing polygons or Bezier surfaces, eliminating hidden surfaces, shading, rasterization, and so on. Using pipelined parallelism, as one image is being rasterized, another can be shaded, and a third can be constructed. Both kinds of parallelism are employed in high-performance graphics processors. Demanding applications with many images may also use “farms” of computers. Even with all of this power, it may take days to render the many images required for a computer-animated motion picture.
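Python generators give a compact, purely sequential picture of such a staged pipeline (the stage names below mirror the text, not any real graphics API); in hardware the stages run concurrently, each working on a different image at the same time:

```python
# A purely illustrative three-stage pipeline built from Python generators.
def construct(models):
    for m in models:
        yield f"polygons({m})"        # build geometry for each image

def shade(stream):
    for item in stream:
        yield f"shaded({item})"       # shade what the previous stage emits

def rasterize(stream):
    for item in stream:
        yield f"pixels({item})"       # convert to pixels last

for frame in rasterize(shade(construct(["teapot", "car", "chair"]))):
    print(frame)
```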
Computer graphics relies heavily on standard software packages. OpenGL (Open Graphics Library) specifies a standard set of graphics routines that may be implemented in programming languages such as C or Java. PHIGS (programmer’s hierarchical interactive graphics system) is another set of graphics routines. VRML (virtual reality modeling language) is a graphics description language for World Wide Web applications. Several commercial and free packages provide extensive three-dimensional modeling capabilities for realistic graphics. More modest tools, offering only elementary two-dimensional graphics, are the “paint” programs commonly installed on home computers.
617) Appendicitis
Appendicitis, inflammation of the appendix, the closed-end tube attached to the cecum, the first region of the large intestine. While some cases are mild and may resolve on their own, most require the removal of the inflamed appendix through abdominal surgery (usually via laparotomy or laparoscopy), often leaving a small scar or scars. If untreated, there is a high risk of peritonitis, in which the inflamed appendix bursts; shock and death can result.
The first person to describe acute appendicitis was American physician Reginald H. Fitz in 1886. His article, “Perforating Inflammation of the Vermiform Appendix with Special Reference to Its Early Diagnosis and Treatment,” was published in the ‘American Journal of Medical Science’ and led to the recognition of appendicitis as one of the most common causes of acute abdominal disease worldwide.
The main factor that seems to precipitate a bout of appendicitis is obstruction of the appendix. Obstruction may arise from blockage by a hard mass of fecal matter (a fecal stone), by infection with parasites, or by the presence of a foreign object. Lymphoid hyperplasia, a rapid increase in the production of white blood cells known as lymphocytes, which may be associated with viral illness, can also cause obstruction. Excessive consumption of alcohol may exacerbate a case of appendicitis. Doctors try to establish whether a patient may have appendicitis by measuring the number of white blood cells (leukocytes), which often increase from the normal count of between 5,000 and 10,000 (for an adult) to an abnormal count of between 12,000 and 20,000. Because such an increase also occurs with other acute inflammatory conditions in the abdomen, however, an elevated count by itself does not confirm the diagnosis.
Those who suffer an attack of appendicitis usually feel pain throughout the abdomen or, sometimes, around the area of the navel; initially the pain is usually not very severe. Within one to six hours of the first pain sensation, the pain becomes restricted to the lower right side and grows sharper. There may also be nausea and vomiting, and patients often develop a fever, although this sometimes appears some hours, or even a day, later.
The basic treatment for appendicitis is complete surgical removal of the appendix in a minor operation called an appendectomy. The operation, conducted under anesthesia, is often completed quickly. Problems arise if the diagnosis of acute appendicitis is not made straightaway. Doctors may wait for a time - often as long as 34 hours - so that a more definitive diagnosis can be made; during that wait it is important for the patient to remain in the hospital in case a medical emergency or the need for surgical intervention arises.
618) Spleen
Spleen, organ of the lymphatic system located in the left side of the abdominal cavity under the diaphragm, the muscular partition between the abdomen and the chest. In humans it is about the size of a fist and is well supplied with blood. As the lymph nodes are filters for the lymphatic circulation, the spleen is the primary filtering element for the blood. The organ also plays an important role in storing and releasing certain types of immune cells that mediate tissue inflammation.
The spleen is encased in a thick connective-tissue capsule. Inside, the mass of splenic tissue is of two types, the red pulp and the white pulp, which do not separate into regions but intermingle and are distributed throughout the spleen. The white pulp is lymphoid tissue that usually surrounds splenic blood vessels. The red pulp is a network of splenic cords (cords of Billroth) and sinusoids (wide vessels) filled with blood, and it is in the red pulp that most of the filtration occurs.
The white pulp of the spleen contains typical lymphoid elements, such as plasma cells, lymphocytes, and lymphatic nodules, called follicles in the spleen. Germinal centres in the white pulp serve as the sites of lymphocyte production. Similar to the lymph nodes, the spleen reacts to microorganisms and other antigens that reach the bloodstream by releasing special phagocytic cells known as macrophages. Splenic macrophages reside in both red and white pulp, and they serve to remove foreign material from the blood and to initiate an immune reaction that results in the production of antibodies.
The splenic cords in the red pulp serve as important reservoirs for large quantities of macrophages and other phagocytic white blood cells called monocytes. Studies have shown that upon severe tissue injury, such as that sustained during a heart attack, the spleen releases a legion of monocytes, which travel through the bloodstream to the site of injury. There they serve to regulate inflammation and to facilitate tissue healing. In animals whose spleens have been removed, this monocyte response is not observed at the site of tissue injury, and healing is less thorough. In addition, humans who have had their spleens removed (a procedure known as a splenectomy) appear to be at increased risk of infections and, as they age, of cardiovascular disease and possibly even certain types of cancer. It is suspected that the absence of immune-regulating factors released from the spleen increases susceptibility to such diseases in individuals who have undergone a splenectomy.
The red pulp has a specialized role in addition to filtration. It is the body’s major site of the destruction of red blood cells, which normally have a life span of only 120 days. Degenerate red cells are removed from the circulation in the spleen, and the hemoglobin that they contain is degraded to a readily excretable pigment and an iron molecule that is recycled (i.e., used to produce new hemoglobin elsewhere).
In some species the spleen also acts as a reservoir for blood during periods of inactivity. When such an animal is aroused for defense or flight, the capsule of the spleen contracts, forcing additional blood reserves into the circulation. It is unclear whether the human spleen has this capability.
619) Seashell
Seashell, hard exoskeleton of marine mollusks such as snails, bivalves, and chitons that serves to protect and support their bodies. It is composed largely of calcium carbonate secreted by the mantle, a skinlike tissue in the mollusk’s body wall. Seashells are usually made up of several layers of distinct microstructures that have differing mechanical properties. The shell layers are secreted by different parts of the mantle, although incremental growth takes place only at the shell margin. One of the most distinctive microstructures is nacre, or mother-of-pearl, which occurs as an inner layer in the shells of some gastropods and bivalves and in those of the cephalopods Nautilus and Spirula.
Seashells may be univalved (as in snails) or bivalved (as in clams), or they may be composed of a series of plates (as in chitons). They may also be reduced to small internal plates or granules, as in some slugs. In gastropods, bivalves, and shelled cephalopods, the coiled form of the shell approximates an equiangular spiral or variations of it. In some forms, such as the worm shells (family Vermetidae), however, the coiling of the shell is irregular. Shells are frequently ornamented with complex arrangements of spines, folia, ribs, cords, and grooves, which in some species provide protection against predators, give added strength, or assist in burrowing. The aperture of gastropod shells is particularly vulnerable to predators and may be protected by complex folds and teeth. Many species use a calcareous or horny operculum (trapdoor) on the foot to seal off the aperture when the foot is withdrawn into the shell. In the cephalopods Nautilus and Spirula, the planospirally coiled shell consists of multiple chambers connected by a porous tube called the siphuncle. The chambers contain quantities of water and gas that are adjusted by the siphuncle to achieve neutral buoyancy. Many seashells are brightly coloured in complicated designs by a variety of pigments secreted by special cells in the edge of the mantle. In some cases there is an obvious camouflage function, but in most others the significance of the colours is unclear.
Seashells are collected all over the world because of their endless diversity, elegance of form, and bright colours. They also have been used to make jewelry, buttons, inlays, and other decorative items throughout history. In ancient times certain varieties, such as tooth shells and cowrie shells, were even used as money.
620) International Space Station
The International Space Station (ISS) is a multi-nation construction project and the largest single structure humans have ever put into space. Its main construction took place between 1998 and 2011, although the station continually evolves to accommodate new missions and experiments. It has been continuously occupied since Nov. 2, 2000.
As of January 2018, 230 individuals from 18 countries had visited the International Space Station. Top participating countries include the United States (145 people) and Russia (46 people). Astronaut time and research time on the space station are allocated to space agencies according to how much money or resources (such as modules or robotics) they contribute. The ISS includes contributions from 15 nations. NASA (United States), Roscosmos (Russia) and the European Space Agency are the major partners and contribute most of the funding; the other partners are the Japan Aerospace Exploration Agency and the Canadian Space Agency.
Current plans call for the space station to be operated through at least 2024, with the partners discussing a possible extension until 2028. Afterwards, plans for the space station are not clearly laid out. It could be deorbited, or recycled for future space stations in orbit.
Crews aboard the ISS are assisted by mission control centers in Houston and Moscow and a payload control center in Huntsville, Ala. Other international mission control centers in Japan, Canada and Europe also support the space station.
Finding the space station in the sky
The space station flies at an average altitude of 248 miles (400 kilometers) above Earth. It circles the globe every 90 minutes at a speed of about 17,500 mph (28,000 km/h). In one day, the station travels about the distance it would take to go from Earth to the moon and back.
The space station can rival the brilliant planet Venus in brightness and appears as a bright moving light crossing the night sky. It can be seen from Earth without a telescope by observers who know when and where to look; NASA offers an online tracking tool that provides sighting times and locations.
Crew composition and activities
The ISS generally holds crews of between three and six people (the full six-person size was possible after 2009, when the station facilities could support it). But crew sizes have varied over the years. After the Columbia space shuttle disaster in 2003 that grounded flights for several years, crews were as small as two people due to the reduced capacity to launch people into space on the smaller Russian Soyuz spacecraft. The space station has also housed as many as 13 people several times, but only for a few days during crew changeovers or space shuttle visits.
The space shuttle fleet retired in 2011, leaving Soyuz as the only current method of bringing people to the ISS. Three astronauts fly to the space station in a Soyuz spacecraft and spend about six months there at a time. Sometimes mission lengths vary a little due to spacecraft scheduling or special events (such as the one-year crew that stayed on the station between 2015 and 2016). If the crew needs to evacuate the station, they can return to Earth aboard the two Russian Soyuz vehicles docked to the ISS.
Starting in 2019 or 2020, the commercial crew vehicles Dragon (by SpaceX) and CST-100 (by Boeing) are expected to increase ISS crew numbers because they can bring up more astronauts at a time than Soyuz. When the U.S. commercial vehicles are available, demand for Soyuz will decrease because NASA will purchase fewer seats for its astronauts from the Russians.
Astronauts spend most of their time on the ISS performing experiments and maintenance, and at least two hours of every day are allocated to exercise and personal care. They also occasionally perform spacewalks, conduct media/school events for outreach, and post updates to social media, as Canadian astronaut Chris Hadfield, an ISS commander, did in 2013. (However, the first astronaut to tweet from space was Mike Massimino, who did it from a space shuttle in May 2009.)
The ISS is a platform for long-term research for human health, which NASA bills as a key stepping stone to letting humans explore other solar system destinations such as the moon or Mars. Human bodies change in microgravity, including alterations to muscles, bones, the cardiovascular system and the eyes; many scientific investigations are trying to characterize how severe the changes are and whether they can be reversed. (Eye problems in particular are vexing the agency, as their cause is unclear and astronauts are reporting permanent changes to vision after returning to Earth.)
Astronauts also participate in testing out commercial products – such as an espresso machine or 3D printers – or doing biological experiments, such as on rodents or plants, which the astronauts can grow and sometimes eat in space.
Crews are not only responsible for science, but also for maintaining the station. Sometimes, this requires that they venture on spacewalks to perform repairs. From time to time, these repairs can be urgent — such as when a part of the ammonia system fails, which has happened a couple of times. Spacewalk safety procedures were changed after a potentially deadly 2013 incident when astronaut Luca Parmitano's helmet filled with water while he was working outside the station. NASA now responds quickly to "water incursion" incidents. It also has added pads to the spacesuits to soak up the liquid, and a tube to provide an alternate breathing location should the helmet fill with water.
NASA is also testing technology that could supplement or replace astronaut spacewalks. One example is Robonaut. A prototype currently on board the station is able to flip switches and do other routine tasks under supervision, and may be modified at some point to work "outside" as well.
Records in space
The ISS has had several notable milestones over the years, when it comes to crews:
• Most consecutive days in space by an American: 340 days, which happened when Scott Kelly took part in a one-year mission to the International Space Station in 2015-16 (along with Russian cosmonaut Mikhail Kornienko). The space agencies did a comprehensive suite of experiments on the astronauts, including a "twin study" with Kelly and his Earth-bound former astronaut twin, Mark. NASA has expressed interest in more long-duration missions, although none have yet been announced.
• Longest single spaceflight by a woman: 289 days, during American astronaut Peggy Whitson's 2016-17 mission aboard the space station.
• Most total time spent in space by a woman: Again, that's Peggy Whitson, who racked up most of her 665 days in space on the ISS.
• Most women in space at once: This happened in April 2010 when women from two spaceflight missions met at the ISS. This included Tracy Caldwell Dyson (who flew on a Soyuz spacecraft for a long-duration mission) and NASA astronauts Stephanie Wilson and Dorothy Metcalf-Lindenburger and Japan's Naoko Yamazaki, who arrived aboard the space shuttle Discovery on its brief STS-131 mission.
• Biggest space gathering: 13 people, during NASA's STS-127 shuttle mission aboard Endeavour in 2009. (It's been tied a few times during later missions.)
• Longest single spacewalk: 8 hours and 56 minutes during STS-102, for an ISS construction mission in 2001. NASA astronauts Jim Voss and Susan Helms participated.
• Longest Russian spacewalk: 8 hours and 13 minutes during Expedition 54, to repair an ISS antenna. Russian cosmonauts Alexander Misurkin and Anton Shkaplerov participated.
Structure
The space station, including its large solar arrays, spans the area of a U.S. football field, including the end zones, and weighs 861,804 lbs. (391,000 kilograms), not including visiting vehicles. The complex now has more livable room than a conventional five-bedroom house, and has two bathrooms, gym facilities and a 360-degree bay window. Astronauts have also compared the space station's living space to the cabin of a Boeing 747 jumbo jet.
The International Space Station was taken into space piece-by-piece and gradually built in orbit using spacewalking astronauts and robotics. Most missions used NASA's space shuttle to carry up the heavier pieces, although some individual modules were launched on single-use rockets. The ISS includes modules and connecting nodes that contain living quarters and laboratories, as well as exterior trusses that provide structural support, and solar panels that provide power.
The first module, Russia's Zarya, launched on Nov. 20, 1998, on a Proton rocket. Two weeks later, space shuttle flight STS-88 launched the NASA Unity/Node 1 module. Astronauts performed spacewalks during STS-88 to connect the two parts of the station; later, other pieces of the station were launched on rockets or in the space shuttle cargo bay. Some of the other major modules and components include:
• The truss, airlocks and solar panels (launched in stages throughout the ISS lifetime; docking adapters were launched in 2017 for new commercial spacecraft)
• Zvezda (Russia; launched in 2000)
• Destiny Laboratory Module (NASA; launched 2001)
• Canadarm2 robotic arm (CSA; launched 2001). It was originally used only for spacewalks and remote-controlled repairs. Today it also is regularly used to berth cargo spacecraft to the space station – spacecraft that can't use the other ports.
• Harmony/Node 2 (NASA; launched 2007)
• Columbus orbital facility (ESA; launched 2008)
• Dextre robotic hand (CSA; launched 2008)
• Japanese Experiment Module or Kibo (launched in stages between 2008-09)
• Cupola window and Tranquility/Node 3 (launched 2010)
• Leonardo Permanent Multipurpose Module (ESA; launched for permanent residency in 2011, although it was used before that to bring cargo to and from the station)
• Bigelow Expandable Activity Module (private module launched in 2016)
Spacecraft for the space station
Besides the space shuttle and Soyuz, the space station has been visited by many other kinds of spacecraft. Uncrewed Russian Progress vehicles make regular visits to the station. Europe's Automated Transfer Vehicle and Japan's H-II Transfer Vehicle also visited the ISS until their programs were retired.
NASA fostered the development of commercial cargo spacecraft for the space station under the Commercial Orbital Transportation Services program, which lasted from 2006 to 2013. Starting in 2012, the first commercial spacecraft, SpaceX's Dragon, made a visit to the space station. Visits continue today with Dragon and Orbital ATK's Cygnus spacecraft (launched on the Antares rocket) under the first phase of NASA's Commercial Resupply Services program. Dragon, Cygnus and Sierra Nevada Corp.'s Dream Chaser have all received CRS-2 contracts expected to cover flights between 2019 and 2024.
621) Esophagus
Esophagus, also spelled oesophagus, relatively straight muscular tube through which food passes from the pharynx to the stomach. The esophagus can contract or expand to allow for the passage of food. Anatomically, it lies behind the trachea and heart and in front of the spinal column; it passes through the muscular diaphragm before entering the stomach. Both ends of the esophagus are closed off by muscular constrictions known as sphincters; at the anterior, or upper, end is the upper esophageal sphincter, and at the distal, or lower, end is the lower esophageal sphincter.
The upper esophageal sphincter is composed of circular muscle tissue and remains closed most of the time. Food entering the pharynx relaxes this sphincter and passes through it into the esophagus; the sphincter immediately closes to prevent food from backing up. Contractions of the muscles in the esophageal wall (peristalsis) move the food down the esophageal tube. The food is pushed ahead of the peristaltic wave until it reaches the lower esophageal sphincter, which opens, allowing food to pass into the stomach, and then closes to prevent the stomach’s gastric juices and contents from entering the esophagus.
Disorders of the esophagus include ulceration and bleeding; heartburn, caused by gastric juices in the esophagus; achalasia, an inability to swallow or to pass food from the esophagus to the stomach, caused by destruction of the nerve endings in the walls of the esophagus; scleroderma, a collagen disease; and spasms of the esophageal muscles.
In some vertebrates the esophagus is not merely a tubular connection between the pharynx and the stomach but rather may serve as a storage reservoir or an ancillary digestive organ. In many birds, for example, an expanded region of the esophagus anterior to the stomach forms a thin-walled crop, which is the bird’s principal organ for the temporary storage of food. Some birds use the crop to carry food to their young. Ruminant mammals, such as the cow, are often said to have four “stomachs.” Actually, the first three of these chambers (rumen, reticulum, and omasum) are thought to be derived from the esophagus. Vast numbers of bacteria and protozoans live in the rumen and reticulum. When food enters these chambers, the microbes begin to digest and ferment it, breaking down not only protein, starch, and fats but cellulose as well. The larger, coarser material is periodically regurgitated as the cud, and after further chewing the cud is reswallowed. Slowly the products of microbial action, and some of the microbes themselves, move into the cow’s true stomach and intestine, where further digestion and absorption take place. Since the cow, like other mammals, has no cellulose-digesting enzymes of its own, it relies upon the digestive activity of these symbiotic microbes in its digestive tract. Much of the cellulose in the cow’s herbivorous diet, which otherwise would have no nutritive value, is thereby made available to the cow.
If the mouth is the gateway to the body, then the esophagus is a highway for food and drink to travel along to make it to the stomach. This body part has a very simple function, but can have many disorders.
Function
The esophagus is a tube that connects the throat (pharynx) and the stomach. It is about 8 inches (20 centimeters) long. The esophagus isn’t just a hollow tube that food slips down like a water slide, though. The esophagus is made of muscles that contract to move food to the stomach. This process is called peristalsis.
At the top of the esophagus is a band of muscle called the upper esophageal sphincter. Another band of muscle, the lower esophageal sphincter, is at the bottom of the tube, slightly above the stomach. When a person swallows, these sphincters relax so food can pass into the stomach. When not in use, they contract so food and stomach acid do not flow back up the esophagus.
Conditions and diseases
As a person ages, the sphincters weaken, making some people more prone to backflow of acid from the stomach, a condition called gastroesophageal reflux disease (GERD). GERD can cause severe damage to the esophagus, according to the National Library of Medicine.
“GERD is due to the reflux of acid contents of the stomach that get refluxed up into the esophagus," Dr. Lisa Ganjhu, a clinical assistant professor of medicine and gastroenterologist at NYU Langone Medical Center, told Live Science. "Acid is not meant to be in the esophagus so the symptoms of that may be a burning sensation in the chest, the pain can be so intense that it feels like a heart attack. It is always best to seek medical attention if you are having those symptoms.”
Some people are sensitive to certain foods that lower the pressure of the lower esophageal sphincter and this allows the acid to wash up into the esophagus. Anxiety also increases the sensitivity of the esophagus so the sensation is more severe.
GERD can also cause esophagus ulcers. An ulcer is an open sore that, in this case, is located in the esophagus. Some symptoms are pain, nausea, heartburn and chest pain, according to the University of Minnesota Medical Center.
Barrett’s esophagus is a condition that may occur when the lining of the esophagus changes to be more like the lining of the intestine, according to The National Institute of Diabetes and Digestive and Kidney Diseases. This condition can turn into a rare cancer called esophageal adenocarcinoma. There is no known cause of this disorder, but doctors have found that those with GERD are more likely to get Barrett’s.
According to the American Cancer Society, esophageal cancer typically has no symptoms until it is advanced. Symptoms include difficulty swallowing (also called dysphagia), chest pain and weight loss.
Esophagus spasms, also called "nutcracker esophagus," are unexplained muscle contractions of the esophagus that can be quite painful, according to the Mayo Clinic. One of the symptoms is severe, sudden chest pain and, if the spasms are frequent, they can prevent swallowing.
Another disorder that can prevent swallowing is motor neuron disease. Motor neuron diseases (MND) affect millions of Americans, with over 100,000 diagnosed annually. Between 80 and 95 percent of people living with MND experience some loss of speech and swallowing before they die, according to a press release by Johns Hopkins. “The disease really is characterized by, and for the most part, normal mental function, normal sensation," said Dr. Nicholas Maragakis, co-medical director of The Johns Hopkins ALS Clinic. "Patients gradually get weaker, over time. Unlike stroke, it’s not a disease that happens overnight.”
Promoting good esophagus health
Ganjhu gave these tips for the best way to prevent reflux of food and acid into the esophagus and ways to help treat GERD:
• Eat small meals so that the food does not sit in the stomach and instead moves on to the small bowel to be further digested.
• Try acid-blocking medications.
• Avoid or reduce consumption of foods and beverages that contain caffeine, chocolate, peppermint, spearmint and alcohol.
• Avoid all carbonated drinks.
• Cut down on fatty foods.
• Eat a diet rich in fruits and vegetables, although it may be best to avoid acidic vegetables and fruits (such as oranges, lemons, grapefruit, pineapple and tomatoes) if they bother you.
• Quit smoking.
• Overweight people should try to diet and exercise to lose weight. A starting goal is to lose 5 to 10 percent of your present weight.
• People with GERD should avoid wearing tight clothing, particularly around the abdomen.
• If possible, GERD patients should avoid nonsteroidal anti-inflammatory drugs (NSAIDs), such as aspirin, ibuprofen (Motrin, Advil) or naproxen (Aleve).
• After meals, take a walk or stay upright.
• Avoid bedtime snacks. In general, do not eat for at least two hours before bedtime.
• When going to bed, try lying on the left side rather than the right side. The stomach is located higher than the esophagus while sleeping on the right side, which can put pressure on the lower esophageal sphincter (LES), increasing the risk for fluid backup.
• Sleep in a tilted position to help keep acid in the stomach at night. To do this, raise the bed at an angle using 4- to 6-inch (10 to 15 centimeters) blocks under the head of the bed. Use a wedge support to elevate the top half of your body. Extra pillows that only raise the head actually increase the risk for reflux.
622) Ulcer
Ulcer, a lesion or sore on the skin or mucous membrane resulting from the gradual disintegration of surface epithelial tissue. An ulcer may be superficial, or it may extend into the deeper layer of the skin or other underlying tissue. An ulcer has a depressed floor or crater surrounded by sharply defined edges that are sometimes elevated above the level of the adjoining surface. The main symptom of an ulcer is pain.
The main causes of ulcers are infection, faulty blood circulation, nerve damage, trauma, nutritional disturbances (including thiamine and other vitamin deficiencies), and cancer. Bacterial infections such as tuberculosis or syphilis can cause ulcers on any surface of the body. Any infection under the skin, such as a boil or carbuncle, may break through the surface and form an inflammatory ulcer. The ulcers on the legs of persons with varicose veins are caused by the slow circulation of the blood in the skin. Diabetics may sustain ulcers on their feet or toes after losing sensation in those areas due to nervous-system damage. A bedsore, or decubitus ulcer, typically occurs on the skin of the back in immobilized or bedridden persons. A peptic ulcer is an ulcer that occurs in the stomach or the first segment of the duodenum, parts of the intestinal tract that are bathed by gastric juice. Ulcers can also result from thermal burns, electrical burns, and frostbite.
When an ulcer of the skin does not heal or is hard to the touch, the possibility of cancer must be considered. The probability of cancer is increased if the patient is past middle age. Ulcers on the border of the lower lip in elderly men are frequently cancers. Such cancers must be recognized and treated early before they spread and become inoperable. By contrast, superficial ulcers on the lips, known as cold sores, are caused by a virus and are not serious. Ulcers in the mouth and throat are frequently caused by infection but are sometimes cancerous, especially in older persons. Cancerous ulcers may also occur in the stomach, small or large intestine, and rectum.
623) Peptic ulcer
Peptic ulcer, lesion that occurs primarily in the mucous membrane of the stomach or duodenum (the upper segment of the small intestine); it is produced when external factors reduce the ability of the mucosal lining to resist the acidic effects of gastric juice (a mixture of digestive enzymes and hydrochloric acid). Until recently the factors responsible for peptic ulcers remained unclear; a stressful lifestyle and rich diet commonly were blamed. Evidence now indicates that infection with the bacterium Helicobacter pylori and long-term use of nonsteroidal anti-inflammatory drugs (NSAIDs) are the two major causes of peptic ulcer.
Between 10 and 15 percent of the world’s population suffers from peptic ulcer. Duodenal ulcers, which account for 80 percent of peptic ulcers, are more common in men than in women, but stomach ulcers affect women more frequently. The symptoms of gastric and duodenal ulcer are similar and include a gnawing, burning ache and hungerlike pain in the mid-upper abdomen, usually experienced from one to three hours after meals and several hours after retiring.
In the early 1980s two Australian researchers, Barry Marshall and J. Robin Warren, challenged previous theories of ulcer development with evidence that ulcers could be caused by H. pylori. This theory was greeted with skepticism because it was thought that no organism could live in the highly acidic conditions of the stomach and duodenum. H. pylori overcomes this obstacle by converting the abundant waste product, urea, into carbon dioxide and ammonia, creating a more neutral local environment. Although the mechanism is not completely understood, this process causes the mucosal lining to break down. In its weakened condition the lining cannot withstand the corrosive effects of gastric acid, and an ulcer can form.
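For reference, the net reaction catalyzed by the bacterium's urease enzyme (a standard biochemical fact, stated here in LaTeX notation) is:

$$\mathrm{CO(NH_2)_2} + \mathrm{H_2O} \xrightarrow{\text{urease}} 2\,\mathrm{NH_3} + \mathrm{CO_2}$$

The ammonia produced buffers nearby gastric acid, yielding the more neutral local environment described above.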
Infection with H. pylori is the most common bacterial infection in humans; it is pervasive in the Third World, and in the United States it affects about a third of the population. Among those who suffer from peptic ulcers, as many as 90 percent of those with duodenal ulcers and 70 percent with gastric ulcers are believed to be infected with H. pylori. Evidence also exists that untreated H. pylori infection may lead to stomach cancer. The recommended treatments for H. pylori-induced ulcers are antibiotics, such as tetracycline, metronidazole, amoxicillin, and clarithromycin, and drugs that stop the secretion of stomach acid, including proton pump inhibitors (omeprazole or lansoprazole) and H2 blockers (cimetidine and ranitidine). Bismuth subsalicylate may also be used to protect the lining of the stomach from acid.
Most peptic ulcers not caused by H. pylori infection result from the ingestion of large quantities of NSAIDs, which often are prescribed for conditions such as rheumatoid arthritis. Withdrawal of NSAID treatment usually allows the ulcer to heal, but if this is not possible the ulcer can be managed with the H2 blockers cimetidine and ranitidine (marketed as Tagamet™ and Zantac™, respectively) or with the proton pump inhibitors lansoprazole (Prevacid™) and omeprazole (Losec™ or Prilosec™). A small proportion of peptic ulcers results from Zollinger-Ellison syndrome, an uncommon disease associated with a tumour of the duodenum or pancreas that causes an increase in gastric acid secretion. Cigarette smoking has been found to have an adverse effect on peptic ulcers, slowing healing and promoting recurrence. Complications of ulcers include bleeding, perforation of the stomach or intestinal wall, and obstruction of the gastrointestinal tract.
624) Body mass index
Body mass index (BMI), an estimate of total body fat. The BMI is defined as weight in kilograms divided by the square of the height in metres: BMI = weight / height². This number, which is central to determining whether an individual is clinically defined as obese, parallels fatness but is not a direct measure of body fat. It is also a less sensitive indicator of fatness than a skinfold caliper or other methods that measure body fat more directly.
Interpretation of BMI numbers is based on weight status groupings (underweight, healthy weight, overweight, and obese) that are adjusted for age and sex in children but are uniform for adults. For all adults over age 20, the same cutoffs apply to women and men alike: a BMI between 18.5 and 24.9 is considered healthy, a BMI below 18.5 is considered underweight, a BMI between 25.0 and 29.9 equates with overweight, and 30.0 and above with obesity. Definitions of overweight and obesity are more difficult to quantify for children, whose BMI changes with age.
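As a quick illustration, here is a minimal Python sketch of the calculation and the adult cutoffs described above; the function names are illustrative, not part of any standard library:

def bmi(weight_kg, height_m):
    # Body mass index: weight in kilograms divided by the square of height in metres.
    return weight_kg / height_m ** 2

def weight_status(value):
    # Weight status groupings for adults age 20 and over, as given above.
    if value < 18.5:
        return "underweight"
    if value < 25.0:
        return "healthy weight"
    if value < 30.0:
        return "overweight"
    return "obese"

# Example: a 70 kg adult who is 1.75 m tall has a BMI of 70 / 1.75**2, about 22.9.
print(round(bmi(70, 1.75), 1), weight_status(bmi(70, 1.75)))  # 22.9 healthy weight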
625) Tumour
Tumour, also spelled tumor, also called neoplasm, a mass of abnormal tissue that arises without obvious cause from preexisting body cells, has no purposeful function, and is characterized by a tendency to independent and unrestrained growth. Tumours are quite different from inflammatory or other swellings because the cells in tumours are abnormal in appearance and other characteristics. Abnormal cells—the kind that generally make up tumours—differ from normal cells in having undergone one or more of the following alterations: (1) hypertrophy, or an increase in the size of individual cells; this feature is occasionally encountered in tumours but occurs commonly in other conditions; (2) hyperplasia, or an increase in the number of cells within a given zone; in some instances it may constitute the only criterion of tumour formation; (3) anaplasia, or a regression of the physical characteristics of a cell toward a more primitive or undifferentiated type; this is an almost constant feature of malignant tumours, though it occurs in other instances both in health and in disease.
In some instances the cells of a tumour are normal in appearance; the differences between them and normal body cells can be discerned only with some difficulty. Such tumours are more often benign than not. Other tumours are composed of cells that appear different from normal adult types in size, shape, and structure; they usually belong to tumours that are malignant. Such cells may be bizarre in form or may be arranged in a distorted manner. In more extreme cases, the cells of malignant tumours are described as primitive, or undifferentiated, because they have lost the appearance and functions of the particular type of (normal) specialized cell that was their predecessor. As a rule, the less differentiated a malignant tumour’s cells are, the more quickly the tumour may be expected to grow.
Malignancy refers to the ability of a tumour ultimately to cause death. Any tumour, either benign or malignant in type, may produce death by local effects if it is appropriately situated. The common and more specific definition of malignancy implies an inherent tendency of the tumour’s cells to metastasize (invade the body widely and become disseminated by subtle means) and eventually to kill the patient unless all the malignant cells can be eradicated.
Metastasis is thus the outstanding characteristic of malignancy: the tendency of tumour cells to be carried from their site of origin by way of the circulatory system and other channels, eventually establishing these cells in almost every tissue and organ of the body. In contrast, the cells of a benign tumour invariably remain in contact with each other in one solid mass centred on the site of origin. Because of this physical continuity, benign tumour cells may be removed completely by surgery if the location is suitable. But the dissemination of malignant cells, each one individually possessing (through cell division) the ability to give rise to new tumours in new and distant sites, precludes complete eradication by a single surgical procedure in all but the earliest period of growth.
A mass of tumour cells usually constitutes a definite localized swelling that, if it occurs on or near the surface of the body, can be felt as a lump. Deeply placed tumours, however, may not be palpable. Some tumours, and particularly malignant ones, may appear as ulcers, hardened cracks or fissures, wartlike projections, or a diffuse, ill-defined infiltration of what appears to be an otherwise normal organ or tissue.
Pain is a variable symptom with tumours. It most commonly results from the growing tumour pressing on adjacent nerve tracts. In their early stages all tumours tend to be painless, and those that grow to a large size without interfering with local functions may remain painless. Eventually, however, most malignant tumours cause pain by the direct invasion of nerves or the destruction of bone.
All benign tumours tend to remain localized at the site of origin. Many benign tumours are enclosed by a capsule consisting of connective tissue derived from the structures immediately surrounding the tumour. Well-encapsulated tumours are not anchored to their surrounding tissues. These benign tumours enlarge by a gradual buildup, pushing aside the adjacent tissues without involving them intimately. Malignant tumours, by contrast, do not usually possess a capsule; they invade the surrounding tissues, making surgical removal more difficult or risky.
A benign tumour may undergo malignant transformation, though the cause of such a change is unknown. It is also possible for a malignant tumour to remain quiescent, clinically mimicking a benign one, for a long time. Regression of a malignant tumour to a benign one, however, is essentially unknown.
Among the major types of benign tumours are the following: lipomas, which are composed of fat cells; angiomas, which are composed of blood or lymphatic vessels; osteomas, which arise from bone; chondromas, which arise from cartilage; and adenomas, which arise from glands.
626) Leaf
Leaf, in botany, any usually flattened green outgrowth from the stem of a vascular plant. As the primary sites of photosynthesis, leaves manufacture food for plants, which in turn ultimately nourish and sustain all land animals. Botanically, leaves are an integral part of the stem system, and they are initiated in the apical bud (growing tip of a stem) along with the tissues of the stem itself. Certain organs that are superficially very different from the usual green leaf are formed in the same manner and are actually modified leaves; among these are the sharp spines of cacti, the needles of pines and other conifers, and the scales of an asparagus stalk or a lily bulb.
Typically, a leaf consists of a broad expanded blade (the lamina), attached to the plant stem by a stalklike petiole. Leaves are, however, quite diverse in size, shape, and various other characteristics, including the nature of the blade margin and the type of venation (arrangement of veins). Veins, which support the lamina and transport materials to and from the leaf tissues, radiate through the lamina from the petiole. The types of venation are characteristic of different kinds of plants: for example, dicotyledons have netlike venation and usually free vein endings; monocotyledons have parallel venation and rarely free vein endings. The leaf may be simple—with a single blade—or compound—with separate leaflets; it may also be reduced to a spine or scale.
The main function of a leaf is to produce food for the plant by photosynthesis. Chlorophyll, the substance that gives plants their characteristic green colour, absorbs light energy. The internal structure of the leaf is protected by the leaf epidermis, which is continuous with the stem epidermis. The central tissue of the leaf, or mesophyll, consists of soft-walled, unspecialized cells of the type known as parenchyma. As much as one-fifth of the mesophyll is composed of chlorophyll-containing chloroplasts, which absorb sunlight and, in conjunction with certain enzymes, use the radiant energy to decompose water into its elements, hydrogen and oxygen. The oxygen liberated from green leaves replaces the oxygen removed from the atmosphere by plant and animal respiration and by combustion. The hydrogen obtained from water is combined with carbon dioxide in the enzymatic processes of photosynthesis to form the sugars that are the basis of both plant and animal life. Oxygen is passed into the atmosphere through stomata (pores in the leaf surface).
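The process described here is conventionally summarized by the balanced overall equation for photosynthesis (standard chemistry, added for reference):

6 CO2 + 6 H2O + light energy → C6H12O6 + 6 O2

Isotope studies have shown that the released oxygen comes from the water molecules, consistent with the decomposition of water described above.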
Chlorophylls, the green pigments, are usually present in much greater quantities than the other leaf pigments. In autumn chlorophyll production slows as the days get shorter and cooler. As the remaining chlorophyll breaks down and fades, the colours of the other pigments are revealed. These include carotene (yellow), xanthophyll (pale yellow), anthocyanin (red if the sap is slightly acidic, bluish if it is slightly alkaline, with intermediate shades between), and betacyanin (red). Tannins give oak leaves their dull brown colour.
Leaves are essentially short-lived structures. Even when they persist for two or three years, as in coniferous and broad-leaved evergreens, they make little contribution to the plant after the first year. The fall of leaves, whether in the first autumn in deciduous trees or after several years in evergreens, results from the formation of a weak zone, the abscission layer, at the base of the petiole. Abscission layers may form when leaves are seriously damaged by insects, disease, or drought. Their normal formation in autumn appears to be, in part at least, due to the shortening of the day. Perhaps the shorter days accentuate the senile changes normal in older leaves. As a result, a zone of cells across the petiole becomes softened until the leaf falls. A healing layer then forms on the stem and closes the wound, leaving the leaf scar, a prominent feature in many winter twigs and an aid in identification.
627) Transpiration
Transpiration, in botany, a plant’s loss of water, mainly through the stomates of leaves. Stomatal openings are necessary to admit carbon dioxide to the leaf interior and to allow oxygen to escape during photosynthesis; hence transpiration is generally considered to be merely an unavoidable phenomenon that accompanies the real functions of the stomates. It has been proposed that transpiration provides the energy to transport water in the plant and may aid in heat dissipation in direct sunlight (by cooling through evaporation of water), though these theories have been challenged. Excessive transpiration can be extremely injurious to a plant. When water loss exceeds water intake, it can retard the plant’s growth and ultimately lead to death by dehydration.
Transpiration was first measured by Stephen Hales (1677–1761), an English botanist and physiologist. He noticed that plants “imbibe” and “perspire” significant amounts of water compared to animals and created a novel method for measuring the emission of water vapour by plants. He found that transpiration occurred from the leaves and that this process encouraged a continuous upward flow of water and dissolved nutrients from the roots. Modern research has shown that as much as 99 percent of the water taken in by the roots of a plant is released into the air as water vapour.
Leaf stomates are the primary sites of transpiration and consist of two guard cells that form a small pore on the surfaces of leaves. The guard cells control the opening and closing of the stomates in response to various environmental stimuli and can regulate the rate of transpiration to reduce water loss. Darkness and internal water deficit tend to close stomates and decrease transpiration; illumination, ample water supply, and optimum temperature open stomates and increase transpiration. Many plants close their stomates under high temperature conditions to reduce evaporation or under high concentrations of carbon dioxide gas, when the plant likely has sufficient quantities for photosynthesis.
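As a rough illustration only, the qualitative rules in the paragraph above can be collected into a toy Python decision function; the temperature threshold and argument names below are invented placeholders, not measured values:

def stomates_open(dark, water_deficit, temp_c, co2_abundant):
    # Toy summary of the stimuli described above, not a physiological model.
    if dark or water_deficit:
        return False  # darkness and internal water deficit close stomates
    if temp_c > 35 or co2_abundant:
        return False  # high temperature or ample CO2 also promote closure
    return True  # illumination, ample water, and moderate temperature open them

# A well-watered leaf in daylight at 25 degrees C with normal CO2 levels:
print(stomates_open(dark=False, water_deficit=False, temp_c=25, co2_abundant=False))  # True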
Physically, plants that live in areas with low humidity commonly have leaves with less surface area so that evaporation is limited. Conversely, plants in humid areas, especially those in low light conditions like understory vegetation, may have large leaves because the need for adequate sunlight is heightened and the risk of detrimental water loss is low. Many desert plants have minute leaves that are deciduous during drought periods, which nearly eliminates water loss during the dry season, and cacti lack leaves altogether. Waxy cuticles, trichomes (leaf hairs), sunken stomates, and other leaf adaptations also help reduce transpiration rates by keeping the leaf surface cool or protecting it from air currents that increase evaporation. Finally, some plants have evolved alternative photosynthetic pathways, like crassulacean acid metabolism (CAM), to minimize transpiration losses. These plants, including many succulents, open their stomates at night to take in carbon dioxide and close them during the day when conditions are commonly hot and dry.
628) Stomate
Stomate, also called stoma, plural stomata or stomas, any of the microscopic openings or pores in the epidermis of leaves and young stems. Stomata are generally more numerous on the underside of leaves. They provide for the exchange of gases between the outside air and the branched system of interconnecting air canals within the leaf.
A stomate opens and closes in response to the internal pressure of the two sausage-shaped guard cells that surround it. The inner wall of a guard cell is thicker than the outer wall. When the guard cell fills with water and becomes turgid, the thinner outer wall balloons outward, drawing the inner wall with it and causing the stomate to enlarge.
Guard cells work to control excessive water loss, closing on hot, dry, or windy days and opening when conditions are more favourable for gas exchange. For most plants, dawn triggers a sudden increase in stomatal opening, reaching a maximum near noon, which is followed by a decline because of water loss. Recovery and reopening are then followed by another decline as darkness approaches. In plants that photosynthesize with the CAM carbon fixation pathway, such as bromeliads and members of the family Crassulaceae, stomata are opened at night to reduce water loss from evapotranspiration.
The concentration of carbon dioxide in the air is another regulator of stomatal opening in many plants. When carbon dioxide levels fall below normal (about 0.03 percent), the guard cells become turgid and the stomata enlarge.
Stomate (definition): a tiny pore on the surface of a leaf or herbaceous stem, surrounded by a pair of guard cells that regulate its opening and closure, and serving as a site for gas exchange.
Supplement
The stomates are actually the pores created by the swelling of guard cells, allowing CO2, a necessary reactant of photosynthesis, to enter the leaf; water vapour and O2 escape via the same pore. To form the pore, osmotic pressure draws water into the guard cells and increases their volume; this in turn causes the guard cells to bow apart from each other, because the inner wall of the pore is more rigid than the wall on the opposite side of the cell.
Stomates are present in all terrestrial plants (in the sporophyte phase) except the liverworts. Dicots usually have more stomates on the lower epidermis than on the upper, whereas monocots usually have the same number on both sides. Plants whose leaves float on water have stomates only on the upper epidermis, whereas plants whose leaves are completely submerged may lack stomata entirely.
CAM, short for crassulacean acid metabolism, is a method of carbon fixation that evolved in some plants adapted to dry conditions.
In most plants the stomata, which act like tiny mouths scattered along the surfaces of the leaves, open during the day to take in CO2 and release O2.