Math Is Fun Forum

Discussion about math, puzzles, games and fun.


#176 2018-07-17 00:03:39

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,495

Re: Miscellany

159) Carbon-14 dating

Carbon-14 dating, also called radiocarbon dating, is a method of age determination that depends upon the decay to nitrogen of radiocarbon (carbon-14). Carbon-14 is continually formed in nature by the interaction of neutrons with nitrogen-14 in the Earth’s atmosphere; the neutrons required for this reaction are produced by cosmic rays interacting with the atmosphere.
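
For reference, the two reactions involved can be written in standard nuclear notation (my addition, not part of the original passage):

n + {}^{14}_{7}\mathrm{N} \;\rightarrow\; {}^{14}_{6}\mathrm{C} + p, \qquad {}^{14}_{6}\mathrm{C} \;\rightarrow\; {}^{14}_{7}\mathrm{N} + e^{-} + \bar{\nu}_{e}

The first is the cosmic-ray neutron capture that forms radiocarbon; the second is the beta decay back to nitrogen on which the dating method rests.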

Radiocarbon present in molecules of atmospheric carbon dioxide enters the biological carbon cycle: it is absorbed from the air by green plants and then passed on to animals through the food chain. Radiocarbon decays slowly in a living organism, and the amount lost is continually replenished as long as the organism takes in air or food. Once the organism dies, however, it ceases to absorb carbon-14, so that the amount of the radiocarbon in its tissues steadily decreases. Carbon-14 has a half-life of 5,730 ± 40 years—i.e., half the amount of the radioisotope present at any given time will undergo spontaneous disintegration during the succeeding 5,730 years. Because carbon-14 decays at this constant rate, an estimate of the date at which an organism died can be made by measuring the amount of its residual radiocarbon.
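
As a rough worked example (my own sketch, not from the original text), the age follows from inverting the half-life decay law N = N0·(1/2)^(t/T½), where the residual fraction N/N0 is what the laboratory measures:

import math

HALF_LIFE_YEARS = 5730  # half-life of carbon-14 quoted above

def radiocarbon_age(fraction_remaining):
    """Estimate the years since death from the fraction of carbon-14 remaining."""
    # Invert N = N0 * (1/2)**(t / T_half)  =>  t = T_half * log2(N0 / N)
    return HALF_LIFE_YEARS * math.log2(1.0 / fraction_remaining)

# A sample retaining 25% of its original carbon-14 is about two half-lives old.
print(round(radiocarbon_age(0.25)))  # -> 11460 years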

The carbon-14 method was developed by the American physicist Willard F. Libby about 1946. It has proved to be a versatile technique of dating fossils and archaeological specimens from 500 to 50,000 years old. The method is widely used by Pleistocene geologists, anthropologists, archaeologists, and investigators in related fields.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#177 2018-07-17 03:24:41

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,495

Re: Miscellany

160) Nuclear Power

Nuclear power is electricity generated by power plants that derive their heat from fission in a nuclear reactor. Except for the reactor, which plays the role of a boiler in a fossil-fuel power plant, a nuclear power plant is similar to a large coal-fired power plant, with pumps, valves, steam generators, turbines, electric generators, condensers, and associated equipment.

World Nuclear Power

Nuclear power provides almost 15 percent of the world’s electricity. The first nuclear power plants, which were small demonstration facilities, were built in the 1960s. These prototypes provided “proof-of-concept” and laid the groundwork for the development of the higher-power reactors that followed.

The nuclear power industry went through a period of remarkable growth until about 1990, when the portion of electricity generated by nuclear power reached a high of 17 percent. That percentage remained stable through the 1990s and began to decline slowly around the turn of the 21st century, primarily because total electricity generation grew faster than nuclear generation, with other sources of energy (particularly coal and natural gas) expanding more quickly to meet the rising demand. This trend appears likely to continue well into the 21st century. The Energy Information Administration (EIA), a statistical arm of the U.S. Department of Energy, has projected that world electricity generation between 2005 and 2035 will roughly double (from more than 15,000 terawatt-hours to 35,000 terawatt-hours) and that generation from all energy sources except petroleum will continue to grow.

In 2012 more than 400 nuclear reactors were in operation in 30 countries around the world, and more than 60 were under construction. The United States has the largest nuclear power industry, with more than 100 reactors; it is followed by France, which has more than 50. Of the top 15 electricity-producing countries in the world, all but two, Italy and Australia, utilize nuclear power to generate some of their electricity. The overwhelming majority of nuclear reactor generating capacity is concentrated in North America, Europe, and Asia. The early period of the nuclear power industry was dominated by North America (the United States and Canada), but in the 1980s that lead was overtaken by Europe. The EIA projects that Asia will have the largest nuclear capacity by 2035, mainly because of an ambitious building program in China.

A typical nuclear power plant has a generating capacity of approximately one gigawatt (GW; one billion watts) of electricity. At this capacity, a power plant that operates about 90 percent of the time (the U.S. industry average) will generate about eight terawatt-hours of electricity per year. The predominant types of power reactors are pressurized water reactors (PWRs) and boiling water reactors (BWRs), both of which are categorized as light water reactors (LWRs) because they use ordinary (light) water as a moderator and coolant. LWRs make up more than 80 percent of the world’s nuclear reactors, and more than three-quarters of the LWRs are PWRs.
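
The eight-terawatt-hour figure follows from simple arithmetic (a quick check of my own, using only the numbers quoted above):

capacity_gw = 1.0        # typical plant capacity of about one gigawatt
capacity_factor = 0.90   # operates about 90 percent of the time (U.S. industry average)
hours_per_year = 8760

energy_twh = capacity_gw * capacity_factor * hours_per_year / 1000  # GWh -> TWh
print(f"{energy_twh:.1f} TWh per year")  # -> 7.9 TWh, i.e. roughly eight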

Issues Affecting Nuclear Power

Countries may have a number of motives for deploying nuclear power plants, including a lack of indigenous energy resources, a desire for energy independence, and a goal to limit greenhouse gas emissions by using a carbon-free source of electricity. The benefits of applying nuclear power to these needs are substantial, but they are tempered by a number of issues that need to be considered, including the safety of nuclear reactors, their cost, the disposal of radioactive waste, and a potential for the nuclear fuel cycle to be diverted to the development of nuclear weapons. All of these concerns are discussed below.

Safety

The safety of nuclear reactors has become paramount since the Fukushima accident of 2011. The lessons learned from that disaster included the need to (1) adopt risk-informed regulation, (2) strengthen management systems so that decisions made in the event of a severe accident are based on safety and not cost or political repercussions, (3) periodically assess new information on risks posed by natural hazards such as earthquakes and associated tsunamis, and (4) take steps to mitigate the possible consequences of a station blackout.

The four reactors involved in the Fukushima accident were first-generation BWRs designed in the 1960s. Newer Generation III designs, on the other hand, incorporate improved safety systems and rely more on so-called passive safety designs (i.e., directing cooling water by gravity rather than moving it by pumps) in order to keep the plants safe in the event of a severe accident or station blackout. For instance, in the Westinghouse AP1000 design, residual heat would be removed from the reactor by water circulating under the influence of gravity from reservoirs located inside the reactor’s containment structure. Active and passive safety systems are incorporated into the European Pressurized Water Reactor (EPR) as well.

Traditionally, enhanced safety systems have resulted in higher construction costs, but passive safety designs, by requiring the installation of far fewer pumps, valves, and associated piping, may actually yield a cost saving.

Economics

A convenient economic measure used in the power industry is known as the levelized cost of electricity, or LCOE, which is the cost of generating one kilowatt-hour (kWh) of electricity averaged over the lifetime of the power plant. The LCOE is also known as the “busbar cost,” as it represents the cost of the electricity up to the power plant’s busbar, a conducting apparatus that links the plant’s generators and other components to the distribution and transmission equipment that delivers the electricity to the consumer.

The busbar cost of a power plant is determined by 1) capital costs of construction, including finance costs, 2) fuel costs, 3) operation and maintenance (O&M) costs, and 4) decommissioning and waste-disposal costs. For nuclear power plants, busbar costs are dominated by capital costs, which can make up more than 70 percent of the LCOE. Fuel costs, on the other hand, are a relatively small factor in a nuclear plant’s LCOE (less than 20 percent). As a result, the cost of electricity from a nuclear plant is very sensitive to construction costs and interest rates but relatively insensitive to the price of uranium. Indeed, the fuel costs for coal-fired plants tend to be substantially greater than those for nuclear plants. Even though fuel for a nuclear reactor has to be fabricated, the cost of nuclear fuel is substantially less than the cost of fossil fuel per kilowatt-hour of electricity generated. This fuel cost advantage is due to the enormous energy content of each unit of nuclear fuel compared to fossil fuel.
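
A minimal sketch of how a levelized cost can be computed from those four cost categories, assuming illustrative made-up figures and a simple constant-discount scheme (real LCOE models are considerably more detailed):

def lcoe(capital, annual_fuel, annual_om, annual_waste_fee, annual_kwh,
         lifetime_years, discount_rate):
    """Levelized cost: discounted lifetime costs divided by discounted lifetime output."""
    costs = capital          # capital (including finance) treated as an up-front lump sum
    output = 0.0
    for year in range(1, lifetime_years + 1):
        discount = (1 + discount_rate) ** year
        costs += (annual_fuel + annual_om + annual_waste_fee) / discount
        output += annual_kwh / discount
    return costs / output    # dollars per kilowatt-hour

# Hypothetical 1-GW plant at 90% capacity factor over 40 years (illustrative numbers only):
annual_kwh = 1_000_000 * 8760 * 0.9
print(round(lcoe(6e9, 6e7, 1.5e8, 8e6, annual_kwh, 40, 0.07), 3), "$/kWh")

With these made-up inputs the capital term dominates the result, which is the point the paragraph above makes about nuclear busbar costs.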

The O&M costs for nuclear plants tend to be higher than those for fossil-fuel plants because of the complexity of a nuclear plant and the regulatory issues that arise during the plant’s operation. Costs for decommissioning and waste disposal are included in fees charged by electrical utilities. In the United States, nuclear-generated electricity was assessed a fee of $0.001 per kilowatt-hour to pay for a permanent repository of high-level nuclear waste. This seemingly modest fee yielded about $750 million per year for the Nuclear Waste Fund.

At the beginning of the 21st century, electricity from nuclear plants typically cost less than electricity from coal-fired plants, but this formula may not apply to the newer generation of nuclear power plants, given the sensitivity of busbar costs to construction costs and interest rates. Another major uncertainty is the possibility of carbon taxes or stricter regulations on carbon dioxide emissions. These measures would almost certainly raise the operating costs of coal plants and thus make nuclear power more competitive.

Radioactive-waste disposal

Spent nuclear reactor fuel and the waste stream generated by fuel reprocessing contain radioactive materials and must be conditioned for permanent disposal. The amount of waste coming out of the nuclear fuel cycle is very small compared with the amount of waste generated by fossil fuel plants. However, nuclear waste is highly radioactive (hence its designation as high-level waste, or HLW), which makes it very dangerous to the public and the environment. Extreme care must be taken to ensure that it is stored safely and securely, preferably deep underground in permanent geologic repositories.

Despite years of research into the science and technology of geologic disposal, no permanent disposal site is in use anywhere in the world. In the last decades of the 20th century, the United States made preparations for constructing a repository for commercial HLW beneath Yucca Mountain, Nevada, but by the turn of the 21st century, this facility had been delayed by legal challenges and political decisions. Pending construction of a long-term repository, U.S. utilities have been storing HLW in so-called dry casks aboveground. Some other countries using nuclear power, such as Finland, Sweden, and France, have made more progress and expect to have HLW repositories operational in the period 2020–25.

Proliferation

The claim has long been made that the development and expansion of commercial nuclear power led to nuclear weapons proliferation, because elements of the nuclear fuel cycle (including uranium enrichment and spent-fuel reprocessing) can also serve as pathways to weapons development. However, the history of nuclear weapons development does not support the notion of a necessary connection between weapons proliferation and commercial nuclear power.

The first pathway to proliferation, uranium enrichment, can lead to a nuclear weapon based on highly enriched uranium (see nuclear weapon: Principles of atomic (fission) weapons). It is considered relatively straightforward for a country to fabricate a weapon with highly enriched uranium, but the impediment historically has been the difficulty of the enrichment process. Since nuclear reactor fuel for LWRs is only slightly enriched (less than 5 percent of the fissile isotope uranium-235) and weapons need a minimum of 20 percent enriched uranium, commercial nuclear power is not a viable pathway to obtaining highly enriched uranium.

The second pathway to proliferation, reprocessing, results in the separation of plutonium from the highly radioactive spent fuel. The plutonium can then be used in a nuclear weapon. However, reprocessing is heavily guarded in those countries where it is conducted, making commercial reprocessing an unlikely pathway for proliferation. Also, it is considered more difficult to construct a weapon with plutonium than with highly enriched uranium.

More than 20 countries have developed nuclear power industries without building nuclear weapons. On the other hand, countries that have built and tested nuclear weapons have followed paths other than purchasing commercial nuclear reactors, reprocessing the spent fuel, and obtaining plutonium. Some have built facilities for the express purpose of enriching uranium; some have built plutonium production reactors; and some have surreptitiously diverted research reactors to the production of plutonium. All these pathways to nuclear proliferation have been more effective, less expensive, and easier to hide from prying eyes than the commercial nuclear power route. Nevertheless, nuclear proliferation remains a highly sensitive issue, and any country that wishes to launch a commercial nuclear power industry will necessarily draw the close attention of oversight bodies such as the International Atomic Energy Agency.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#178 2018-07-19 00:43:09

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,495

Re: Miscellany

161) Eyeglasses

Eyeglasses, also called glasses or spectacles, are lenses set in frames for wearing in front of the eyes to aid vision or to correct such defects of vision as myopia, hyperopia, and astigmatism. In 1268 Roger Bacon made the earliest recorded comment on the use of lenses for optical purposes, but magnifying lenses inserted in frames were being used for reading in both Europe and China at this time, and it is a matter of controversy whether the West learned from the East or vice versa. In Europe eyeglasses first appeared in Italy, their introduction being attributed to Alessandro di Spina of Florence. The first portrait to show eyeglasses is that of Hugh of Provence by Tommaso da Modena, painted in 1352. In 1480 Domenico Ghirlandaio painted St. Jerome at a desk from which dangled eyeglasses; as a result, St. Jerome became the patron saint of the spectacle-makers’ guild. The earliest glasses had convex lenses to aid farsightedness. A concave lens for myopia, or nearsightedness, is first evident in the portrait of Pope Leo X painted by Raphael in 1517.

In 1784 Benjamin Franklin invented bifocals, dividing his lenses for distant and near vision, the split parts being held together by the frame. Cemented bifocals were invented in 1884, and the fused and one-piece types followed in 1908 and 1910, respectively. Trifocals and new designs in bifocals were later introduced, including the Franklin bifocal revived in one-piece form.

Originally, lenses were made of transparent quartz and beryl, but increased demand led to the adoption of optical glass, for which Venice and Nürnberg were the chief centres of production. Ernst Abbe and Otto Schott in 1885 demonstrated that the incorporation of new elements into the glass melt led to many desirable variations in refractive index and dispersive power. In the modern process, glass for lenses is first rolled into plate form. Most lenses are made from clear crown glass of refractive index 1.523. In high myopic corrections, a cosmetic improvement is effected if the lenses are made of dense flint glass (refractive index 1.69) and coated with a film of magnesium fluoride to nullify the surface reflections. Flint glass, or barium crown, which has less dispersive power, is used in fused bifocals. Plastic lenses have become increasingly popular, particularly if the weight of the lenses is a problem, and plastic lenses are more shatterproof than glass ones. In sunglasses, the lenses are tinted to reduce light transmission and avoid glare.
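
A small illustration of why the coating helps (standard thin-film optics, not part of the original text): at normal incidence an uncoated surface reflects R = ((n2 - n1)/(n2 + n1))^2 of the light, and a single quarter-wave layer cancels reflections best when its refractive index is near the square root of the glass index, which is roughly what magnesium fluoride provides on dense flint glass:

import math

def reflectance(n1, n2):
    """Fresnel reflectance at normal incidence for an uncoated interface."""
    return ((n2 - n1) / (n2 + n1)) ** 2

print(f"crown glass (n = 1.523): {reflectance(1.0, 1.523):.1%} reflected per surface")
print(f"dense flint (n = 1.69):  {reflectance(1.0, 1.69):.1%} reflected per surface")
# Ideal single-layer coating index is about sqrt(n_glass); sqrt(1.69) = 1.30,
# and magnesium fluoride (n ~ 1.38) comes reasonably close.
print(f"ideal coating index for flint: {math.sqrt(1.69):.2f}")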



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#179 2018-07-21 00:09:25

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,495

Re: Miscellany

162) Electric furnace

An electric furnace is a heating chamber with electricity as the heat source for achieving very high temperatures to melt and alloy metals and refractories. The electricity has no electrochemical effect on the metal but simply heats it.

Modern electric furnaces generally are either arc furnaces or induction furnaces. A third type, the resistance furnace, is still used in the production of silicon carbide and electrolytic aluminum; in this type, the furnace charge (i.e., the material to be heated) serves as the resistance element. In one type of resistance furnace, the heat-producing current is introduced by electrodes buried in the metal. Heat also may be produced by resistance elements lining the interior of the furnace.

Electric furnaces produce roughly two-fifths of the steel made in the United States. They are used by specialty steelmakers to produce almost all the stainless steels, electrical steels, tool steels, and special alloys required by the chemical, automotive, aircraft, machine-tool, transportation, and food-processing industries. Electric furnaces are also employed exclusively by mini-mills, small plants that use scrap charges to produce reinforcing bars, merchant bars (e.g., angles and channels), and structural sections.

The German-born British inventor Sir William Siemens first demonstrated the arc furnace in 1879 at the Paris Exposition by melting iron in crucibles. In this furnace, horizontally placed carbon electrodes produced an electric arc above the container of metal. The first commercial arc furnace in the United States was installed in 1906; it had a capacity of four tons and was equipped with two electrodes. Modern furnaces range in heat size from a few tons up to 400 tons, and the arcs strike directly into the metal bath from vertically positioned, graphite electrodes. Although the three-electrode, three-phase, alternating-current furnace is in general use, single-electrode, direct-current furnaces have been installed more recently.

In the induction furnace, a coil carrying alternating electric current surrounds the container or chamber of metal. Eddy currents are induced in the metal (charge), the circulation of these currents producing extremely high temperatures for melting the metals and for making alloys of exact composition.
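
As a hedged illustration of how deeply those eddy currents penetrate the charge, the standard skin-depth formula can be evaluated; the material values below are assumed for illustration and are not from the original text:

import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space, H/m

def skin_depth(resistivity_ohm_m, frequency_hz, relative_permeability=1.0):
    """Depth at which the induced eddy-current density falls to 1/e of its surface value."""
    return math.sqrt(resistivity_ohm_m /
                     (math.pi * frequency_hz * MU_0 * relative_permeability))

# Assumed values: molten steel, roughly 1.4e-6 ohm*m and non-magnetic (mu_r ~ 1).
for f in (50, 1000, 10000):  # mains frequency and two typical medium frequencies, Hz
    print(f"{f:>6} Hz: skin depth ~ {skin_depth(1.4e-6, f) * 1000:.0f} mm")

Lower frequencies heat and stir the whole bath, while higher frequencies concentrate the heating near the surface, which is one reason induction furnaces are operated at different frequencies for different jobs.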

[Image: electric furnace parts diagram]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#180 2018-07-23 00:44:17

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,495

Re: Miscellany

163) Dyslexia

What is dyslexia?

Dyslexia is a language-based learning disability. It refers to a cluster of symptoms that result in people having difficulties with specific language skills, particularly reading. Students with dyslexia often experience difficulties with other language skills as well, such as writing and pronouncing words. Dyslexia affects individuals throughout their lives; however, its impact can change at different stages in a person’s life. It is referred to as a learning disability because dyslexia can make it very difficult for a student to succeed without phonics-based reading instruction, which is unavailable in most public schools. In its more severe forms, a student with dyslexia may qualify for special education with specially designed instruction and, as appropriate, accommodations.

What causes dyslexia?

The exact causes of dyslexia are still not completely clear, but anatomical and brain imagery studies show differences in the way the brain of a person with dyslexia develops and functions. Moreover, most people with dyslexia have been found to have difficulty with identifying the separate speech sounds within a word and/or learning how letters represent those sounds, a key factor in their reading difficulties. Dyslexia is not due to either lack of intelligence or desire to learn; with appropriate teaching methods, individuals with dyslexia can learn successfully.

What are the effects of dyslexia?

The impact that dyslexia has is different for each person and depends on the severity of the condition and the effectiveness of instruction or remediation. The core difficulty is with reading words and this is related to difficulty with processing and manipulating sounds. Some individuals with dyslexia manage to learn early reading and spelling tasks, especially with excellent instruction, but later experience their most challenging problems when more complex language skills are required, such as grammar, understanding textbook material, and writing essays.

People with dyslexia can also have problems with spoken language, even after they have been exposed to good language models in their homes and good language instruction in school. They may find it difficult to express themselves clearly, or to fully comprehend what others mean when they speak. Such language problems are often difficult to recognize, but they can lead to major problems in school, in the workplace, and in relating to other people. The effects of dyslexia can reach well beyond the classroom.

Dyslexia can also affect a person’s self-image. Students with dyslexia often end up feeling less intelligent and less capable than they actually are. After experiencing a great deal of stress due to academic problems, a student may become discouraged about continuing in school.

Are There Other Learning Disabilities Besides Dyslexia?

Dyslexia is one type of learning disability. Other learning disabilities besides dyslexia include the following:

(i) Dyscalculia – a mathematical disability in which a person has unusual difficulty solving arithmetic problems and grasping math concepts.
(ii) Dysgraphia – a condition of impaired letter writing by hand—disabled handwriting. Impaired handwriting can interfere with learning to spell words in writing and speed of writing text. Children with dysgraphia may have only impaired handwriting, only impaired spelling (without reading problems), or both impaired handwriting and impaired spelling.
(iii) Attention deficit disorder (ADD) and attention-deficit/hyperactivity disorder (ADHD) can and do impact learning, but they are not learning disabilities. An individual can have more than one learning or behavioral disability. In various studies, as many as 50% of those diagnosed with a learning or reading disability have also been diagnosed with ADHD. Although disabilities may co-occur, one is not the cause of the other.

How Common Are Language-Based Learning Disabilities?

15-20% of the population has a language-based learning disability. Of the students with specific learning disabilities receiving special education services, 70-80% have deficits in reading. Dyslexia is the most common cause of reading, writing, and spelling difficulties. Dyslexia affects males and females nearly equally, as well as people from different ethnic and socio-economic backgrounds.

Can Individuals Who Have Dyslexia Learn To Read?

Yes. If children who have dyslexia receive effective phonological awareness and phonics training in Kindergarten and 1st grade, they will have significantly fewer problems in learning to read at grade level than do children who are not identified or helped until 3rd grade. 74% of the children who are poor readers in 3rd grade remain poor readers in the 9th grade, many because they do not receive appropriate Structured Literacy instruction with the needed intensity or duration. Often they can’t read well as adults either. It is never too late for individuals with dyslexia to learn to read, process, and express information more efficiently. Research shows that programs utilizing Structured Literacy instructional techniques can help children and adults learn to read.

How Do People “Get” Dyslexia?

The causes of dyslexia are neurobiological and genetic. Individuals inherit the genetic links for dyslexia. Chances are that one of the child’s parents, grandparents, aunts, or uncles has dyslexia. Dyslexia is not a disease. With proper diagnosis, appropriate instruction, hard work, and support from family, teachers, friends, and others, individuals who have dyslexia can succeed in school and later as working adults.

[Image: dyslexic writing]


--------------------------------------------------------------------------------------------------------------------------------------------------------------


164) Obesity

Obesity, also called corpulence or fatness, is the excessive accumulation of body fat, usually caused by the consumption of more calories than the body can use. The excess calories are then stored as fat, or adipose tissue. Overweight, if moderate, is not necessarily obesity, particularly in muscular or large-boned individuals.

Defining Obesity

Obesity was traditionally defined as an increase in body weight that was greater than 20 percent of an individual’s ideal body weight—the weight associated with the lowest risk of death, as determined by certain factors, such as age, height, and gender. Based on these factors, overweight could then be defined as a 15–20 percent increase over ideal body weight. However, today the definitions of overweight and obesity are based primarily on measures of height and weight—not morbidity. These measures are used to calculate a number known as body mass index (BMI). This number, which is central to determining whether an individual is clinically defined as obese, parallels fatness but is not a direct measure of body fat. Interpretation of BMI numbers is based on weight status groupings, such as underweight, healthy weight, overweight, and obese, that are adjusted for age and gender. For all adults over age 20, BMI numbers correlate to the same weight status designations; for example, a BMI between 25.0 and 29.9 equates with overweight and 30.0 and above with obesity. Morbid obesity (also known as extreme, or severe, obesity) is defined as a BMI of 40.0 or higher.
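
A minimal sketch of the BMI arithmetic and the adult cut-offs mentioned above (the underweight and healthy-weight thresholds of 18.5 and 25 are the standard adult values, added here for completeness):

def bmi(weight_kg, height_m):
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def adult_weight_status(index):
    if index < 18.5:
        return "underweight"
    if index < 25.0:
        return "healthy weight"
    if index < 30.0:
        return "overweight"
    if index < 40.0:
        return "obese"
    return "morbidly (extremely) obese"

value = bmi(95, 1.75)
print(f"BMI {value:.1f}: {adult_weight_status(value)}")  # -> BMI 31.0: obese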

The Obesity Epidemic

Body weight is influenced by the interaction of multiple factors. There is strong evidence of genetic predisposition to fat accumulation, and obesity tends to run in families. However, the rise in obesity in populations worldwide since the 1980s has outpaced the rate at which genetic mutations are normally incorporated into populations on a large scale. In addition, growing numbers of persons in parts of the world where obesity was once rare have also gained excessive weight. According to the World Health Organization (WHO), which considered global obesity an epidemic, in 2014 more than 1.9 billion adults (age 18 or older) worldwide were overweight and 600 million, representing 13 percent of the world’s adult population, were obese.

The prevalence of overweight and obesity varied across countries, across towns and cities within countries, and across populations of men and women. In China and Japan, for instance, the obesity rate for men and women was about 5 percent, but in some cities in China it had climbed to nearly 20 percent. In 2005 it was found that more than 70 percent of Mexican women were obese. WHO survey data released in 2010 revealed that more than half of the people living in countries in the Pacific Islands region were overweight, with some 80 percent of women in American Samoa found to be obese.

Childhood Obesity

Childhood obesity has become a significant problem in many countries. Overweight children often face stigma and suffer from emotional, psychological, and social problems. Obesity can negatively impact a child’s education and future socioeconomic status. In 2004 an estimated nine million American children over age six, including teenagers, were overweight, or obese (the terms were typically used interchangeably in describing excess fatness in children). Moreover, in the 1980s and 1990s the prevalence of obesity had more than doubled among children age 2 to 5 (from 5 percent to 10 percent) and age 6 to 11 (from 6 percent to 15 percent). In 2008 these numbers had increased again, with nearly 20 percent of children age 2 to 19 being obese in the United States. Further estimates in some rural areas of the country indicated that more than 30 percent of school-age children suffered from obesity. Similar increases were seen in other parts of the world. In the United Kingdom, for example, the prevalence of obesity among children age 2 to 10 had increased from 10 percent in 1995 to 14 percent in 2003, and data from a study conducted there in 2007 indicated that 23 percent of children age 4 to 5 and 32 percent of children age 10 to 11 were overweight or obese. By 2014, WHO data indicated, worldwide some 41 million children age 5 or under were overweight or obese.

In 2005 the American Academy of Pediatrics called obesity “the pediatric epidemic of the new millennium.” Overweight and obese children were increasingly diagnosed with high blood pressure, elevated cholesterol, and type II diabetes mellitus—conditions once seen almost exclusively in adults. In addition, overweight children experience broken bones and problems with joints more often than normal-weight children. The long-term consequences of obesity in young people are of great concern to pediatricians and public health experts because obese children are at high risk of becoming obese adults. Experts on longevity have concluded that today’s American youth might “live less healthy and possibly even shorter lives than their parents” if the rising prevalence of obesity is left unchecked.

Curbing the rise in childhood obesity was the aim of the Alliance for a Healthier Generation, a partnership formed in 2005 by the American Heart Association, former U.S. president Bill Clinton, and the children’s television network Nickelodeon. The alliance intended to reach kids through a vigorous public-awareness campaign. Similar projects followed, including American first lady Michelle Obama’s Let’s Move! program, launched in 2010, and campaigns against overweight and obesity were made in other countries as well.

Efforts were also under way to develop more-effective childhood obesity-prevention strategies, including the development of methods capable of predicting infants’ risk of later becoming overweight or obese. One such tool reported in 2012 was found to successfully predict newborn obesity risk by taking into account newborn weight, maternal and paternal BMI, the number of members in the newborn’s household, maternal occupational status, and maternal smoking during pregnancy.

Causes Of Obesity

In European and other Caucasian populations, genome-wide association studies have identified genetic variations in small numbers of persons with childhood-onset morbid obesity or adult morbid obesity. In one study, a chromosomal deletion involving 30 genes was identified in a subset of severely obese individuals whose condition manifested in childhood. Although the deleted segment was found in less than 1 percent of the morbidly obese study population, its loss was believed to contribute to aberrant hormone signaling, namely of leptin and insulin, which regulate appetite and glucose metabolism, respectively. Dysregulation of these hormones is associated with overeating (or hyperphagy) and with tissue resistance to insulin, increasing the risk of type II diabetes. The identification of genomic defects in persons affected by morbid obesity has indicated that, at least for some individuals, the condition arises from a genetic cause.

For most persons affected by obesity, however, the causes of their condition are more complex, involving the interaction of multiple factors. Indeed, the rapid rise in obesity worldwide is likely due to major shifts in environmental factors and changes in behaviour rather than a significant change in human genetics. For example, early feeding patterns imposed by an obese mother upon her offspring may play a major role in a cultural, rather than genetic, transmission of obesity from one generation to the next. Likewise, correlations between childhood obesity and practices such as infant birth by cesarean section, which has risen substantially in incidence worldwide, indicate that environment and behaviour may have a much larger influence on the early onset of obesity than previously thought. More generally, the distinctive way of life of a nation and the individual’s behavioral and emotional reaction to it may contribute significantly to widespread obesity. Among affluent populations, an abundant supply of readily available high-calorie foods and beverages, coupled with increasingly sedentary living habits that markedly reduce caloric needs, can easily lead to overeating. The stresses and tensions of modern living also cause some individuals to turn to foods and alcoholic drinks for “relief.” Indeed, researchers have found that the cause of obesity in all countries shares distinct similarities—diets rich in sweeteners and saturated fats, lack of exercise, and the availability of inexpensive processed foods.

The root causes of childhood obesity are complex and are not fully understood, but it is clear that children become obese when they eat too much and exercise too little. In addition, many children make poor food decisions, choosing to eat unhealthy, sugary snacks instead of healthy fruits and vegetables. Lack of calorie-burning exercise has also played a major role in contributing to childhood obesity. In 2005 a survey found that American children age 8 to 18 spent an average of about four hours a day watching television and videos and two additional hours playing video games and using computers. Furthermore, maternal consumption of excessive amounts of fat during pregnancy programs overeating behaviour in children. For example, children have an increased preference for fatty foods if their mothers ate a high-fat diet during pregnancy. The physiological basis for this appears to be associated with fat-induced changes in the fetal brain. For example, when pregnant rats consume high-fat diets, brain cells in the developing fetuses produce large quantities of appetite-stimulating proteins called orexigenic peptides. These peptides continue to be produced at high levels following birth and throughout the lifetime of the offspring. As a result, these rats eat more, weigh more, and mature sexually earlier in life compared with rats whose mothers consumed normal levels of fats during pregnancy.

Health Effects Of Obesity

Obesity may be undesirable from an aesthetic sense, especially in parts of the world where slimness is the popular preference, but it is also a serious medical problem. Generally, obese persons have a shorter life expectancy; they suffer earlier, more often, and more severely from a large number of diseases than do their normal-weight counterparts. For example, people who are obese are also frequently affected by diabetes; in fact, worldwide, roughly 90 percent of type II diabetes cases are caused by excess weight.

The association between obesity and the deterioration of cardiovascular health, which manifests in conditions such as diabetes and hypertension (abnormally high blood pressure), places obese persons at risk for accelerated cognitive decline as they age. Investigations of brain size in persons with long-term obesity revealed that increased body fat is associated with the atrophy (wasting away) of brain tissue, particularly in the temporal and frontal lobes of the brain. In fact, both overweight and obesity, and thus a BMI of 25 or higher, are associated with reductions in brain size, which increases the risk of dementia, the most common form of which is Alzheimer disease.

Obese women are often affected by infertility, taking longer to conceive than normal-weight women, and obese women who become pregnant are at an increased risk of miscarriage. Men who are obese are also at increased risk of fertility problems, since excess body fat is associated with decreased testosterone levels. In general, relative to normal-weight individuals, obese individuals are more likely to die prematurely of degenerative diseases of the heart, arteries, and kidneys, and they have an increased risk of developing cancer. Obese individuals also have an increased risk of death from accidents and constitute poor surgical risks. Mental health is affected; behavioral consequences of an obese appearance, ranging from shyness and withdrawal to overly bold self-assertion, may be rooted in neuroses and psychoses.

Treatment Of Obesity

The treatment of obesity has two main objectives: removal of the causative factors, which may be difficult if the causes are of emotional or psychological origin, and removal of surplus fat by reducing food intake. Return to normal body weight by reducing calorie intake is best done under medical supervision. Dietary fads and reducing diets that produce quick results without effort are of doubtful effectiveness in reducing body weight and keeping it down, and most are actually deleterious to health. (See dieting.) Weight loss is best achieved through increased physical activity and basic dietary changes, such as lowering total calorie intake by substituting fruits and vegetables for refined carbohydrates.

Several drugs are approved for the treatment of obesity. Two of them are Belviq (lorcaserin hydrochloride) and Qsymia (phentermine and topiramate). Belviq decreases obese individuals’ cravings for carbohydrate-rich foods by stimulating the release of serotonin, which normally is triggered by carbohydrate intake. Qsymia leverages the weight-loss side effects of topiramate, an antiepileptic drug, and the stimulant properties of phentermine, an existing short-term treatment for obesity. Phentermine previously had been part of fen-phen (fenfluramine-phentermine), an antiobesity combination that was removed from the U.S. market in 1997 because of the high risk for heart valve damage associated with fenfluramine.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#181 2018-07-25 00:28:46

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,495

Re: Miscellany

165) Lifeboat

A lifeboat is a watercraft especially built for rescue missions. There are two types: the relatively simple versions carried on board ships and the larger, more complex craft based on shore. Modern shore-based lifeboats are generally about 40–50 feet (12–15 metres) long and are designed to stay afloat under severe sea conditions. Sturdiness of construction, self-righting ability, reserve buoyancy, and manoeuvrability in surf, especially in reversing direction, are prime characteristics.

As early as the 18th century, attempts were made in France and England to build “unsinkable” lifeboats. After a tragic shipwreck in 1789 at the mouth of the Tyne, a lifeboat was designed and built at Newcastle that would right itself when capsized and would retain its buoyancy when nearly filled with water. Named the “Original,” the double-ended, ten-oared craft remained in service for 40 years and became the prototype for other lifeboats. In 1807 the first practical line-throwing device was invented. In 1890 the first mechanically powered, land-based lifeboat was launched, equipped with a steam engine; in 1904 the gasoline engine was introduced, and a few years later the diesel.

A typical modern land-based lifeboat is either steel-hulled or of double-skin, heavy timber construction; diesel powered; and equipped with radio, radar, and other electronic gear. It is manned by a crew of about seven, most of whom are usually volunteers who can be summoned quickly in an emergency.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#182 2018-07-26 03:12:55

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,495

Re: Miscellany

166) Rafflesiaceae

Rafflesiaceae is a flowering plant family (order Malpighiales) notable for being strictly parasitic upon the roots or stems of other plants and for the remarkable growth forms exhibited as adaptations to this mode of nutrition. Members of the family are endoparasites, meaning that the vegetative organs are so reduced and modified that the plant body exists only as a network of threadlike cellular strands living almost wholly within the tissues of the host plant. There are no green photosynthetic tissues, leaves, roots, or stems in the generally accepted sense, although vestiges of leaves exist in some species as scales. The flowers are well developed, however, and can be extremely large.

The family Rafflesiaceae includes the following three genera, mostly in the Old World subtropics: Rafflesia (about 28 species), Rhizanthes (4 species), and Sapria (1 or 2 species). The taxonomy of the family has been contentious, especially given the difficulty in obtaining specimens to study. The group formerly comprised seven genera, based on morphological similarities, but molecular evidence led to a dramatic reorganization by the Angiosperm Phylogeny Group III (APG III) botanical classification system. The genera Bdallophytum and Cytinus were transferred to the family Cytinaceae (order Malvales), and the genera Apodanthes and Pilostyles were moved to the family Apodanthaceae (order Cucurbitales).

The monster flower genus (Rafflesia) consists of about 28 species native to Southeast Asia, all of which are parasitic upon the roots of Tetrastigma vines (family Vitaceae). The genus includes the giant R. arnoldii, sometimes known as the corpse flower, which produces the largest known individual flower of any plant species in the world and is found in the forested mountains of Sumatra and Borneo. Its fully developed flower appears aboveground as a thick fleshy five-lobed structure weighing up to 11 kg (24 pounds) and measuring almost one metre (about one yard) across. It remains open five to seven days, emitting a fetid odour that attracts carrion-feeding flies, which are believed to be the pollinating agents. The flower’s colour is reddish or purplish brown, sometimes in a mottled pattern, with the reproductive organs in a central cup. The fruit is a berry containing sticky seeds thought to be disseminated by fruit-eating rodents. Other members of the genus have a similar reproductive biology. At least one species (R. magnifica) is listed as critically endangered by the IUCN Red List of Threatened Species.

The flowers of the genus Sapria are similar to those of Rafflesia and also emit a carrion odour. Members of the genus Rhizanthes produce flowers with nectaries, and some species are not malodorous. One species, Rhizanthes lowii, is known to generate heat with its flowers and buds, an adaptation that may aid in attracting pollinators. Species of both Sapria and Rhizanthes are considered rare and are threatened by habitat loss.

Rafflesia, The World's Largest Bloom
* * *
A plant with no leaves, no roots, no stem and the biggest flower in the world sounds like the stuff of comic books or science fiction.

'It is perhaps the largest and most magnificent flower in the world' was how Sir Stamford Raffles described his discovery in 1818 of Rafflesia arnoldii, modestly named after himself and his companion, surgeon-naturalist Dr James Arnold.

This jungle parasite of south-east Asia holds the all-time record-breaking bloom of 106.7 centimetres (3 ft 6 in) diameter and 11 kilograms (24 lb) weight, with petal-like lobes an inch thick.

It is one of the rarest plants in the world and on the verge of extinction.

As if size and rarity weren't enough, Rafflesia is also one of the world's most distasteful plants, designed to imitate rotting meat or dung.

The flower is basically a pot, flanked by five lurid red-brick and spotted cream 'petals,' advertising a warm welcome to carrion flies hungry for detritus. Yet the plant is now hanging on to a precarious existence in a few pockets of Sumatra, Borneo, Thailand and the Philippines, struggling to survive against marauding humans and its own infernal biology.

Everything seems stacked against Rafflesia. First, its seeds are difficult to germinate. Then it has gambled its life entirely on parasitising just one sort of vine. This is a dangerously cavalier approach to life, because without the vine it's dead.

Having gorged itself on the immoral earnings of parasitism for a few years, the plant eventually breaks out as a flower bud, swells up over several months, and then bursts into flower. But most of the flower buds die before opening, and even in bloom Rafflesia is fighting the clock. Because the flower only lasts a few days, it has to mate quickly with a nearby flower of the opposite gender. The trouble is, the male and female flowers are now so rare that it's a miracle to find a couple ready to cross-pollinate each other.

To be fair, though, Rafflesia's lifestyle isn't so ridiculous. After all, few other plants feed so well that they have evolved monstrous flowers.

But now that logging is cutting down tropical forests, the precious vine that Rafflesia depends on is disappearing, and Rafflesia along with it. The years of living dangerously are becoming all too clear.

There are at least 13 species of Rafflesia, but two of them have not been sighted since the Second World War and are presumed extinct, and the record-holding Rafflesia arnoldii is facing extinction. To make matters worse, no one has ever cultivated Rafflesia in a garden or laboratory.

Considering all these threats to the species, efforts to establish research centres and to introduce laws protecting the largest and one of the rarest flowers in the world, as has happened in Malaysia and other Southeast Asian countries in recent years, are more than welcome.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#183 2018-07-27 00:36:38

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,495

Re: Miscellany

167) Holography

Holography is a means of creating a unique photographic image without the use of a lens. The photographic recording of the image is called a hologram, which appears to be an unrecognizable pattern of stripes and whorls but which—when illuminated by coherent light, as by a laser beam—organizes the light into a three-dimensional representation of the original object.

An ordinary photographic image records the variations in intensity of light reflected from an object, producing dark areas where less light is reflected and light areas where more light is reflected. Holography, however, records not only the intensity of the light but also its phase, or the degree to which the wave fronts making up the reflected light are in step with each other, or coherent. Ordinary light is incoherent—that is, the phase relationships between the multitude of waves in a beam are completely random; wave fronts of ordinary light waves are not in step.
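
The phase recording can be made concrete with the standard two-beam interference expression (my sketch, not from the original text): if the object and reference waves have amplitudes A_o and A_r and phases phi_o and phi_r, the film records A_o^2 + A_r^2 + 2·A_o·A_r·cos(phi_o - phi_r), so the fringe pattern stores the phase difference that an ordinary photograph throws away:

import cmath

def recorded_intensity(a_obj, phi_obj, a_ref, phi_ref):
    """Intensity of the superposed object and reference waves at one point on the film."""
    total = a_obj * cmath.exp(1j * phi_obj) + a_ref * cmath.exp(1j * phi_ref)
    return abs(total) ** 2

# Equal amplitudes, varying phase difference: the intensity swings from 4 to 0,
# and that variation is the fringe pattern making up the hologram.
for dphi in (0.0, cmath.pi / 2, cmath.pi):
    print(f"phase difference {dphi:.2f} rad -> intensity {recorded_intensity(1, dphi, 1, 0):.2f}")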

Dennis Gabor, a Hungarian-born scientist, invented holography in 1948, for which he received the Nobel Prize for Physics more than 20 years later (1971). Gabor considered the possibility of improving the resolving power of the electron microscope, first by utilizing the electron beam to make a hologram of the object and then by examining this hologram with a beam of coherent light. In Gabor’s original system the hologram was a record of the interference between the light diffracted by the object and a collinear background. This automatically restricts the process to that class of objects that have considerable areas that are transparent. When the hologram is used to form an image, twin images are formed. The light associated with these images is propagating in the same direction, and hence in the plane of one image light from the other image appears as an out-of-focus component. Although a degree of coherence can be obtained by focusing light through a very small pinhole, this technique reduces the light intensity too much for it to serve in holography; therefore, Gabor’s proposal was for several years of only theoretical interest. The development of lasers in the early 1960s suddenly changed the situation. A laser beam has not only a high degree of coherence but high intensity as well.

Of the many kinds of laser beam, two have especial interest in holography: the continuous-wave (CW) laser and the pulsed laser. The CW laser emits a bright, continuous beam of a single, nearly pure colour. The pulsed laser emits an extremely intense, short flash of light that lasts only about 1/100,000,000 of a second. Two scientists in the United States, Emmett N. Leith and Juris Upatnieks of the University of Michigan, applied the CW laser to holography and achieved great success, opening the way to many research applications.


---------------------------------------------------------------------------------------------------------------------------------------------------

168) Penicillin

Penicillin is one of the first and still one of the most widely used antibiotic agents; it is derived from the Penicillium mold. In 1928 Scottish bacteriologist Alexander Fleming first observed that colonies of the bacterium Staphylococcus aureus failed to grow in those areas of a culture that had been accidentally contaminated by the green mold Penicillium notatum. He isolated the mold, grew it in a fluid medium, and found that it produced a substance capable of killing many of the common bacteria that infect humans. Australian pathologist Howard Florey and British biochemist Ernst Boris Chain isolated and purified penicillin in the late 1930s, and by 1941 an injectable form of the drug was available for therapeutic use.

The several kinds of penicillin synthesized by various species of the mold Penicillium may be divided into two classes: the naturally occurring penicillins (those formed during the process of mold fermentation) and the semisynthetic penicillins (those in which the structure of a chemical substance—6-aminopenicillanic acid—found in all penicillins is altered in various ways). Because it is possible to change the characteristics of the antibiotic, different types of penicillin are produced for different therapeutic purposes.

The naturally occurring penicillins, penicillin G (benzylpenicillin) and penicillin V (phenoxymethylpenicillin), are still used clinically. Because of its poor stability in acid, much of penicillin G is broken down as it passes through the stomach; as a result of this characteristic, it must be given by intramuscular injection, which limits its usefulness. Penicillin V, on the other hand, typically is given orally; it is more resistant to digestive acids than penicillin G. Some of the semisynthetic penicillins are also more acid-stable and thus may be given as oral medication.

All penicillins work in the same way—namely, by inhibiting the bacterial enzymes responsible for cell wall synthesis in replicating microorganisms and by activating other enzymes to break down the protective wall of the microorganism. As a result, they are effective only against microorganisms that are actively replicating and producing cell walls; for the same reason they do not harm human cells, which lack cell walls.

Some strains of previously susceptible bacteria, such as Staphylococcus, have developed a specific resistance to the naturally occurring penicillins; these bacteria either produce β-lactamase (penicillinase), an enzyme that disrupts the internal structure of penicillin and thus destroys the antimicrobial action of the drug, or they lack cell wall receptors for penicillin, greatly reducing the ability of the drug to enter bacterial cells. This has led to the production of the penicillinase-resistant penicillins (second-generation penicillins). While able to resist the activity of β-lactamase, however, these agents are not as effective against Staphylococcus as the natural penicillins, and they are associated with an increased risk for liver toxicity. Moreover, some strains of Staphylococcus have become resistant to penicillinase-resistant penicillins; an example is methicillin-resistant Staphylococcus aureus (MRSA).

Penicillins are used in the treatment of throat infections, meningitis, syphilis, and various other infections. The chief side effects of penicillin are hypersensitivity reactions, including skin rash, hives, swelling, and anaphylaxis, or allergic shock. The more serious reactions are uncommon. Milder symptoms may be treated with corticosteroids but usually are prevented by switching to alternative antibiotics. Anaphylactic shock, which can occur in previously sensitized individuals within seconds or minutes, may require immediate administration of epinephrine.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#184 2018-07-29 01:36:20

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,495

Re: Miscellany

169) Windmill

A windmill is a device for tapping the energy of the wind by means of sails mounted on a rotating shaft. The sails are mounted at an angle or are given a slight twist so that the force of wind against them is divided into two components, one of which, in the plane of the sails, imparts rotation.

Like waterwheels, windmills were among the original prime movers that replaced human beings as a source of power. The use of windmills was increasingly widespread in Europe from the 12th century until the early 19th century. Their slow decline, because of the development of steam power, lasted for a further 100 years. Their rapid demise began following World War I with the development of the internal-combustion engine and the spread of electric power; from that time on, however, electrical generation by wind power has served as the subject of more and more experiments.

The earliest-known references to windmills are to a Persian millwright in AD 644 and to windmills in Seistan, Persia, in AD 915. These windmills are of the horizontal-mill type, with sails radiating from a vertical axis standing in a fixed building, which has openings for the inlet and outlet of the wind diametrically opposite to each other. Each mill drives a single pair of stones directly, without the use of gears, and the design is derived from the earliest water mills. Persian millwrights, taken prisoner by the forces of Genghis Khan, were sent to China to instruct in the building of windmills; their use for irrigation there has lasted ever since.

The vertical windmill, with sails on a horizontal axis, derives directly from the Roman water mill with its right-angle drive to the stones through a single pair of gears. The earliest form of vertical mill is known as the post mill. It has a boxlike body containing the gearing, millstones, and machinery and carrying the sails. It is mounted on a well-supported wooden post socketed into a horizontal beam on the level of the second floor of the mill body. On this it can be turned so that the sails can be faced into the wind.

The next development was to place the stones and gearing in a fixed tower. This has a movable top, or cap, which carries the sails and can be turned around on a track, or curb, on top of the tower. The earliest-known illustration of a tower mill is dated about 1420. Both post and tower mills were to be found throughout Europe and were also built by settlers in America.

To work efficiently, the sails of a windmill must face squarely into the wind, and in the early mills the turning of the post-mill body, or the tower-mill cap, was done by hand by means of a long tailpole stretching down to the ground. In 1745 Edmund Lee in England invented the automatic fantail. This consists of a set of five to eight smaller vanes mounted on the tailpole or the ladder of a post mill at right angles to the sails and connected by gearing to wheels running on a track around the mill. When the wind veers it strikes the sides of the vanes, turns them and hence the track wheels also, which turn the mill body until the sails are again square into the wind. The fantail may also be fitted to the caps of tower mills, driving down to a geared rack on the curb.

The sails of a mill are mounted on an axle, or windshaft, inclined upward at an angle of from 5° to 15° to the horizontal. The first mill sails were wooden frames on which sailcloth was spread; each sail was set individually with the mill at rest. The early sails were flat planes inclined at a constant angle to the direction of rotation; later they were built with a twist like that of an airplane propeller.

In 1772 Andrew Meikle, a Scot, invented his spring sail, substituting hinged shutters, like those of a Venetian blind, for sailcloths and controlling them by a connecting bar and a spring on each sail. Each spring had to be adjusted individually with the mill at rest according to the power required; the sails were then, within limits, self-regulating.

In 1789 Stephen Hooper in England utilized roller blinds instead of shutters and devised a remote control to enable all the blinds to be adjusted simultaneously while the mill was at work. In 1807 Sir William Cubitt invented his “patent sail,” combining Meikle’s hinged shutters with Hooper’s remote control, operated by a chain from the ground acting through a rod that passed through a hole bored through the windshaft. The operation was comparable to opening and closing an umbrella, and by varying the weights hung on the chain the sails were made self-regulating.

The annular-sailed wind pump was brought out in the United States by Daniel Halladay in 1854, and its production in steel by Stuart Perry in 1883 led to worldwide adoption, for, although inefficient, it was cheap and reliable. The design consists of a number of small vanes set radially in a wheel. Governing is automatic: yaw is controlled by a tail vane, and torque by setting the wheel off-centre with respect to the vertical yaw axis. Thus, as the wind increases, the mill turns on its vertical axis, reducing the effective area and therefore the speed.

The most important use of the windmill was for grinding grain. In certain areas its uses in land drainage and water pumping were equally important. The windmill has been used as a source of electrical power since P. La Cour’s mill, built in Denmark in 1890 with patent sails and twin fantails on a steel tower. Interest in the use of windmills for the generation of electric power, on both single-user and commercial scales, revived in the 1970s.

shapeimage_3.png


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#185 2018-07-30 02:50:10

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,495

Re: Miscellany

170) The Development of the Tape Recorder

Overview

A number of experimental sound recording devices were designed and developed in the early 1900s in Europe and the United States. Recording and reproducing the sounds were two separate processes, but both were necessary in the development of the tape recorder. Up until the 1920s, especially in the United States, a type of tape recorder using steel tape was designed and produced. In 1928 a coated magnetic tape was invented in Germany. The German engineers refined the magnetic tape in the 1930s and 1940s, developing a recorder called the Magnetophon. This type of machine was introduced to the United States after World War II and contributed to the eventual widespread use of the tape recorder. This unique ability to record sound and play it back would have implications politically, aesthetically, and commercially throughout Europe and the United States during World War II and after. Sound recording and reproduction formed the foundation of many new industries that included radio and film.

Background

Sound recording and reproduction began to interest inventors in the late nineteenth century, when several key technological innovations became available. Recordings required the following: a way to pick up sound via a microphone, a way to store information, and a playing device to access the stored data. As early as 1876 American inventor Alexander Graham Bell (1847-1922) invented the telephone, which incorporated many of the principles used in sound recording. The next year, American inventor Thomas Alva Edison (1847-1931) patented the phonograph, and German-American Emil Berliner (1851-1929) invented the flat-disc recording in 1887. The missing piece was a device to play back the recorded sounds.

The history of the tape recorder officially begins in 1878, when American mechanic Oberlin Smith visited Thomas Edison's laboratory. Smith was curious about the feasibility of recording telephone signals with a steel wire. He published his work in Electrical World outlining the process: "acoustic cycles are transferred to electric cycles and the moving sonic medium is magnetized with them. During the playing the medium generates electric cycles which have the identical frequency as during the recording." Smith's outline provided the theoretical framework used by others in the quest for a device that would both record and play the sound back.

In 1898 a Danish inventor, Valdemar Poulsen (1869-1942), patented the first device with the ability to play back the recorded sounds from steel wire. He reworked Smith's design and for several years actually manufactured the first "sonic recorders." This invention, patented in Denmark and the United States, was called the telegraphon, as it was to be used as an early kind of telephone answering machine. The recording medium was steel wire coiled around a cylinder reminiscent of Thomas Edison's phonograph, magnetized as it passed an electromagnet. Poulsen's telegraphon was shown at the 1900 International Exhibition in Paris and was praised by the scientific and technical press as a revolutionary innovation.

In the early 1920s Kurt Stille and Karl Bauer, German inventors, redesigned the telegraphon in order for the sound to be amplified electronically. They called their invention the Dailygraph, and it had the distinction of being able to accommodate a cassette. In the late 1920s the British Ludwig Blattner Picture Corporation bought Stille and Bauer's patent and attempted to produce films using synchronized sound. The British Marconi Wireless Telegraph company also bought the Stille and Bauer design and for a number of years made tape machines for the British Broadcasting Corporation. The Marconi-Stille recording machines were used until the 1940s by the BBC radio service in Canada, Australia, France, Egypt, Sweden, and Poland.

In 1927 and 1928 Russian Boris Rtcheouloff and German chemist Fritz Pfleumer both patented an "improved" means of recording sound using magnetized tape. These ideas incorporated a way to record sound or pictures by causing a strip, disc, or cylinder of iron or other magnetic material to be magnetized. Pfleumer's patent had an interesting ingredient list. The "recipe" included using a powder of soft iron mixed with an organic binding medium such as dissolved sugar or molasses. This substance was dried and carbonized, and then the carbon was chemically combined into the iron by heating. The resulting steel powder, while still heated, was cooled by being placed in water, dried, and powdered a second time. This allowed the recording of sound onto varieties of "tape" made from soft paper, or films of cellulose derivatives.

In 1932 a large German electrical manufacturer purchased the patent rights of Pfleumer. German engineers refined the magnetic tape and designed a device to play it back. By 1935 the resulting machine, known as the Magnetophon, was being marketed by the German company AEG, which debuted it that year with a recording of the London Philharmonic Orchestra.

Impact

By the beginning of World War II the development of the tape recorder continued to be in a state of flux. Experiments using different types and materials for recording tapes continued, as well as research into devices to play back the recorded sounds. Sound recording on coated-plastic tape manufactured by AEG was improved to the point that it became impossible to tell whether Adolf Hitler's radio addresses were live or recorded transmissions. Engineers and inventors in the United States and Britain were unable to reproduce this quality of sound until several of the Magnetophons left Germany as war reparations in 1945. The German version combined a magnetic tape and a device to play back the recording. Another interesting feature, previously unknown, was that the replay head could be rotated against the direction of the tape transport. This enabled a recording to be played back slowly without lowering the frequency of the voice. These features were not found on the steel wire machines then available in the United States.

The most common U.S. version used a special steel tape that was made only in Sweden, and supplies were threatened at the onset of World War II. However, when patent rights on the German invention were seized under the United States Alien Property Custodian Act, there were no longer any licensing problems for U.S. companies to contend with, and the German innovations began to be incorporated into American designs.

In 1945 Alexander Poniatoff, an American manufacturer of electric motors, attended a Magnetophon demonstration given by John T. Mullin to the Institute of Radio Engineers. Poniatoff owned a company called Ampex that manufactured audio amplifiers and loudspeakers, and he recognized the commercial potential of the German design and wanted to move forward with manufacturing and distributing the Magnetophon. In the following year he was given the opportunity to promote and manufacture the machine in a commercially viable way through an unusual set of circumstances.

A popular singer, Bing Crosby, had experienced a significant drop in his radio popularity. Crosby attributed his poor ratings to the inferior quality of sound recording used in taping his programs. Crosby, familiar with the Magnetophon machine, requested that it be used to tape record a sample program. He went on to record 26 radio shows for delayed broadcast using the German design. In 1947 Bing Crosby Enterprises, enthusiastic about the improved quality and listener satisfaction, decided to contract with Ampex to design and develop the Magnetophon recording device. Ampex agreed to build 20 professional recording units priced at $40,000 each, and Bing Crosby Enterprises then sold the units to the American Broadcasting Company.

In the film world, Walt Disney Studios released the animated film Fantasia (1940), which used a sound process called Fantasound that incorporated technological advances made in the field of sound recording and sound playback. Such commercial uses of magnetic tape recording devices allowed innovation and expansion in the movie and television-broadcasting fields.

By 1950, two-channel tape recorders allowing recording in stereo, along with the first catalog of recorded music, had appeared in the United States. These continued advances in tape recorder technology allowed people to play their favorite music, even if it had been recorded many years earlier. Radio networks used sound recording for news broadcasts and special music programming, as well as for archival purposes. The fledgling television and motion-picture industries began experimenting with combining images with music, speech, and sound effects. New research into the development and use of three- and four-track tape recorders and one-inch tape was in the works, along with portable tape cassette players and the video recorder. These innovations were possible and viable because of the groundwork laid by many individuals and companies in the first half of the twentieth century.

history_td-102.png


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#186 2018-07-30 22:34:42

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,495

Re: Miscellany

171) Geyser

Geyser, hot spring that intermittently spouts jets of steam and hot water. The term is derived from the Icelandic word geysir, meaning “to gush.”

Geysers result from the heating of groundwater by shallow bodies of magma. They are generally associated with areas that have seen past volcanic activity. The spouting action is caused by the sudden release of pressure that has been confining near-boiling water in deep, narrow conduits beneath a geyser. As steam or gas bubbles begin to form in the conduit, hot water spills from the vent of the geyser, and the pressure is lowered on the water column below. Water at depth then exceeds its boiling point and flashes into steam, forcing more water from the conduit and lowering the pressure further. This chain reaction continues until the geyser exhausts its supply of boiling water.

The boiling temperature of water increases with pressure; for example, at a depth of 30 metres (about 100 feet) below the surface, the boiling point is approximately 140 °C (285 °F). Geothermal power from steam wells depends on the same volcanic heat sources and boiling temperature changes with depth that drive geyser displays.
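As a rough illustration of how the boiling point climbs with depth, the following Python sketch estimates the saturation temperature under the combined atmospheric and hydrostatic pressure at a given depth, using the integrated Clausius-Clapeyron relation with an assumed constant heat of vaporization (about 40.7 kJ/mol). The function name and the constant property values are illustrative choices, and the result is only a first-order estimate.

import math

# Physical constants (approximate, assumed for this sketch)
R = 8.314            # gas constant, J/(mol*K)
DH_VAP = 40.7e3      # heat of vaporization of water, J/mol (taken as constant)
P_ATM = 101.325e3    # atmospheric pressure, Pa
RHO = 1000.0         # density of water, kg/m^3
G = 9.81             # gravitational acceleration, m/s^2
T_BOIL_1ATM = 373.15 # boiling point of water at 1 atm, K

def boiling_point_at_depth(depth_m):
    """Estimate the boiling temperature (deg C) of water at a given depth,
    using the integrated Clausius-Clapeyron equation."""
    pressure = P_ATM + RHO * G * depth_m          # atmospheric plus hydrostatic pressure
    inv_T = 1.0 / T_BOIL_1ATM - (R / DH_VAP) * math.log(pressure / P_ATM)
    return 1.0 / inv_T - 273.15

print(round(boiling_point_at_depth(30), 1))  # roughly 143 deg C, close to the ~140 deg C cited above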

As water is ejected from geysers and is cooled, dissolved silica is precipitated in mounds on the surface. This material is known as sinter. Often geysers have been given fanciful names (such as Castle Geyser in Yellowstone National Park) inspired by the shapes of the colourful and contorted mounds of siliceous sinter at the vents.

Geysers are rare. There are more than 300 of them in Yellowstone in the western United States—approximately half the world’s total—and about 200 on the Kamchatka Peninsula in the Russian Far East, about 40 in New Zealand, 16 in Iceland, and 50 scattered throughout the world in many other volcanic areas. Perhaps the most famous geyser is Old Faithful in Yellowstone. It spouts a column of boiling water and steam to a height of about 30 to 55 metres (100 to 180 feet) on a roughly 90-minute timetable.

5104.geyser.jpg


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#187 2018-07-31 01:08:32

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,495

Re: Miscellany

172) Robotics

History of Robotics

Although the science of robotics only came about in the 20th century, the history of human-invented automation has a much lengthier past. In fact, the ancient Greek engineer Hero of Alexandria produced two texts, 'Pneumatica' and 'Automata', that testify to the existence of hundreds of different kinds of “wonder” machines capable of automated movement. Of course, robotics in the 20th and 21st centuries has advanced radically to include machines capable of assembling other machines and even robots that can be mistaken for human beings.

The word robotics was inadvertently coined by science fiction author Isaac Asimov in his 1941 story “Liar!” Science fiction authors throughout history have been interested in man’s capability of producing self-motivating machines and lifeforms, from the ancient Greek myth of Pygmalion to Mary Shelley’s Dr. Frankenstein and Arthur C. Clarke’s HAL 9000. Essentially, a robot is a reprogrammable machine that is capable of movement in the completion of a task. Robots use special programming that differentiates them from other machines and machine tools, such as CNC (computer numerical control) equipment. Robots have found uses in a wide variety of industries because of their durability and precision.

Historical Robotics

Many sources attest to the popularity of automatons in ancient and medieval times. Ancient Greeks and Romans developed simple automatons for use as tools, toys, and parts of religious ceremonies. Long before robots entered industry, the Greek god Hephaestus was said to have built automatons to work for him in a workshop. Unfortunately, none of the early automatons are extant.

In the Middle Ages, in both Europe and the Middle East, automatons were popular as part of clocks and religious worship. The Arab polymath Al-Jazari (1136-1206) left texts describing and illustrating his various mechanical devices, including a large elephant clock that moved and sounded at the hour, a musical robot band and a waitress automaton that served drinks. In Europe, there is an automaton monk extant that kisses the cross in its hands. Many other automata were created that showed moving animals and humanoid figures that operated on simple cam systems, but in the 18th century, automata were understood well enough and technology advanced to the point where much more complex pieces could be made. French engineer Jacques de Vaucanson is credited with creating the first successful biomechanical automaton, a human figure that plays a flute. Automata were so popular that they traveled Europe entertaining heads of state such as Frederick the Great and Napoleon Bonaparte.

Victorian Robots

The Industrial Revolution and the increased focus on mathematics, engineering and science in England in the Victorian age added to the momentum towards actual robotics. Charles Babbage (1791-1871) worked to develop the foundations of computer science in the early-to-mid nineteenth century, his most successful projects being the difference engine and the analytical engine. Although never completed due to lack of funds, these two machines laid out the basics for mechanical calculations. Others such as Ada Lovelace recognized the future possibility of computers creating images or playing music.

Automata continued to provide entertainment during the 19th century, but the same period saw the development of steam-powered machines and engines that helped to make manufacturing much more efficient and quick. Factories began to employ machines to increase either output or precision in the production of many products.

The Twentieth Century to Today

In 1920, Karel Capek published his play R.U.R. (Rossum’s Universal Robots), which introduced the word “robot.” It was taken from an old Slavic word that meant something akin to “monotonous or forced labor.” However, it was thirty years before the first industrial robot went to work. In the 1950s, George Devol designed the Unimate, a robotic arm device for transporting die castings, which started work in a General Motors plant in New Jersey in 1961. Unimation, the company Devol founded with robotic entrepreneur Joseph Engelberger, was the first robot manufacturing company. The robot was originally seen as a curiosity, to the extent that it even appeared on The Tonight Show in 1966. Soon, however, robotics began to develop into another standard tool of industrial manufacturing.

Robotics became a burgeoning science and more money was invested. Robots spread to Japan, South Korea and many parts of Europe over the last half century, to the extent that projections for the 2011 population of industrial robots are around 1.2 million. Additionally, robots have found a place in other spheres, as toys and entertainment, military weapons, search and rescue assistants, and many other jobs. Essentially, as programming and technology improve, robots find their way into many jobs that in the past have been too dangerous, dull or impossible for humans to achieve. Indeed, robots are being launched into space to complete the next stages of extraterrestrial and extrasolar research.

whatrobo.png


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#188 2018-07-31 15:12:24

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,495

Re: Miscellany

173) Phenol

Phenol, any of a family of organic compounds characterized by a hydroxyl (−OH) group attached to a carbon atom that is part of an aromatic ring. Besides serving as the generic name for the entire family, the term phenol is also the specific name for its simplest member, monohydroxybenzene (C6H5OH), also known as benzenol, or carbolic acid.

Phenols are similar to alcohols but form stronger hydrogen bonds. Thus, they are more soluble in water than are alcohols and have higher boiling points. Phenols occur either as colourless liquids or white solids at room temperature and may be highly toxic and caustic.

Phenols are widely used in household products and as intermediates for industrial synthesis. For example, phenol itself is used (in low concentrations) as a disinfectant in household cleaners and in mouthwash. Phenol may have been the first surgical antiseptic. In 1865 the British surgeon Joseph Lister used phenol as an antiseptic to sterilize his operating field. With phenol used in this manner, the mortality rate from surgical amputations fell from 45 to 15 percent in Lister’s ward. Phenol is quite toxic, however, and concentrated solutions cause severe but painless burns of the skin and mucous membranes. Less-toxic phenols, such as n-hexylresorcinol, have supplanted phenol itself in cough drops and other antiseptic applications. Butylated hydroxytoluene (BHT) has a much lower toxicity and is a common antioxidant in foods.

In industry, phenol is used as a starting material to make plastics, explosives such as picric acid, and drugs such as aspirin. The common phenol hydroquinone is the component of photographic developer that reduces exposed silver bromide crystals to black metallic silver. Other substituted phenols are used in the dye industry to make intensely coloured azo dyes. Mixtures of phenols (especially the cresols) are used as components in wood preservatives such as creosote.

phenol.gif


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#189 2018-08-02 00:33:48

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,495

Re: Miscellany

174) Air-conditioning

Air-conditioning, the control of temperature, humidity, purity, and motion of air in an enclosed space, independent of outside conditions.

An early method of cooling air as practiced in India was to hang wet grass mats over windows where they cooled incoming air by evaporation. Modern air-conditioning had its beginnings in the 19th-century textile industry, in which atomized sprays of water were used for simultaneous humidification and cooling.

In the early 20th century, Willis Carrier of Buffalo, New York, devised the “dew point control,” an air-conditioning unit based on the principle that cooled air reaches saturation and loses moisture through condensation. Carrier also devised a system (first installed in 1922 at Grauman’s Metropolitan Theatre in Los Angeles) wherein conditioned air was fed from the ceiling and exhausted at floor level. The first fully air-conditioned office building, the Milam Building in San Antonio, Texas, was constructed in the late 1920s. The development of highly efficient refrigerant gases of low toxicity known as Freons (carbon compounds containing fluorine and chlorine or bromine) in the early 1930s was an important step. By the middle of that decade American railways had installed small air-conditioning units on their trains, and by 1950 compact units had become practical for use in single rooms. Since the late 1950s air conditioning has become more common in developed regions outside the United States.

In a simple air conditioner, the refrigerant, in a volatile liquid form, is passed through a set of evaporator coils across which air inside the room is passed. The refrigerant evaporates and, in the process, absorbs the heat contained in the air. When the cooled air reaches its saturation point, its moisture content condenses on fins placed over the coils. The water runs down the fins and drains. The cooled and dehumidified air is returned into the room by means of a blower.

In the meantime the vaporized refrigerant passes into a compressor where it is pressurized and forced through condenser coils, which are in contact with outside air. Under these conditions the refrigerant condenses back into a liquid form and gives off the heat it absorbed inside. This heated air is expelled to the outside, and the liquid recirculates to the evaporator coils to continue the cooling process. In some units the two sets of coils can reverse functions so that in winter, the inside coils condense the refrigerant and heat rather than cool the room. Such a unit is known as a heat pump.
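The cycle described above can be summarized by a simple steady-state energy balance: the heat rejected at the outdoor condenser equals the heat absorbed at the indoor evaporator plus the compressor work, and the ratio of cooling delivered to work supplied is the unit's coefficient of performance (COP). The Python sketch below spells this out; the function name and the numbers are illustrative assumptions, not figures from the text.

def cooling_energy_balance(heat_absorbed_kw, compressor_work_kw):
    """Steady-state energy balance for a simple vapour-compression cycle.
    Heat rejected outdoors = heat absorbed indoors + compressor work."""
    heat_rejected_kw = heat_absorbed_kw + compressor_work_kw
    cop = heat_absorbed_kw / compressor_work_kw   # cooling delivered per unit of work
    return heat_rejected_kw, cop

# Hypothetical room unit: 3.5 kW of cooling for 1.0 kW of electrical input
rejected, cop = cooling_energy_balance(3.5, 1.0)
print(rejected, cop)   # 4.5 kW rejected outdoors, COP of 3.5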

Alternative systems of cooling include the use of chilled water. Water may be cooled by refrigerant at a central location and run through coils at other places. In some large factories a version of the earlier air-washer systems is still used, avoiding the massive banks of coils that would otherwise be needed: water is sprayed over glass fibres and air is blown through them. Dehumidification is achieved in some systems by passing the air through silica gel, which absorbs the moisture; in others, liquid absorbents cause dehydration.

The design of air-conditioning systems takes many circumstances into consideration. A self-contained unit, described above, serves a space directly. More complex systems, as in tall buildings, use ducts to deliver cooled air. In the induction system, air is cooled once at a central plant and then conveyed to individual units, where water is used to adjust the air temperature according to such variables as sunlight exposure and shade. In the dual-duct system, warm air and cool air travel through separate ducts and are mixed to reach a desired temperature. A simpler way to control temperature is to regulate the amount of cold air supplied, cutting it off once a desired temperature is reached. This method, known as variable air volume, is widely used in both high-rise and low-rise commercial or institutional buildings.
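For the dual-duct arrangement just described, the mixed-air temperature is the flow-weighted average of the warm-duct and cold-duct streams, so the required share of cold air follows from a one-line rearrangement. A minimal Python sketch, with assumed duct temperatures and setpoint (the numbers are illustrative only):

def cold_air_fraction(t_warm, t_cold, t_setpoint):
    """Fraction of the mixed stream that must come from the cold duct so that
    the flow-weighted average temperature equals the setpoint."""
    if not (t_cold <= t_setpoint <= t_warm):
        raise ValueError("setpoint must lie between the two duct temperatures")
    return (t_warm - t_setpoint) / (t_warm - t_cold)

# Assumed temperatures: 35 C warm duct, 13 C cold duct, 22 C setpoint
f = cold_air_fraction(35.0, 13.0, 22.0)
print(round(f, 2))  # about 0.59, i.e. 59% of the mixed air drawn from the cold duct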

Distribution of air is a concern because direct exposure to the cool air may cause discomfort. In some cases, cooled air needs to be slightly reheated before it is blown back into a room. One popular method of distribution is the ceiling diffuser, from which air is blown out along the ceiling level and allowed to settle down. The linear diffuser brings air through a plenum box or duct with a rectangular opening; louvers divert the down-flowing air. Other units are circular, and their fins radiate the air. Some ceilings are perforated to allow passage of cool air, and other ceilings are simply cooled so that basic ventilation can circulate the cool air.

Operating-System-Solar-Hybrid-Air-Conditioner1.jpg


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#190 2018-08-04 00:52:45

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,495

Re: Miscellany

175) Smartphone

Smartphone, also spelled smart phone, mobile telephone with a display screen (typically a liquid crystal display, or LCD), built-in personal information management programs (such as an electronic calendar and address book) typically found in a personal digital assistant (PDA), and an operating system (OS) that allows other computer software to be installed for Web browsing, e-mail, music, video, and other applications. A smartphone may be thought of as a handheld computer integrated within a mobile telephone.

The first smartphone was designed by IBM and sold by BellSouth (formerly part of the AT&T Corporation) in 1993. It included a touchscreen interface for accessing its calendar, address book, calculator, and other functions. As the market matured and solid-state computer memory and integrated circuits became less expensive over the following decade, smartphones became more computer-like, and more advanced services, such as Internet access, became possible. Advanced services became ubiquitous with the introduction of the so-called third-generation (3G) mobile phone networks in 2001. Before 3G, most mobile phones could send and receive data at a rate sufficient for telephone calls and text messages. Using 3G, communication takes place at bit-rates high enough for sending and receiving photographs, video clips, music files, e-mails, and more. Most smartphone manufacturers license an operating system, such as Microsoft Corporation’s Windows Mobile OS, Symbian OS, Google’s Android OS, or Palm OS. Research in Motion’s BlackBerry and Apple Inc.’s iPhone have their own proprietary systems.

Smartphones contain either a keyboard integrated with the telephone number pad or a standard “QWERTY” keyboard for text messaging, e-mailing, and using Web browsers. “Virtual” keyboards can be integrated into a touch-screen design. Smartphones often have a built-in camera for recording and transmitting photographs and short videos. In addition, many smartphones can access Wi-Fi “hot spots” so that users can place calls over VoIP (voice over Internet protocol) rather than pay cellular telephone transmission fees. The growing capabilities of handheld devices and transmission protocols have enabled a growing number of inventive and fanciful applications—for instance, “augmented reality,” in which a smartphone’s global positioning system (GPS) location chip can be used to overlay the phone’s camera view of a street scene with local tidbits of information, such as the identity of stores, points of interest, or real estate listings.

4G-mobile-phone.jpg


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#191 2018-08-06 00:27:12

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,495

Re: Miscellany

176) Pendulum

Pendulum, body suspended from a fixed point so that it can swing back and forth under the influence of gravity. Pendulums are used to regulate the movement of clocks because the interval of time for each complete oscillation, called the period, is constant. The Italian scientist Galileo first noted (c. 1583) the constancy of a pendulum’s period by comparing the movement of a swinging lamp in a Pisa cathedral with his pulse rate. The Dutch mathematician and scientist Christiaan Huygens invented a clock controlled by the motion of a pendulum in 1656. The priority of invention of the pendulum clock has been ascribed to Galileo by some authorities and to Huygens by others, but Huygens solved the essential problem of making the period of a pendulum truly constant by devising a pivot that caused the suspended body, or bob, to swing along the arc of a cycloid rather than that of a circle.

A simple pendulum consists of a bob suspended at the end of a thread that is so light as to be considered massless. The period of such a device can be made longer by increasing its length, as measured from the point of suspension to the middle of the bob. A change in the mass of the bob, however, does not affect the period, provided the length is not thereby affected. The period, on the other hand, is influenced by the position of the pendulum in relation to Earth. Because the strength of Earth’s gravitational field is not uniform everywhere, a given pendulum swings faster, and thus has a shorter period, at low altitudes and at Earth’s poles than it does at high altitudes and at the Equator.
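These statements follow from the small-angle period of a simple pendulum, T = 2*pi*sqrt(L/g): lengthening L lengthens the period, the mass of the bob does not appear at all, and a larger local value of g (as at the poles) shortens the period. A quick numerical check in Python, using commonly quoted approximate values of g at the poles and the Equator:

import math

def pendulum_period(length_m, g=9.81):
    """Small-angle period of a simple pendulum, T = 2*pi*sqrt(L/g)."""
    return 2.0 * math.pi * math.sqrt(length_m / g)

L = 1.0  # one-metre pendulum
for label, g in [("poles (g ~ 9.83 m/s^2)", 9.83), ("Equator (g ~ 9.78 m/s^2)", 9.78)]:
    print(label, round(pendulum_period(L, g), 4), "s")
# The period comes out a few milliseconds shorter at the poles, where g is larger.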

There are various other kinds of pendulums. A compound pendulum has an extended mass, like a swinging bar, and is free to oscillate about a horizontal axis. A special reversible compound pendulum called Kater’s pendulum is designed to measure the value of g, the acceleration of gravity.

Another type is the Schuler pendulum. When the Schuler pendulum is vertically suspended, it remains aligned to the local vertical even if the point from which it is suspended is accelerated parallel to Earth’s surface. This principle of the Schuler pendulum is applied in some inertial guidance systems to maintain a correct internal vertical reference, even during rapid acceleration.

A spherical pendulum is one that is suspended from a pivot mounting, which enables it to swing in any of an infinite number of vertical planes through the point of suspension. In effect, the plane of the pendulum’s oscillation rotates freely. A simple version of the spherical pendulum, the Foucault pendulum, is used to show that Earth rotates on its axis.

figure1-1.png

Ballistic pendulum

Ballistic pendulum, device for measuring the velocity of a projectile, such as a bullet. A large wooden block suspended by two cords serves as the pendulum bob. When a bullet is fired into the bob, its momentum is transferred to the bob. The bullet’s momentum can be determined from the amplitude of the pendulum swing. The velocity of the bullet, in turn, can be derived from its calculated momentum. The ballistic pendulum was invented by the British mathematician and military engineer Benjamin Robins, who described the device in his major work, New Principles of Gunnery (1742).

The ballistic pendulum has been largely supplanted by other devices for projectile velocity tests, but it is still used in classrooms for demonstrating concepts pertaining to momentum and energy.
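The calculation sketched above can be written out directly: conservation of momentum across the inelastic impact gives m*v = (m + M)*V, and conservation of energy during the swing gives V = sqrt(2*g*h), where h is the height through which the bob rises, so v = ((m + M)/m)*sqrt(2*g*h). A short Python sketch with assumed, purely illustrative masses and swing height:

import math

def bullet_speed(m_bullet, m_block, rise_height, g=9.81):
    """Ballistic pendulum: recover the bullet speed from the bob's rise.
    Momentum is conserved in the impact; energy is conserved in the swing."""
    v_after_impact = math.sqrt(2.0 * g * rise_height)        # speed of bob plus bullet just after impact
    return (m_bullet + m_block) / m_bullet * v_after_impact  # original bullet speed

# Assumed values: 10 g bullet, 2.0 kg block, bob rises 5 cm
print(round(bullet_speed(0.010, 2.0, 0.05)))  # roughly 199 m/s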

bpen.gif


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#192 2018-08-08 00:37:21

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,495

Re: Miscellany

177) DNA

DNA, abbreviation of deoxyribonucleic acid, organic chemical of complex molecular structure that is found in all prokaryotic and eukaryotic cells and in many viruses. DNA codes genetic information for the transmission of inherited traits.

The chemical DNA was first discovered in 1869, but its role in genetic inheritance was not demonstrated until 1943. In 1953 James Watson and Francis Crick determined that the structure of DNA is a double-helix polymer, a spiral consisting of two DNA strands wound around each other. Each strand is composed of a long chain of monomer nucleotides. The nucleotide of DNA consists of a deoxyribose sugar molecule to which is attached a phosphate group and one of four nitrogenous bases: two purines (adenine and guanine) and two pyrimidines (cytosine and thymine). The nucleotides are joined together by covalent bonds between the phosphate of one nucleotide and the sugar of the next, forming a phosphate-sugar backbone from which the nitrogenous bases protrude. One strand is held to another by hydrogen bonds between the bases; the sequencing of this bonding is specific—i.e., adenine bonds only with thymine, and cytosine only with guanine.

The configuration of the DNA molecule is highly stable, allowing it to act as a template for the replication of new DNA molecules, as well as for the production (transcription) of the related RNA (ribonucleic acid) molecule. A segment of DNA that codes for the cell’s synthesis of a specific protein is called a gene.

DNA replicates by separating into two single strands, each of which serves as a template for a new strand. The new strands are copied by the same principle of hydrogen-bond pairing between bases that exists in the double helix. Two new double-stranded molecules of DNA are produced, each containing one of the original strands and one new strand. This “semiconservative” replication is the key to the stable inheritance of genetic traits.
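Because each base pairs with exactly one partner (adenine with thymine, cytosine with guanine), the sequence of a new strand is fully determined by its template. A minimal Python sketch of that pairing rule follows; it ignores the antiparallel orientation of real DNA strands, and the sample sequence is arbitrary.

PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complementary_strand(template):
    """Return the base sequence dictated by a template strand,
    using the A-T / C-G pairing rules."""
    return "".join(PAIR[base] for base in template.upper())

print(complementary_strand("ATGCCGTA"))  # TACGGCAT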

Within a cell, DNA is organized into dense protein-DNA complexes called chromosomes. In eukaryotes, the chromosomes are located in the nucleus, although DNA also is found in mitochondria and chloroplasts. In prokaryotes, which do not have a membrane-bound nucleus, the DNA is found as a single circular chromosome in the cytoplasm. Some prokaryotes, such as bacteria, and a few eukaryotes have extrachromosomal DNA known as plasmids, which are autonomous, self-replicating genetic material. Plasmids have been used extensively in recombinant DNA technology to study gene expression.

The genetic material of viruses may be single- or double-stranded DNA or RNA. Retroviruses carry their genetic material as single-stranded RNA and produce the enzyme reverse transcriptase, which can generate DNA from the RNA strand. Four-stranded DNA complexes known as G-quadruplexes have been observed in guanine-rich areas of the human genome.

6013680.jpg


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#193 2018-08-10 00:46:10

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,495

Re: Miscellany

178) Chemical Bonds

Chemical bonds hold molecules together and create temporary connections that are essential to life. Types of chemical bonds include covalent bonds, ionic bonds, hydrogen bonds, and London dispersion forces.

Introduction

Living things are made up of atoms, but in most cases, those atoms aren’t just floating around individually. Instead, they’re usually interacting with other atoms (or groups of atoms).
For instance, atoms might be connected by strong bonds and organized into molecules or crystals. Or they might form temporary, weak bonds with other atoms that they bump into or brush up against. Both the strong bonds that hold molecules together and the weaker bonds that create temporary connections are essential to the chemistry of our bodies, and to the existence of life itself.
Why form chemical bonds? The basic answer is that atoms are trying to reach the most stable (lowest-energy) state that they can. Many atoms become stable when their valence shell is filled with electrons or when they satisfy the octet rule (by having eight valence electrons). If atoms don’t have this arrangement, they’ll “want” to reach it by gaining, losing, or sharing electrons via bonds.

Ions and ionic bonds

Some atoms become more stable by gaining or losing an entire electron (or several electrons). When they do so, atoms form ions, or charged particles. Electron gain or loss can give an atom a filled outermost electron shell and make it energetically more stable.

Forming ions

Ions come in two types. Cations are positive ions formed by losing electrons. For instance, a sodium atom loses an electron to become a sodium cation, Na+.

Negative ions are formed by electron gain and are called anions. Anions are named using the ending “-ide”: the anion of chlorine (Cl-) is called chloride.

When one atom loses an electron and another atom gains that electron, the process is called electron transfer. Sodium and chlorine atoms provide a good example of electron transfer.

Sodium (Na) only has one electron in its outer electron shell, so it is easier (more energetically favorable) for sodium to donate that one electron than to find seven more electrons to fill the outer shell. Because of this, sodium tends to lose its one electron, forming Na+.

Chlorine (Cl), on the other hand, has seven electrons in its outer shell. In this case, it is easier for chlorine to gain one electron than to lose seven, so it tends to take on an electron and become Cl-.

Sodium transfers one of its valence electrons to chlorine, resulting in formation of a sodium ion (with no electrons in its 3n shell, meaning a full 2n shell) and a chloride ion (with eight electrons in its 3n shell, giving it a stable octet).

When sodium and chlorine are combined, sodium will donate its one electron to empty its shell, and chlorine will accept that electron to fill its shell. Both ions now satisfy the octet rule and have complete outermost shells. Because the number of electrons is no longer equal to the number of protons, each atom is now an ion and has a +1 (Na+)  or –1 (Cl-) charge.

In general, the loss of an electron by one atom and gain of an electron by another atom must happen at the same time: in order for a sodium atom to lose an electron, it needs to have a suitable recipient like a chlorine atom.

Making an ionic bond

Ionic bonds are bonds formed between ions with opposite charges. For instance, positively charged sodium ions and negatively charged chloride ions attract each other to make sodium chloride, or table salt. Table salt, like many ionic compounds, doesn't consist of just one sodium and one chloride ion; instead, it contains many ions arranged in a repeating, predictable 3D pattern (a crystal).

Certain ions are referred to in physiology as electrolytes (including sodium, potassium, and calcium). These ions are necessary for nerve impulse conduction, muscle contractions and water balance. Many sports drinks and dietary supplements provide these ions to replace those lost from the body via sweating during exercise.

Covalent bonds

Another way atoms can become more stable is by sharing electrons (rather than fully gaining or losing them), thus forming covalent bonds. Covalent bonds are more common than ionic bonds in the molecules of living organisms.
For instance, covalent bonds are key to the structure of carbon-based organic molecules like our DNA and proteins. Covalent bonds are also found in smaller inorganic molecules.

One, two, or three pairs of electrons may be shared between atoms, resulting in single, double, or triple bonds, respectively. The more electrons that are shared between two atoms, the stronger their bond will be.

In a water molecule, for example, each hydrogen shares an electron with oxygen, and oxygen shares one of its electrons with each hydrogen.

The shared electrons split their time between the valence shells of the hydrogen and oxygen atoms, giving each atom something resembling a complete valence shell (two electrons for H, eight for O). This makes a water molecule much more stable than its component atoms would have been on their own.

Polar covalent bonds

There are two basic types of covalent bonds: polar and nonpolar. In a polar covalent bond, the electrons are unequally shared by the atoms and spend more time close to one atom than the other. Because of the unequal distribution of electrons between the atoms of different elements, slightly positive (δ+) and slightly negative (δ–) charges develop in different parts of the molecule.

In the water molecule discussed above, the bond connecting the oxygen to each hydrogen is a polar bond. Oxygen is a much more electronegative atom than hydrogen, meaning that it attracts shared electrons more strongly, so the oxygen of water bears a partial negative charge (has high electron density), while the hydrogens bear partial positive charges (have low electron density).

In general, the relative electronegativities of the two atoms in a bond – that is, their tendencies to "hog" shared electrons – will determine whether a covalent bond is polar or nonpolar. Whenever one element is significantly more electronegative than the other, the bond between them will be polar, meaning that one end of it will have a slight positive charge and the other a slight negative charge.
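As a rough illustration of that rule, the Python sketch below classifies a bond from the difference in Pauling electronegativities of its two atoms. The cutoff values (about 0.4 and 1.8) are common textbook rules of thumb rather than sharp boundaries, and the table of electronegativities is limited to a few elements for brevity.

# Approximate Pauling electronegativities
ELECTRONEGATIVITY = {"H": 2.20, "C": 2.55, "N": 3.04, "O": 3.44, "Na": 0.93, "Cl": 3.16}

def classify_bond(element_a, element_b):
    """Classify a bond as nonpolar covalent, polar covalent, or ionic
    from the electronegativity difference (rule-of-thumb cutoffs)."""
    diff = abs(ELECTRONEGATIVITY[element_a] - ELECTRONEGATIVITY[element_b])
    if diff < 0.4:
        return "nonpolar covalent"
    elif diff < 1.8:
        return "polar covalent"
    return "ionic"

for pair in [("C", "H"), ("O", "H"), ("Na", "Cl")]:
    print(pair, classify_bond(*pair))
# C-H comes out nonpolar, O-H polar, and Na-Cl ionic, matching the examples in the text.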

Nonpolar covalent bonds

Nonpolar covalent bonds form between two atoms of the same element, or between atoms of different elements that share electrons more or less equally.

In methane (CH4), for example, carbon has four electrons in its outermost shell and needs four more to achieve a stable octet. It gets these by sharing electrons with four hydrogen atoms, each of which provides a single electron. Reciprocally, the hydrogen atoms each need one additional electron to fill their outermost shell, which they receive in the form of shared electrons from carbon. Although carbon and hydrogen do not have exactly the same electronegativity, they are quite similar, so carbon-hydrogen bonds are considered nonpolar.

Hydrogen bonds and London dispersion forces

Covalent and ionic bonds are both typically considered strong bonds. However, other kinds of more temporary bonds can also form between atoms or molecules. Two types of weak bonds often seen in biology are hydrogen bonds and London dispersion forces.

Not to be overly dramatic, but without these two types of bonds, life as we know it would not exist! For instance, hydrogen bonds provide many of the life-sustaining properties of water and stabilize the structures of proteins and DNA, both key ingredients of cells.

Hydrogen bonds

In a polar covalent bond containing hydrogen (e.g., an O-H bond in a water molecule), the hydrogen will have a slight positive charge because the bond electrons are pulled more strongly toward the other element. Because of this slight positive charge, the hydrogen will be attracted to any neighboring negative charges. This interaction is called a hydrogen bond.

Hydrogen bonds are common, and water molecules in particular form lots of them. Individual hydrogen bonds are weak and easily broken, but many hydrogen bonds together can be very strong.

London dispersion forces

Like hydrogen bonds, London dispersion forces are weak attractions between molecules. However, unlike hydrogen bonds, they can occur between atoms or molecules of any kind, and they depend on temporary imbalances in electron distribution.

How does that work? Because electrons are in constant motion, there will be some moments when the electrons of an atom or molecule are clustered together, creating a partial negative charge in one part of the molecule (and a partial positive charge in another). If a molecule with this kind of charge imbalance is very close to another molecule, it can cause a similar charge redistribution in the second molecule, and the temporary positive and negative charges of the two molecules will attract each other.

Hydrogen bonds and London dispersion forces are both examples of van der Waals forces, a general term for intermolecular interactions that do not involve covalent bonds or ions.

How does that work in a cell?

Both strong and weak bonds play key roles in the chemistry of our cells and bodies. For instance, strong covalent bonds hold together the chemical building blocks that make up a strand of DNA. However, weaker hydrogen bonds hold together the two strands of the DNA double helix. These weak bonds keep the DNA stable, but also allow it to be opened up for copying and use by the cell.

More generally, bonds between ions, water molecules, and polar molecules are constantly forming and breaking in the watery environment of a cell. In this setting, molecules of different types can and will interact with each other via weak, charge-based attractions. For instance, a Na+ ion might interact with a water molecule in one moment, and with the negatively charged part of a protein in the next moment.

What's really amazing is to think that billions of these chemical bond interactions—strong and weak, stable and temporary—are going on in our bodies right now, holding us together and keeping us ticking!

covalent.gif


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#194 2018-08-11 15:54:07

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,495

Re: Miscellany

179) Computer graphics

Computer graphics, production of images on computers for use in any medium. Images used in the graphic design of printed material are frequently produced on computers, as are the still and moving images seen in comic strips and animations. The realistic images viewed and manipulated in electronic games and computer simulations could not be created or supported without the enhanced capabilities of modern computer graphics. Computer graphics also are essential to scientific visualization, a discipline that uses images and colours to model complex phenomena such as air currents and electric fields, and to computer-aided engineering and design, in which objects are drawn and analyzed in computer programs. Even the windows-based graphical user interface, now a common means of interacting with innumerable computer programs, is a product of computer graphics.

Image Display

Images have high information content, both in terms of information theory (i.e., the number of bits required to represent images) and in terms of semantics (i.e., the meaning that images can convey to the viewer). Because of the importance of images in any domain in which complex information is displayed or manipulated, and also because of the high expectations that consumers have of image quality, computer graphics have always placed heavy demands on computer hardware and software.

In the 1960s early computer graphics systems used vector graphics to construct images out of straight line segments, which were combined for display on specialized computer video monitors. Vector graphics is economical in its use of memory, as an entire line segment is specified simply by the coordinates of its endpoints. However, it is inappropriate for highly realistic images, since most images have at least some curved edges, and using all straight lines to draw curved objects results in a noticeable “stair-step” effect.

In the late 1970s and ’80s raster graphics, derived from television technology, became more common, though still limited to expensive graphics workstation computers. Raster graphics represents images by bitmaps stored in computer memory and displayed on a screen composed of tiny pixels. Each pixel is represented by one or more memory bits. One bit per pixel suffices for black-and-white images, while four bits per pixel specify a 16-step gray-scale image. Eight bits per pixel specify an image with 256 colour levels; so-called “true color” requires 24 bits per pixel (specifying more than 16 million colours). At that resolution, or bit depth, a full-screen image requires several megabytes (millions of bytes; 8 bits = 1 byte) of memory. Since the 1990s, raster graphics has become ubiquitous. Personal computers are now commonly equipped with dedicated video memory for holding high-resolution bitmaps.
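The memory arithmetic above is easy to spell out. The Python sketch below computes the size of one uncompressed frame buffer for an assumed 1920 x 1080 display at several of the bit depths mentioned; the resolution is an illustrative choice, not one taken from the text.

def framebuffer_bytes(width, height, bits_per_pixel):
    """Memory needed for one uncompressed bitmap: pixels times bits, converted to bytes."""
    return width * height * bits_per_pixel // 8

WIDTH, HEIGHT = 1920, 1080  # assumed full-screen resolution
for label, bpp in [("black-and-white", 1), ("16-level gray", 4), ("256 colours", 8), ("true colour", 24)]:
    size_mb = framebuffer_bytes(WIDTH, HEIGHT, bpp) / 1e6
    print(f"{label}: {size_mb:.2f} MB")
# The 24-bit case is about 6.2 MB, the "several megabytes" cited above.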

3-D Rendering

Although used for display, bitmaps are not appropriate for most computational tasks, which need a three-dimensional representation of the objects composing the image. One standard benchmark for the rendering of computer models into graphical images is the Utah Teapot, created at the University of Utah in 1975. Represented skeletally as a wire-frame image, the Utah Teapot is composed of many small polygons. However, even with hundreds of polygons, the image is not smooth. Smoother representations can be provided by Bezier curves, which have the further advantage of requiring less computer memory. Bezier curves are described by cubic equations; a cubic curve is determined by four points or, equivalently, by two points and the curve’s slopes at those points. Two cubic curves can be smoothly joined by giving them the same slope at the junction. Bezier curves, and related curves known as B-splines, were introduced in computer-aided design programs for the modeling of automobile bodies.
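A cubic Bezier curve defined by four control points P0, P1, P2, P3 can be evaluated from the Bernstein form B(t) = (1-t)^3*P0 + 3*(1-t)^2*t*P1 + 3*(1-t)*t^2*P2 + t^3*P3; the curve starts at P0, ends at P3, and its end slopes follow the segments P0-P1 and P2-P3. A minimal Python sketch (the control points are arbitrary examples):

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1] (Bernstein form)."""
    u = 1.0 - t
    x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

# Arbitrary control points; sample the curve at a few parameter values
P0, P1, P2, P3 = (0, 0), (1, 2), (3, 3), (4, 0)
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, cubic_bezier(P0, P1, P2, P3, t))
# t = 0 returns P0 and t = 1 returns P3, as expected.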

Rendering offers a number of other computational challenges in the pursuit of realism. Objects must be transformed as they rotate or move relative to the observer’s viewpoint. As the viewpoint changes, solid objects must obscure those behind them, and their front surfaces must obscure their rear ones. This technique of “hidden surface elimination” may be done by extending the pixel attributes to include the “depth” of each pixel in a scene, as determined by the object of which it is a part. Algorithms can then compute which surfaces in a scene are visible and which ones are hidden by others. In computers equipped with specialized graphics cards for electronic games, computer simulations, and other interactive computer applications, these algorithms are executed so quickly that there is no perceptible lag—that is, rendering is achieved in “real time.”
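The depth-based approach described above is commonly implemented as a z-buffer: each pixel remembers the depth of the nearest surface drawn so far, and a new fragment is kept only if it is closer. The Python sketch below is a deliberately tiny illustration, with axis-aligned rectangles standing in for projected surfaces and the convention (assumed here) that a smaller depth value means closer to the viewer.

WIDTH, HEIGHT = 8, 4
depth_buffer = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
color_buffer = [["."] * WIDTH for _ in range(HEIGHT)]

def draw_rect(x0, y0, x1, y1, depth, color):
    """Rasterize an axis-aligned rectangle, keeping only fragments nearer
    than whatever is already stored at each pixel."""
    for y in range(y0, y1):
        for x in range(x0, x1):
            if depth < depth_buffer[y][x]:
                depth_buffer[y][x] = depth
                color_buffer[y][x] = color

draw_rect(0, 0, 6, 4, depth=5.0, color="A")  # farther surface
draw_rect(3, 1, 8, 3, depth=2.0, color="B")  # nearer surface; overlaps and hides part of A

for row in color_buffer:
    print("".join(row))
# "B" overwrites "A" only where the two rectangles overlap, because it is closer.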

Shading And Texturing

Visual appearance includes more than just shape and colour; texture and surface finish (e.g., matte, satin, glossy) also must be accurately modeled. The effects that these attributes have on an object’s appearance depend in turn on the illumination, which may be diffuse, from a single source, or both. There are several approaches to rendering the interaction of light with surfaces. The simplest shading techniques are flat, Gouraud, and Phong. In flat shading, no textures are used and only one colour tone is used for the entire object, with different amounts of white or black added to each face of the object to simulate shading. The resulting model appears flat and unrealistic. In Gouraud shading, textures may be used (such as wood, stone, stucco, and so forth); each edge of the object is given a colour that factors in lighting, and the computer interpolates (calculates intermediate values) to create a smooth gradient over each face. This results in a much more realistic image. Modern computer graphics systems can render Gouraud images in real time. In Phong shading each pixel takes into account any texture and all light sources. It generally gives more realistic results but is somewhat slower.
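The interpolation step in Gouraud shading can be illustrated with barycentric coordinates: each vertex of a triangle carries a lit colour, and the colour at any interior pixel is the barycentric-weighted average of the three. A small Python sketch follows; the vertex positions and colours are made-up examples, and classic implementations typically perform this interpolation incrementally along scanlines rather than point by point.

def barycentric(p, a, b, c):
    """Barycentric coordinates of point p with respect to triangle (a, b, c)."""
    denom = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    w_a = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / denom
    w_b = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / denom
    return w_a, w_b, 1.0 - w_a - w_b

def gouraud_color(p, verts, vert_colors):
    """Interpolate per-vertex RGB colours across the triangle (Gouraud-style)."""
    weights = barycentric(p, *verts)
    return tuple(round(sum(w * col[i] for w, col in zip(weights, vert_colors))) for i in range(3))

tri = [(0, 0), (10, 0), (0, 10)]
colors = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]   # one lit colour per vertex
print(gouraud_color((2, 2), tri, colors))           # a blend weighted toward the red vertex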

The shading techniques described thus far do not model specular reflection from glossy surfaces or model transparent and translucent objects. This can be done by ray tracing, a rendering technique that uses basic optical laws of reflection and refraction. Ray tracing follows an imaginary light ray from the viewpoint through each point in a scene. When the ray encounters an object, it is traced as it is reflected or refracted. Ray tracing is a recursive procedure; each reflected or refracted ray is again traced in the same fashion until it vanishes into the background or makes an insignificant contribution. Ray tracing may take a long time—minutes or even hours can be consumed in creating a complex scene.

In reality, objects are illuminated not only directly by a light source such as the Sun or a lamp but also more diffusely by reflected light from other objects. This type of lighting is re-created in computer graphics by radiosity techniques, which model light as energy rather than rays and which look at the effects of all the elements in a scene on the appearance of each object. For example, a brightly coloured object will cast a slight glow of the same colour on surrounding surfaces. Like ray tracing, radiosity applies basic optical principles to achieve realism—and like ray tracing, it is computationally expensive.

Processors And Programs

One way to reduce the time required for accurate rendering is to use parallel processing, so that in ray tracing, for example, multiple rays can be traced at once. Another technique, pipelined parallelism, takes advantage of the fact that graphics processing can be broken into stages—constructing polygons or Bezier surfaces, eliminating hidden surfaces, shading, rasterization, and so on. Using pipelined parallelism, as one image is being rasterized, another can be shaded, and a third can be constructed. Both kinds of parallelism are employed in high-performance graphics processors. Demanding applications with many images may also use “farms” of computers. Even with all of this power, it may take days to render the many images required for a computer-animated motion picture.

Computer graphics relies heavily on standard software packages. The OpenGL (open graphics library) specifies a standard set of graphics routines that may be implemented in computer programming languages such as C or Java. PHIGS (programmer’s hierarchical interactive graphics system) is another set of graphics routines. VRML (virtual reality modeling language) is a graphics description language for World Wide Web applications. Several commercial and free packages provide extensive three-dimensional modeling capabilities for realistic graphics. More modest tools, offering only elementary two-dimensional graphics, are the “paint” programs commonly installed on home computers.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#195 2018-08-13 01:32:49

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,495

Re: Miscellany

180) Tractor

Tractor, high-power, low-speed traction vehicle and power unit mechanically similar to an automobile or truck but designed for use off the road. The two main types are wheeled, which is the earliest form, and continuous track. Tractors are used in agriculture, construction, road building, etc., in the form of bulldozers, scrapers, and diggers. A notable feature of tractors in many applications is the power-takeoff accessory, used to operate stationary or drawn machinery and implements.

The first tractors, in the sense of powered traction vehicles, grew out of the stationary and portable steam engines operated on farms in the late 19th century and used to haul plows by the 1890s. In 1892 an Iowa blacksmith, John Froehlich, built the first farm vehicle powered by a gasoline engine. The first commercially successful manufacturers were C.W. Hart and C.H. Parr of Charles City, Iowa. By World War I the tractor was well established, and the U.S. Holt tractor was an inspiration for the tanks built for use in the war by the British and French.

Belt and power takeoffs, incorporated in tractors from the beginning, were standardized first in the rear-mounted, transmission-derived power takeoff and later in the independent, or live-power, takeoff, which permitted operation of implements at a constant speed regardless of the vehicular speed. Many modern tractors also have a hydraulic power-takeoff system operated by an oil pump, mounted either on the tractor or on a trailer.

Most modern tractors are powered by internal-combustion engines running on gasoline, kerosene (paraffin), LPG (liquefied petroleum gas), or diesel fuel. Power is transmitted through a propeller shaft to a gearbox having 8 or 10 speeds and through the differential gear to the two large rear-drive wheels. The engine may be from about 12 to 120 horsepower or more. Until 1932, when oversize pneumatic rubber tires with deep treads were introduced, all wheel-type farm tractors had steel tires with high, tapering lugs to engage the ground and provide traction.

Crawler, caterpillar, or tracklaying tractors run on two continuous tracks consisting of a number of plates or pads pivoted together and joined to form a pair of endless chains, each encircling two wheels on either side of the vehicle. These tractors provide better adhesion and lower ground pressure than the wheeled tractors do. Crawler tractors may be used on heavy, sticky soil or on very light soil that provides poor grip for a tire. The main chassis usually consists of a welded steel hull containing the engine and transmission. Tractors used on ground of irregular contours have tracks so mounted that their left and right front ends rise and fall independently of each other.

Four-wheel-drive tractors can be used under many soil conditions that immobilize two-wheel-drive tractors and caterpillars. Because of their complicated construction and consequent high cost, their use has grown rather slowly.

The single-axle (or walking) tractor is a small tractor carried on a pair of wheels fixed to a single-drive axle; the operator usually walks behind, gripping a pair of handles. The engine is usually in front of the axle, and the tools are on a bar behind. This type of machine may be used with a considerable range of equipment, including plows, hoes, cultivators, sprayers, mowers, and two-wheeled trailers. When the tractor is coupled to a trailer, the operator rides.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#196 2018-08-15 00:14:50

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,495

Re: Miscellany

181) Quasar

Quasar, an astronomical object of very high luminosity found in the centres of some galaxies and powered by gas spiraling at high velocity into an extremely large black hole. The brightest quasars can outshine all of the stars in the galaxies in which they reside, which makes them visible even at distances of billions of light-years. Quasars are among the most distant and luminous objects known.

Discovery Of Quasars

The term quasar derives from how these objects were originally discovered in the earliest radio surveys of the sky in the 1950s. Away from the plane of the Milky Way Galaxy, most radio sources were identified with otherwise normal-looking galaxies. Some radio sources, however, coincided with objects that appeared to be unusually blue stars, although photographs of some of these objects showed them to be embedded in faint, fuzzy halos. Because of their almost starlike appearance, they were dubbed “quasi-stellar radio sources,” which by 1964 had been shortened to “quasar.”

The optical spectra of the quasars presented a new mystery. Photographs taken of their spectra showed locations for emission lines at wavelengths that were at odds with all celestial sources then familiar to astronomers. The puzzle was solved by the Dutch American astronomer Maarten Schmidt, who in 1963 recognized that the pattern of emission lines in 3C 273, the brightest known quasar, could be understood as coming from hydrogen atoms that had a redshift (i.e., had their emission lines shifted toward longer, redder wavelengths by the expansion of the universe) of 0.158. In other words, the wavelength of each line was 1.158 times longer than the wavelength measured in the laboratory, where the source is at rest with respect to the observer. At a redshift of this magnitude, 3C 273 was placed by Hubble’s law at a distance of slightly more than two billion light-years. This was a large, though not unprecedented, distance (bright clusters of galaxies had been identified at similar distances), but 3C 273 is about 100 times more luminous than the brightest individual galaxies in those clusters, and nothing so bright had been seen so far away.
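As a rough check on those numbers, the redshift arithmetic can be written out directly; the Hubble constant below (70 km/s per megaparsec) is an assumed round value, and d ≈ cz/H0 is only the low-redshift approximation to Hubble's law.

# Redshift of 3C 273 and the rough distance it implies.
z = 0.158

# Each emitted wavelength is stretched by a factor (1 + z).
rest_wavelength_nm = 656.3          # hydrogen H-alpha line, for example
observed_nm = (1 + z) * rest_wavelength_nm
print(round(observed_nm, 1))        # about 760 nm: shifted toward the red

# Low-redshift Hubble-law estimate of the distance, d = c*z / H0.
c_km_s = 299_792                    # speed of light, km/s
H0 = 70                             # assumed Hubble constant, km/s per Mpc
distance_mpc = c_km_s * z / H0
distance_gly = distance_mpc * 3.262e6 / 1e9   # 1 Mpc = 3.262 million light-years
print(round(distance_gly, 2))       # roughly 2.2 billion light-years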

An even bigger surprise was that continuing observations of quasars revealed that their brightness can vary significantly on timescales as short as a few days, meaning that the total size of the quasar cannot be more than a few light-days across. Since the quasar is so compact and so luminous, the radiation pressure inside the quasar must be huge; indeed, the only way a quasar can keep from blowing itself up with its own radiation is if it is very massive, at least a million solar masses if it is not to exceed the Eddington limit—the luminosity at which the outward radiation pressure is just balanced by the inward pull of gravity, so that a given luminosity implies a minimum mass (named after English astronomer Arthur Eddington). Astronomers were faced with a conundrum: how could an object about the size of the solar system have a mass of about a million stars and outshine by 100 times a galaxy of a hundred billion stars?
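Both arguments in that paragraph can be restated as one-line calculations: light-travel time caps the size of anything that varies coherently over a few days, and the Eddington limit (roughly 1.26 × 10^31 watts per solar mass) converts an observed luminosity into a minimum mass. The variability timescale and the luminosity used below are assumed example values rather than measurements of any particular quasar.

# 1) Size limit from variability: a source that varies on a timescale of a few
#    days cannot be much larger than the distance light travels in that time.
seconds_per_day = 86_400
c_m_s = 3.0e8
variability_days = 3                          # assumed example timescale
size_m = c_m_s * variability_days * seconds_per_day
au_m = 1.496e11
print(round(size_m / au_m))                   # a few hundred AU: solar-system scale

# 2) Minimum mass from the Eddington limit, about 1.26e31 W per solar mass.
eddington_per_solar_mass = 1.26e31            # watts
luminosity_w = 1.3e37                         # assumed example luminosity
min_mass_solar = luminosity_w / eddington_per_solar_mass
print(f"{min_mass_solar:.1e}")                # about a million solar masses;
# a quasar 100 times brighter needs a correspondingly larger minimum mass.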

The right answer—accretion by gravity onto supermassive black holes—was proposed shortly after Schmidt’s discovery independently by Russian astronomers Yakov Zel’dovich and Igor Novikov and Austrian American astronomer Edwin Salpeter. The combination of high luminosities and small sizes was sufficiently unpalatable to some astronomers that alternative explanations were posited that did not require the quasars to be at the large distances implied by their redshifts. These alternative interpretations have been discredited, although a few adherents remain. For most astronomers, the redshift controversy was settled definitively in the early 1980s when American astronomer Todd Boroson and Canadian American astronomer John Beverly Oke showed that the fuzzy halos surrounding some quasars are actually starlight from the galaxy hosting the quasar and that these galaxies are at high redshifts.

By 1965 it was recognized that quasars are part of a much larger population of unusually blue sources and that most of these are much weaker radio sources too faint to have been detected in the early radio surveys. This larger population, sharing all quasar properties except extreme radio luminosity, became known as “quasi-stellar objects” or simply QSOs. Since the early 1980s most astronomers have regarded QSOs as the high-luminosity variety of an even larger population of “active galactic nuclei,” or AGNs. (The lower-luminosity AGNs are known as “Seyfert galaxies,” named after the American astronomer Carl K. Seyfert, who first identified them in 1943.)

Finding Quasars

Although the first quasars known were discovered as radio sources, it was quickly realized that quasars could be found more efficiently by looking for objects bluer than normal stars. This can be done with relatively high efficiency by photographing large areas of the sky through two or three different-coloured filters. The photographs are then compared to locate the unusually blue objects, whose nature is verified through subsequent spectroscopy. This remains the primary technique for finding quasars, although it has evolved over the years with the replacement of film by electronic charge-coupled devices (CCDs), the extension of the surveys to longer wavelengths in the infrared, and the addition of multiple filters that, in various combinations, are effective at isolating quasars at different redshifts. Quasars have also been discovered through other techniques, including searches for starlike sources whose brightness varies irregularly and X-ray surveys from space; indeed, a high level of X-ray emission is regarded by astronomers as a sure indicator of an accreting black-hole system.

Physical Structure Of Quasars

Quasars and other AGNs are apparently powered by gravitational accretion onto supermassive black holes, where “supermassive” means from roughly a million to a few billion times the mass of the Sun. Supermassive black holes reside at the centres of many large galaxies. In about 5–10 percent of these galaxies, gas tumbles into the deep gravitational well of the black hole and is heated to incandescence as the gas particles pick up speed and pile up in a rapidly rotating “accretion disk” close to the horizon of the black hole. There is a maximum rate set by the Eddington limit at which a black hole can accrete matter before the heating of the infalling gas results in so much outward pressure from radiation that the accretion stops. What distinguishes an “active” galactic nucleus from other galactic nuclei (the 90–95 percent of large galaxies that are currently not quasars) is that the black hole in an active nucleus accretes a few solar masses of matter per year, which, if it is accreting at around 1 percent or more of the Eddington rate, is sufficient to account for a typical quasar with a total luminosity of about 10^39 watts. (The Sun’s luminosity is about 4 × 10^26 watts.)
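The "few solar masses per year" figure can be checked against the quoted luminosity with the standard accretion relation L ≈ η Ṁ c², where η is the radiative efficiency; the 10 percent efficiency and 2 solar masses per year below are assumed illustrative values.

# Luminosity from accretion: L = efficiency * (mass accretion rate) * c^2.
efficiency = 0.1                    # assumed radiative efficiency (~10%)
solar_mass_kg = 1.989e30
seconds_per_year = 3.156e7
c_m_s = 2.998e8

accretion_rate_solar_per_year = 2   # "a few solar masses per year"
mdot_kg_s = accretion_rate_solar_per_year * solar_mass_kg / seconds_per_year

luminosity_w = efficiency * mdot_kg_s * c_m_s**2
print(f"{luminosity_w:.1e} W")      # on the order of 10^39 W, as quoted above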

In addition to black holes and accretion disks, quasars have other remarkable features. Just beyond the accretion disk are clouds of gas that move at high velocities around the inner structure, absorbing high-energy radiation from the accretion disk and reprocessing it into the broad emission lines of hydrogen and ions of other atoms that are the signatures of quasar spectra. Farther from the black hole but still largely in the accretion disk plane are dust-laden gas clouds that can obscure the quasar itself. Some quasars are also observed to have radio jets, which are highly collimated beams of plasma propelled out along the rotation axis of the accretion disk at speeds often approaching that of light. These jets emit beams of radiation that can be observed at X-ray and radio wavelengths (and less often at optical wavelengths).

Because of this complex structure, the appearance of a quasar depends on the orientation of the rotation axis of the accretion disk relative to the observer’s line of sight. Depending on this angle, different quasar components—the accretion disk, emission-line clouds, jets—appear to be more or less prominent. This results in a wide variety of observed phenomena from what are, in reality, physically similar sources.

Evolution Of Quasars

The number density of quasars increases dramatically with redshift, which translates through Hubble’s law to more quasars at larger distances. Because of the finite speed of light, when quasars are observed at great distances, they are observed as they were in the distant past. Thus, the increasing density of quasars with distance means that they were more common in the past than they are now. This trend increases until “look-back times” that correspond to around three billion years after the big bang, which occurred approximately 13.8 billion years ago. At earlier ages, the number density of quasars decreases sharply, corresponding to an era when the quasar population was still building up. The most distant, and thus earliest, quasars known were formed less than a billion years after the big bang.

Individual quasars appear as their central black holes begin to accrete gas at a high rate, possibly triggered by a merger with another galaxy, building up the mass of the central black hole. The current best estimate is that quasar activity is episodic, with individual episodes lasting around a million years and the total quasar lifetime lasting around 10 million years. At some point, quasar activity ceases completely, leaving behind the dormant massive black holes found in most massive galaxies. This “life cycle” appears to proceed most rapidly with the most-massive black holes, which become dormant earlier than less-massive black holes. Indeed, in the current universe the remaining AGN population is made up predominantly of lower-luminosity Seyfert galaxies with relatively small supermassive black holes.

In the present-day universe there is a close relationship between the mass of a black hole and the mass of its host galaxy. This is quite remarkable, since the central black hole accounts for only about 0.1 percent of the mass of the galaxy. It is believed that the intense radiation, mass outflows, and jets from the black hole during its active quasar phase are responsible. The radiation, outflows, and jets heat up and can even remove entirely the interstellar medium from the host galaxy. This loss of gas in the galaxy simultaneously shuts down star formation and chokes off the quasar’s fuel supply, thus freezing both the mass in stars and the mass of the black hole.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#197 2018-08-17 00:50:02

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,495

Re: Miscellany

182) Pressure cooker

Pressure cooker, hermetically sealed pot that uses pressurized steam to cook food quickly. The pressure cooker first appeared in 1679 as Papin’s Digester, named for its inventor, the French-born physicist Denis Papin. The cooker heats water to produce very hot steam under pressure, which raises the temperature inside the pot as high as 266 °F (130 °C), significantly higher than the maximum temperature possible in an ordinary saucepan. The higher heat penetrates food quickly, reducing cooking time without diminishing vitamin and mineral content.
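One way to estimate that temperature is the Clausius–Clapeyron relation. Assuming an operating pressure of about 2 atmospheres absolute and a heat of vaporization of roughly 40.7 kJ/mol for water (both assumed, typical figures), the sketch below gives about 121 °C; cookers run at somewhat higher pressures approach the 130 °C quoted above.

import math

# Clausius-Clapeyron estimate of water's boiling point at elevated pressure:
# 1/T2 = 1/T1 - (R / L_vap) * ln(P2 / P1)
R = 8.314            # gas constant, J/(mol*K)
L_vap = 40_700       # heat of vaporization of water, J/mol (approximate)
T1 = 373.15          # boiling point at 1 atm, in kelvins
pressure_ratio = 2.0 # assumed: ~2 atm absolute inside the cooker

T2 = 1 / (1 / T1 - (R / L_vap) * math.log(pressure_ratio))
print(round(T2 - 273.15, 1))   # about 121 degrees C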

Pressure cookers are especially useful at high altitudes, where they alleviate the problem of low-temperature boiling caused by reduced atmospheric pressure.

Modern innovations in pressure cooker design include safety locks, pressure regulators, portable cookers, and low-pressure fryers.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#198 2018-08-19 00:36:35

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,495

Re: Miscellany

183) Fluorescent lamp

Fluorescent lamp, electric discharge lamp, cooler and more efficient than incandescent lamps, that produces light by the fluorescence of a phosphor coating. A fluorescent lamp consists of a glass tube filled with a mixture of argon and mercury vapour. Metal electrodes at each end are coated with an alkaline earth oxide that gives off electrons easily. When current flows through the gas between the electrodes, the gas is ionized and emits ultraviolet radiation. The inside of the tube is coated with phosphors, substances that absorb ultraviolet radiation and fluoresce (reradiate the energy as visible light).

Because a fluorescent lightbulb does not provide light through the continual heating of a metallic filament, it consumes much less electricity than an incandescent bulb—only one-quarter the electricity or even less, by some estimates. However, up to four times the operating voltage of a fluorescent lamp is needed initially, when the lamp is switched on, in order to ionize the gas. This extra voltage is supplied by a device called a ballast, which also maintains a lower operating voltage after the gas is ionized. In older fluorescent lamps the ballast is located in the lamp fixture, separate from the bulb, and causes the audible humming or buzzing so often associated with fluorescent lamps. In newer, compact fluorescent lamps (CFLs), in which the fluorescent tube is coiled into a shape similar to an incandescent bulb, the ballast is nested into the cup at the base of the bulb assembly and is made of electronic components that reduce or eliminate the buzzing sound. The inclusion of a ballast in each individual bulb raises the cost of the bulb, but the overall cost to the consumer is still lower because of reduced energy consumption and the longer lifetime of the CFL.

CFLs are rated by energy use (in watts) and light output (in lumens), frequently in specific comparison with incandescent bulbs. Specific CFLs are configured for use with dimmer switches and three-way switches and in recessed fixtures.
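To make the watts-versus-lumens comparison concrete, here is a small calculation using assumed, catalogue-style figures: a 60-watt incandescent bulb and a 14-watt CFL, each producing roughly 800 lumens, at an assumed electricity price.

# Illustrative comparison of an incandescent bulb and a CFL with similar light
# output; the wattages, lumen figure and electricity price are assumed values.
lumens = 800
incandescent_w = 60
cfl_w = 14

print("efficacy (lm/W):",
      round(lumens / incandescent_w, 1),   # ~13 lm/W
      "vs",
      round(lumens / cfl_w, 1))            # ~57 lm/W

hours_per_year = 3 * 365                   # three hours of use per day
price_per_kwh = 0.15                       # assumed electricity price per kWh
saving_kwh = (incandescent_w - cfl_w) * hours_per_year / 1000
print("yearly saving:", round(saving_kwh, 1), "kWh, roughly",
      round(saving_kwh * price_per_kwh, 2), "per bulb at the assumed price")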



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#199 2018-08-21 01:27:12

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,495

Re: Miscellany

184) Thermocouple

Thermocouple, also called thermal junction, thermoelectric thermometer, or thermel, a temperature-measuring device consisting of two wires of different metals joined at each end. One junction is placed where the temperature is to be measured, and the other is kept at a constant lower temperature. A measuring instrument is connected in the circuit. The temperature difference causes the development of an electromotive force (known as the Seebeck effect; see below) that is approximately proportional to the difference between the temperatures of the two junctions. Temperature can be read from standard tables, or the measuring instrument can be calibrated to read temperature directly.

Any two different metals or metal alloys exhibit the thermoelectric effect, but only a few are used as thermocouples—e.g., antimony and bismuth, copper and iron, or copper and constantan (a copper-nickel alloy). Usually platinum, either with rhodium or a platinum-rhodium alloy, is used in high-temperature thermocouples. Thermocouple types are named (e.g., type E [nickel-chromium alloy and constantan], J [iron and constantan], N [two nickel-silicon alloys, one containing chromium and the other magnesium], or B [two platinum-rhodium alloys]) according to the metals used to make the wires. The most common type is K (nickel-aluminum and nickel-chromium wires) because of its wide temperature range (from about −200 to 1,260 °C [−300 to 2,300 °F]) and low cost.

A thermopile is a number of thermocouples connected in series. Its results are comparable to the average of several temperature readings. A series circuit also gives greater sensitivity, as well as greater power output, which can be used to operate a device such as a safety valve in a gas stove without the use of external power.

(Seebeck effect, production of an electromotive force (emf) and consequently an electric current in a loop of material consisting of at least two dissimilar conductors when two junctions are maintained at different temperatures. The conductors are commonly metals, though they need not even be solids. The German physicist Thomas Johann Seebeck discovered (1821) the effect. The Seebeck effect is used to measure temperature with great sensitivity and accuracy and to generate electric power for special applications.)
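As a rough numerical illustration of that proportionality, a type K thermocouple produces on the order of 41 microvolts per degree Celsius of junction temperature difference (an approximate, commonly quoted sensitivity; real measurements use standard calibration tables because the response is not perfectly linear), and a thermopile of N couples in series multiplies the output by N.

# Approximate thermocouple / thermopile output.
# 41 microvolts per degree C is a rough, commonly quoted type K sensitivity;
# in practice the emf is looked up in standard calibration tables.
SEEBECK_UV_PER_C = 41

def thermocouple_emf_mv(t_hot_c, t_ref_c):
    """Approximate emf in millivolts for one thermocouple."""
    return SEEBECK_UV_PER_C * (t_hot_c - t_ref_c) / 1000

hot, reference = 300.0, 25.0          # example junction temperatures, deg C
single = thermocouple_emf_mv(hot, reference)
print(round(single, 2), "mV")         # roughly 11 mV from one thermocouple

# A thermopile of 20 couples in series gives roughly 20 times the emf --
# enough output for devices such as a gas-stove safety valve.
print(round(20 * single, 1), "mV")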



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#200 2018-08-23 01:28:14

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,495

Re: Miscellany

185) Giant Tortoise

The giant tortoises of Galapagos are among the most famous of the unique fauna of the Islands.  While giant tortoises once thrived on most of the continents of the world, the Galapagos tortoises now represent one of the remaining two groups of giant tortoises in the entire world — the other group living on Aldabra Atoll in the Indian Ocean. The Galapagos Islands were named for their giant tortoises; the old Spanish word galapago meant saddle, a term early explorers used for the tortoises due to the shape of their shells.             
       
The closest living relative of the Galapagos giant tortoise is the small Chaco tortoise from South America, although it is not a direct ancestor. Scientists believe the first tortoises arrived in Galapagos 2–3 million years ago by drifting 600 miles from the South American coast on vegetation rafts or on their own. They were already large animals before arriving in Galapagos. Colonizing the easternmost islands of Española and San Cristóbal first, they then dispersed throughout the archipelago, eventually establishing at least 15 separate populations on ten of the largest Galapagos Islands.

Although there is a great amount of variation in size and shape among Galapagos tortoises, two main morphological forms exist — the domed carapace (similar to their ancestral form) and the saddle-backed carapace. Domed tortoises tend to be much larger in size and do not have the upward thrust to the front of their carapace; they live on the larger, higher islands with humid highlands where forage is generally abundant and easily available.  Saddle-backed shells evolved on the arid islands in response to the lack of available food during drought.  The front of the carapace angles upward, allowing the tortoise to extend its head higher to reach the higher vegetation, such as cactus pads. 

Tortoise History in Galapagos

Of all of the native species of Galapagos, giant tortoises were the most devastated after the endemic rice rats, with the majority of rice rat species now extinct. One of the giant tortoise’s most amazing adaptations — its ability to survive without food or water for up to a year — was, unfortunately, the indirect cause of its demise. Once buccaneers, whalers and fur sealers discovered that they could have fresh meat for their long voyages by storing live giant tortoises in the holds of their ships, massive exploitation of the species began. Tortoises were also exploited for their oil, which was used to light the lamps of Quito. Two centuries of exploitation resulted in the loss of between 100,000 and 200,000 tortoises. Three species have been extinct for some time, and a fourth species lost its last member, Lonesome George, in June of 2012. It is estimated that 20,000–25,000 wild tortoises live on the islands today.

In addition to their direct exploitation by humans for both food and oil, giant tortoises faced other challenges due to the introduction of exotic species by human visitors. They suffered — and continue to suffer on some islands — predation on tortoise eggs and hatchlings by rats, pigs, and voracious ants, and competition for food and habitat with goats and other large mammals.

With the establishment of the Galapagos National Park and the Charles Darwin Foundation in 1959, a systematic review of the status of the tortoise populations began. Only 11 of the 14 originally named populations remained and most of these were endangered if not already on the brink of extinction. The only thing saving several of the populations was the longevity of tortoises, keeping some old adults alive until conservation efforts could save their species.

Tortoise Taxonomy

The taxonomy of giant tortoises has changed over the decades since they were first named. Today the different populations are considered separate species of the genus Chelonoidis. There are currently 15 species. Giant tortoises were native to each of the big islands (Española, Fernandina, Floreana, Pinta, Pinzón, San Cristóbal, Santa Cruz, Santa Fe and Santiago) as well as the five major volcanoes on Isabela Island (Wolf, Darwin, Alcedo, Sierra Negra and Cerro Azul). Two species have been identified from Santa Cruz. Tortoises are now extinct on Fernandina (due to volcanism), Floreana, Santa Fe and Pinta (due to exploitation). Pinta Island had a single known tortoise, Lonesome George, who lived until June of 2012 at the Tortoise Center on Santa Cruz where he spent the final 40 years of his life. A taxidermy specimen of Lonesome George is now on display at the Tortoise Center.

The Life of a Tortoise

Compared to humans, giant tortoises might deserve to be called “lazy,” spending an average of 16 hours a day resting. Their activity level is driven by ambient temperature and food availability. In the cool season, they are active at midday, sleeping in during the morning and hitting the sack early in the afternoon. In the hot season, their active period is early morning and late afternoon, while midday finds them resting and trying to keep cool under the shade of a bush or half-submerged in muddy wallows. During periods of drought, they can be found resting/sleeping for weeks at a time.

Galapagos tortoises are herbivorous, feeding primarily on cactus pads, grasses, and native fruit.  They drink large quantities of water when available, which they can store in their bladders for long periods of time.

Tortoises breed primarily during the hot season (January to May), though tortoises can be seen mating any month of the year. During the cool season (June to November), female tortoises migrate to nesting zones (generally in more arid areas) to lay their eggs. A female can lay from 1-4 nests over a nesting season (June to December). She digs the hole with her hind feet, then lets the eggs drop down into the nest, and finally covers it again with her hind feet. She never sees what she is doing. The number of eggs ranges from 2-7 for saddle-backed tortoises to sometimes more than 20-25 eggs for domed tortoises. The eggs incubate from 110 to 175 days (incubation periods depend on the month the nest was laid, with eggs laid early in the cool season requiring longer incubation periods than eggs laid at the end of the cool season when the majority of their incubation will occur at the start of the hot season). After hatching, the young hatchlings remain in the nest for a few weeks before emerging out of a small hole adjacent to the nest cap. The gender of a tortoise is determined by the temperature of incubation, with females developing at slightly hotter temperatures.

Tortoise Speak

Tortoises have several ways of communicating with each other. The only known vocalization in Galapagos tortoises is the sound that males make when mating — a bellowing, periodic “groan” that sounds similar to a loudly mooing cow. Female tortoises make no vocalizations at all. The main method of tortoise communication is behavioral. Like many other species, they have ways of conveying dominance and defending themselves. Competing tortoises will stand tall, face each other with mouths agape, and stretch their necks up as high as possible. The highest head nearly always “wins,” while the loser retreats submissively into the brush.

Fine Feathered Friends

It is quite common to see the tiny birds of Galapagos, such as Darwin’s finches and Vermilion flycatchers, perched on top of the shells of their oversized giant tortoise companions. Some finch species have developed a mutualistic relationship with giant tortoises, feeding on the ticks that hide in the folds of the tortoise’s reptilian skin or on their shell. In fact, these birds will dance around in front of the tortoise to indicate that they are ready to eat, and the tortoise then responds by standing tall and stretching out its neck to “expose the buffet.”



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline
