Math Is Fun Forum

  Discussion about math, puzzles, games and fun.   Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫ • π ƒ -¹ ² ³ °


#1176 2021-11-01 00:35:51

Registered: 2005-06-28
Posts: 34,872

Re: Miscellany

1153) Scarecrow

Scarecrow, device posted on cultivated ground to deter birds or other animals from eating or otherwise disturbing seeds, shoots, and fruit; its name derives from its use against the crow. The scarecrow of popular tradition is a mannequin stuffed with straw; free-hanging, often reflective parts movable by the wind are commonly attached to increase effectiveness. A scarecrow outfitted in clothes previously worn by a hunter who has fired on the flock is regarded by some as especially efficacious. A common variant is the effigy of a predator (e.g., an owl or a snake).

The function of the scarecrow is sometimes filled by various audio devices, including recordings of the calls or sounds of predators or noisome insects. Recorded sounds of deerflies in flight, for example, are used to deter deer from young tree plantations. Automatically fired carbide cannons and other simulated gunfire are used to keep migrating geese out of cornfields.

A scarecrow is a decoy or mannequin, often in the shape of a human. Humanoid scarecrows are usually dressed in old clothes and placed in open fields to discourage birds from disturbing and feeding on recently cast seed and growing crops. Scarecrows are used across the world by farmers, and are a notable symbol of farms and the countryside in popular culture.


Birds such as crows and sparrows are the usual targets. Machinery such as windmills has also been employed as scarecrows, but the effectiveness lessens as animals become familiar with the structures.

Since the invention of the humanoid scarecrow, more effective methods have been developed. On California farmland, highly reflective aluminized PET (polyethylene terephthalate) film ribbons are tied to the plants to produce shimmers from the sun. Another approach is using automatic noise guns powered by propane gas. One winery in New York has even used inflatable tube men, or air dancers, to scare away birds.


In England, the Urchfont Scarecrow Festival was established in the 1990s and has become a major local event, attracting up to 10,000 people annually for the May Day Bank Holiday. Originally based on an idea imported from Derbyshire, it was the first Scarecrow Festival to be established in the whole of southern England.

Belbroughton, north Worcestershire, has held an annual Scarecrow Weekend on the last weekend of September since 1996, which raises money for local charities. The village of Meerbrook in Staffordshire holds an annual Scarecrow Festival during the month of May. Tetford and Salmonby, Lincolnshire, jointly host one.

The festival at Wray, Lancashire, was established in the early 1990s and continues to the present day. In the village of Orton, Eden, Cumbria, scarecrows are displayed each year, often using topical themes such as a Dalek exterminating a wind turbine to represent local opposition to a wind farm.

The village of Blackrod, near Bolton in Greater Manchester, holds a popular annual Scarecrow Festival over a weekend usually in early July.

Norland, West Yorkshire, has a Scarecrow festival. Kettlewell in North Yorkshire has held an annual festival since 1994. In Teesdale, County Durham, the villages of Cotherstone, Staindrop and Middleton-in-Teesdale have annual scarecrow festivals.

Scotland's first scarecrow festival was held in West Kilbride, North Ayrshire, in 2004, and there is also one held in Montrose. On the Isle of Skye, the Tattie bogal event is held each year, featuring a scarecrow trail and other events. Tonbridge in Kent also hosts an annual Scarecrow Trail, organised by the local Rotary Club to raise money for local charities. Gisburn, Lancashire, held its first Scarecrow Festival in June 2014.

Mullion, in Cornwall, has held an annual scarecrow festival since 2007.

In the US, St. Charles, Illinois, hosts an annual Scarecrow Festival. Peddler's Village in Bucks County, Pennsylvania, hosts an annual scarecrow festival and presents a scarecrow display in September–October that draws tens of thousands of visitors.

The 'pumpkin people' come in the autumn months in the valley region of Nova Scotia, Canada. They are scarecrows with pumpkin heads doing various things such as playing the fiddle or riding a wooden horse. Hickling, in the south of Nottinghamshire, is another village that celebrates an annual scarecrow event. It is very popular and has successfully raised a great deal of money for charity. Meaford, Ontario, has celebrated the Scarecrow Invasion since 1996.

In the Philippines, the Province of Isabela has recently started a scarecrow festival named after the local-language word for scarecrow: the Bambanti Festival. The province invites all its cities and towns to participate in the festivities, which last a week; the event has drawn tourists from around the island of Luzon.

The largest gathering of scarecrows in one location is 3,812 and was achieved by National Forest Adventure Farm in Burton-upon-Trent, Staffordshire, UK, on 7 August 2014.


It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi. 

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1177 2021-11-02 00:21:25

Registered: 2005-06-28
Posts: 34,872

Re: Miscellany

1154) Pathology

Pathology, medical specialty concerned with determining the causes of disease and the structural and functional changes occurring in abnormal conditions. Early efforts to study pathology were often stymied by religious prohibitions against autopsies, but these gradually relaxed during the late Middle Ages, allowing autopsies to determine the cause of death, the basis for pathology. The resultant accumulating anatomical information culminated in the publication of the first systematic textbook of morbid anatomy by the Italian Giovanni Battista Morgagni in 1761, which located diseases within individual organs for the first time. The correlation between clinical symptoms and pathological changes was not made until the first half of the 19th century.

The existing humoral theories of pathology were replaced by a more scientific cellular theory; Rudolf Virchow in 1858 argued that the nature of disease could be understood by means of the microscopic analysis of affected cells. The bacteriologic theory of disease developed late in the 19th century by Louis Pasteur and Robert Koch provided the final clue to understanding many disease processes.

Pathology as a separate specialty was fairly well established by the end of the 19th century. The pathologist does much of his work in the laboratory and reports to and consults with the clinical physician who directly attends to the patient. The types of laboratory specimens examined by the pathologist include surgically removed body parts, blood and other body fluids, urine, feces, exudates, etc. Pathology practice also includes the reconstruction of the last chapter of the physical life of a deceased person through the procedure of autopsy, which provides valuable and otherwise unobtainable information concerning disease processes. The knowledge required for the proper general practice of pathology is too great to be attainable by single individuals, so wherever conditions permit it, subspecialists collaborate. Among the laboratory subspecialties in which pathologists work are neuropathology, pediatric pathology, general surgical pathology, dermatopathology, and forensic pathology.

Microbial cultures for the identification of infectious disease, simpler access to internal organs for biopsy through the use of glass fibre-optic instruments, finer definition of subcellular structures with the electron microscope, and a wide array of chemical stains have greatly expanded the information available to the pathologist in determining the causes of disease. Formal medical education with the attainment of an M.D. degree or its equivalent is required prior to admission to pathology postgraduate programs in many Western countries. The program required for board certification as a pathologist roughly amounts to five years of postgraduate study and training.

Pathology is the study of the causes and effects of disease or injury. The word pathology also refers to the study of disease in general, incorporating a wide range of biology research fields and medical practices. However, when used in the context of modern medical treatment, the term is often used in a more narrow fashion to refer to processes and tests which fall within the contemporary medical field of "general pathology", an area which includes a number of distinct but inter-related medical specialties that diagnose disease, mostly through analysis of tissue, cell, and body fluid samples. Idiomatically, "a pathology" may also refer to the predicted or actual progression of particular diseases (as in the statement "the many different forms of cancer have diverse pathologies", in which case a more proper choice of word would be "pathophysiologies"), and the affix pathy is sometimes used to indicate a state of disease in cases of both physical ailment (as in cardiomyopathy) and psychological conditions (such as psychopathy). A physician practicing pathology is called a pathologist.

As a field of general inquiry and research, pathology addresses components of disease: cause, mechanisms of development (pathogenesis), structural alterations of cells (morphologic changes), and the consequences of changes (clinical manifestations). In common medical practice, general pathology is mostly concerned with analyzing known clinical abnormalities that are markers or precursors for both infectious and non-infectious disease, and is conducted by experts in one of two major specialties, anatomical pathology and clinical pathology. Further divisions in specialty exist on the basis of the involved sample types (comparing, for example, cytopathology, hematopathology, and histopathology), organs (as in renal pathology), and physiological systems (oral pathology), as well as on the basis of the focus of the examination (as with forensic pathology).

Pathology is a significant field in modern medical diagnosis and medical research.

Pathology is the medical specialty concerned with the study of the nature and causes of diseases. It underpins every aspect of medicine, from diagnostic testing and monitoring of chronic diseases to cutting-edge genetic research and blood transfusion technologies. Pathology is integral to the diagnosis of every cancer.

Pathology plays a vital role across all facets of medicine throughout our lives, from pre-conception to post mortem. In fact it has been said that "Medicine IS Pathology".

Due to the popularity of many television programs, the word ‘pathology’ conjures images of dead bodies and people in lab coats investigating the cause of suspicious deaths for the police. That's certainly a side of pathology, but in fact it’s far more likely that pathologists are busy in a hospital clinic or laboratory helping living people.

Pathologists are specialist medical practitioners who study the cause of disease and the ways in which diseases affect our bodies by examining changes in the tissues and in blood and other body fluids. Some of these changes show the potential to develop a disease, while others show its presence, cause or severity or monitor its progress or the effects of treatment.

The doctors you see in surgery or at a clinic all depend on the knowledge, diagnostic skills and advice of pathologists. Whether it’s a GP arranging a blood test or a surgeon wanting to know the nature of the lump removed at operation, the definitive answer is usually provided by a pathologist. Some pathologists also see patients and are involved directly in the day-to-day delivery of patient care.

Currently pathology has nine major areas of activity. These relate either to the methods used or to the types of disease they investigate.

* Anatomical Pathology
* Chemical Pathology
* Clinical Pathology
* Forensic Pathology
* General Pathology
* Genetic Pathology
* Hematology
* Immunopathology
* Microbiology




#1178 2021-11-02 22:04:30

Registered: 2005-06-28
Posts: 34,872

Re: Miscellany

1155) Otolaryngology

Otolaryngology, also called otorhinolaryngology, medical specialty concerned with the diagnosis and treatment of diseases of the ear, nose, and throat. Traditionally, treatment of the ear was associated with that of the eye in medical practice. With the development of laryngology in the late 19th century, the connection between the ear and throat became known, and otologists became associated with laryngologists.

The study of ear diseases did not develop a clearly scientific basis until the first half of the 19th century, when Jean-Marc-Gaspard Itard and Prosper Ménière made ear physiology and disease a matter of systematic investigation. The scientific basis of the specialty was first formulated by William R. Wilde of Dublin, who in 1853 published Practical Observations on Aural Surgery, and the Nature and Treatment of Diseases of the Ear. Further advances were made with the development of the otoscope, an instrument that enabled visual examination of the tympanic membrane (eardrum).

The investigation of the larynx and its diseases, meanwhile, was aided by a device that was invented in 1855 by Manuel García, a Spanish singing teacher. This instrument, the laryngoscope, was adopted by Ludwig Türck and Jan Czermak, who undertook detailed studies of the pathology of the larynx; Czermak also turned the laryngoscope’s mirror upward to investigate the physiology of the nasopharyngeal cavity, thereby establishing an essential link between laryngology and rhinology. One of Czermak’s assistants, Friedrich Voltolini, improved laryngoscopic illumination and also adapted the instrument for use with the otoscope.

In 1921 Carl Nylen pioneered in the use of a high-powered binocular microscope to perform ear surgery; the operating microscope opened the way to several new corrective procedures on the delicate structures of the ear. Another important 20th-century achievement was the development in the 1930s of the electric audiometer, an instrument used to measure hearing acuity.


Otorhinolaryngology, abbreviated ORL and also known as otolaryngology, otolaryngology – head and neck surgery, or ear, nose, and throat (ENT), is a surgical subspecialty within medicine that deals with the surgical and medical management of conditions of the head and neck. Doctors who specialize in this area are called otorhinolaryngologists, otolaryngologists, head and neck surgeons, or ENT surgeons or physicians. Patients seek treatment from an otorhinolaryngologist for diseases of the ear, nose, throat, base of the skull, head, and neck. These commonly include functional diseases that affect the senses and activities of eating, drinking, speaking, breathing, swallowing, and hearing. In addition, ENT surgery encompasses the surgical management and reconstruction of cancers and benign tumors of the head and neck as well as plastic surgery of the face and neck.


Otorhinolaryngologists are physicians (MD, DO, MBBS, MBChB, etc.) who complete medical school and then 5–7 years of post-graduate surgical training in ORL-H&N. In the United States and Canada, trainees complete a five-year surgical residency after medical school, comprising three to six months of general surgical training and four and a half years in ORL-H&N specialist surgery.

Following residency training, some otolaryngologist-head & neck surgeons complete an advanced sub-specialty fellowship, where training can be one to two years in duration. Fellowships include head and neck surgical oncology, facial plastic surgery, rhinology and sinus surgery, neuro-otology, pediatric otolaryngology, and laryngology. In the United States and Canada, otorhinolaryngology is one of the most competitive specialties in medicine in which to obtain a residency position following medical school.

In the United Kingdom entrance to otorhinolaryngology higher surgical training is highly competitive and involves a rigorous national selection process. The training programme consists of 6 years of higher surgical training after which trainees frequently undertake fellowships in a sub-speciality prior to becoming a consultant.

The typical total length of education and training after secondary school is 12–14 years. Otolaryngology is among the more highly compensated surgical specialties in the United States ($461,000 average annual income in 2019).

What Is an Otolaryngologist?

If you have a health problem with your head or neck, your doctor might recommend that you see an otolaryngologist. That's someone who treats issues in your ears, nose, or throat as well as related areas in your head and neck. They're called ENTs for short.

In the 19th century, doctors figured out that the ears, nose, and throat are closely connected by a system of tubes and passages. They made special tools to take a closer look at those areas and came up with ways to treat problems. A new medical specialty was born.

What Conditions Do Otolaryngologists Treat?

ENTs can do surgery and treat many different medical conditions. You would see one if you have a problem involving:

* An ear condition, such as an infection, hearing loss, or trouble with balance
* Nose and nasal issues like allergies, sinusitis, or growths
* Throat problems like tonsillitis, difficulty swallowing, and voice issues
* Sleep trouble like snoring or obstructive sleep apnea, in which your airway is narrow or blocked and it interrupts your breathing while you sleep
* Infections or tumors (cancerous or not) of your head or neck

Some areas of your head are treated by other kinds of doctors. For example, neurologists deal with problems with your brain or nervous system, and ophthalmologists care for your eyes and vision.

How Are ENT Doctors Trained?

Otolaryngologists go to 4 years of medical school. They then have at least 5 years of special training. Finally, they need to pass an exam to be certified by the American Board of Otolaryngology.

Some also get 1 or 2 years of training in a subspecialty:

* Allergy: These doctors treat environmental allergies (like pollen or pet dander) with medicine or a series of shots called immunotherapy. They also can help you find out if you have a food allergy.
* Facial and reconstructive surgery: These doctors do cosmetic surgery like face lifts and nose jobs. They also help people whose looks have been changed by an accident or who were born with issues that need to be fixed.
* Head and neck: If you have a tumor in your nose, sinuses, mouth, throat, voice box, or upper esophagus, this kind of specialist can help you.
* Laryngology: These doctors treat diseases and injuries that affect your voice box (larynx) and vocal cords. They also can help diagnose and treat swallowing problems.
* Otology and neurotology: If you have any kind of issue with your ears, these specialists can help. They treat conditions like infections, hearing loss, dizziness, and ringing or buzzing in your ears (tinnitus).
* Pediatric ENT: Your child might not be able to tell their doctor what's bothering them. Pediatric ENTs are specially trained to treat youngsters, and they have tools and exam rooms designed to put kids at ease.

Common problems include ear infections, tonsillitis, asthma, and allergies. Pediatric ENTs also care for children with birth defects of the head and neck. They also can help figure out if your child has a speech or language problem.

* Rhinology: These doctors focus on your nose and sinuses. They treat sinusitis, nose bleeds, loss of smell, stuffy nose, and unusual growths.
* Sleep medicine: Some ENTs specialize in sleep problems that involve your breathing, for instance snoring or sleep apnea. Your doctor may order a sleep study to see if you have trouble breathing at times during the night.




#1179 2021-11-03 20:35:54

Registered: 2005-06-28
Posts: 34,872

Re: Miscellany

1156) Auscultation

Auscultation, diagnostic procedure in which the physician listens to sounds within the body to detect certain defects or conditions, such as heart-valve malfunctions or pregnancy. Auscultation originally was performed by placing the ear directly on the chest or abdomen, but it has been practiced mainly with a stethoscope since the invention of that instrument in 1819.

The technique is based on characteristic sounds produced, in the head and elsewhere, by abnormal blood circuits; in the joints by roughened surfaces; in the lower arm by the pulse wave; and in the abdomen by an active fetus or by intestinal disturbances. It is most commonly employed, however, in diagnosing diseases of the heart and lungs.

The heart sounds consist mainly of two separate noises occurring when the two sets of heart valves close. Either partial obstruction of these valves or leakage of blood through them because of imperfect closure results in turbulence in the blood current, causing audible, prolonged noises called murmurs. In certain congenital abnormalities of the heart and the blood vessels in the chest, the murmur may be continuous. Murmurs are often specifically diagnostic for diseases of the individual heart valves; that is, they sometimes reveal which heart valve is causing the ailment. Likewise, modification of the quality of the heart sounds may reveal disease or weakness of the heart muscle. Auscultation is also useful in determining the types of irregular rhythm of the heart and in discovering the sound peculiar to inflammation of the pericardium, the sac surrounding the heart.

Auscultation also reveals the modification of sounds produced in the air tubes and sacs of the lungs during breathing when these structures are diseased.

Auscultation (based on the Latin verb auscultare "to listen") is listening to the internal sounds of the body, usually using a stethoscope. Auscultation is performed for the purposes of examining the circulatory and respiratory systems (heart and breath sounds), as well as the alimentary canal.

The term was introduced by René Laënnec. The act of listening to body sounds for diagnostic purposes has its origin further back in history, possibly as early as Ancient Egypt. (Auscultation and palpation go together in physical examination and are alike in that both have ancient roots, both require skill, and both are still important today.) Laënnec's contributions were refining the procedure, linking sounds with specific pathological changes in the chest, and inventing a suitable instrument (the stethoscope) to mediate between the patient's body and the clinician's ear.

Auscultation is a skill that requires substantial clinical experience, a fine stethoscope and good listening skills. Health professionals (doctors, nurses, etc.) listen to three main organs and organ systems during auscultation: the heart, the lungs, and the gastrointestinal system. When auscultating the heart, doctors listen for abnormal sounds, including heart murmurs, gallops, and other extra sounds coinciding with heartbeats. Heart rate is also noted. When listening to lungs, breath sounds such as wheezes, crepitations and crackles are identified. The gastrointestinal system is auscultated to note the presence of bowel sounds.

Electronic stethoscopes can be recording devices, and can provide noise reduction and signal enhancement. This is helpful for purposes of telemedicine (remote diagnosis) and teaching. This opened the field to computer-aided auscultation. Ultrasonography (US) inherently provides capability for computer-aided auscultation, and portable US, especially portable echocardiography, replaces some stethoscope auscultation (especially in cardiology), although not nearly all of it (stethoscopes are still essential in basic checkups, listening to bowel sounds, and other primary care contexts).

What is auscultation?

Auscultation is the medical term for using a stethoscope to listen to the sounds inside of your body. This simple test poses no risks or side effects.

Why is auscultation used?

Abnormal sounds may indicate problems in these areas:

* lungs
* abdomen
* heart
* major blood vessels

Potential issues can include:

* irregular heart rate
* Crohn’s disease
* phlegm or fluid buildup in your lungs

Your doctor can also use a machine called a Doppler ultrasound for auscultation. This machine uses sound waves that bounce off your internal organs to create images. This is also used to listen to your baby’s heart rate when you’re pregnant.

How is the test performed?

Your doctor places the stethoscope over your bare skin and listens to each area of your body. There are specific things your doctor will listen for in each area.


To hear your heart, your doctor listens to the four main regions where heart valve sounds are the loudest. These are areas of your chest above and slightly below your left breast. Some heart sounds are also best heard when you’re turned toward your left side. In your heart, your doctor listens for:

* what your heart sounds like
* how often each sound occurs
* how loud the sound is


Your doctor listens to one or more regions of your abdomen separately to listen to your bowel sounds. They may hear swishing, gurgling, or nothing at all. Each sound informs your doctor about what’s happening in your intestines.


When listening to your lungs, your doctor compares one side with the other and compares the front of your chest with the back of your chest. Airflow sounds different when airways are blocked, narrowed, or filled with fluid. They’ll also listen for abnormal sounds such as wheezing.

How are results interpreted?

Auscultation can tell your doctor a lot about what’s going on inside of your body.


Traditional heart sounds are rhythmic. Variations can signal to your doctor that some areas may not be getting enough blood or that you have a leaky valve. Your doctor may order additional testing if they hear something unusual.


Your doctor should be able to hear sounds in all areas of your abdomen. Digested material may be stuck or your intestine may be twisted if an area of your abdomen has no sounds. Both possibilities can be very serious.


Lung sounds can vary as much as heart sounds. Wheezes can be either high- or low-pitched and can indicate that mucus is preventing your lungs from expanding properly. One type of sound your doctor might listen for is called a rub. Rubs sound like two pieces of sandpaper rubbing together and can indicate irritated surfaces around your lungs.

What are some alternatives to auscultation?

Other methods that your doctor can use to determine what’s happening inside of your body are palpation and percussion.


Your doctor can perform a palpation simply by placing their fingers over one of your arteries to measure systolic pressure. Doctors usually look for a point of maximal impact (PMI) around your heart.

If your doctor feels something abnormal, they can identify possible issues related to your heart. Abnormalities may include a large PMI or thrill. A thrill is a vibration caused by your heart that’s felt on the skin.


Percussion involves your doctor tapping their fingers on various parts of your abdomen. Your doctor uses percussion to listen for sounds based on the organs or body parts underneath your skin.

You’ll hear hollow sounds when your doctor taps body parts filled with air and much duller sounds when your doctor taps above bodily fluids or an organ, such as your liver.

Percussion allows your doctor to identify many heart-related issues based on the relative dullness of sounds. Conditions that can be identified using percussion include:

* enlarged heart, which is called cardiomegaly
* excessive fluid around the heart, which is called pericardial effusion
* emphysema

(Emphysema is a disease of the lungs. It occurs most often in people who smoke, but it also occurs in people who regularly breathe in irritants.

Emphysema destroys alveoli, which are air sacs in the lungs. The air sacs weaken and eventually break, which reduces the surface area of the lungs and the amount of oxygen that can reach the bloodstream. This makes it harder to breathe, especially when exercising. Emphysema also causes the lungs to lose their elasticity.)

Why is auscultation important?

Auscultation gives your doctor a basic idea about what’s occurring in your body. Your heart, lungs, and other organs in your abdomen can all be tested using auscultation and other similar methods.

For example, if your doctor doesn’t identify a fist-sized area of dullness left of your sternum, you might be tested for emphysema. Also, if your doctor hears what’s called an “opening snap” when listening to your heart, you might be tested for mitral stenosis. You might need additional tests for a diagnosis depending on the sounds your doctor hears.

Auscultation and related methods are a good way for your doctor to know whether or not you need close medical attention. Auscultation can be an excellent preventive measure against certain conditions. Ask your doctor to perform these procedures whenever you have a physical exam.




#1180 2021-11-04 17:13:31

Registered: 2005-06-28
Posts: 34,872

Re: Miscellany

1157) Search for Extraterrestrial Intelligence (SETI)

Are we alone in the universe? Are there advanced civilizations that we can detect, and what would be the societal impact if we do? How can we better the odds of making contact? These questions are both fundamental and universal. Today’s generation is the first that has the science and technology to prove that there is other intelligence in the cosmos.

The SETI Institute’s first project was to conduct a search for narrow-band radio transmissions that would betray the existence of technically competent beings elsewhere in the galaxy. Today, the SETI Institute uses a specially designed instrument for its SETI efforts – the Allen Telescope Array (ATA) located in the Cascade Mountains of California. The ATA is embarking upon a two-year survey of tens of thousands of red dwarf stars, which have many characteristics that make them prime locales in the search for intelligent life. The Institute also uses the ATA to examine newly discovered exoplanets that are found in their star’s habitable zone. There are likely to be tens of billions of such worlds in our galaxy.

Additionally, the Institute is developing a relatively low-cost system for doing optical SETI, which searches for laser flashes that other societies might use to signal their presence. While previous optical SETI programs were limited to examining a single pixel on the sky at any given time, the new system will be able to monitor the entire night sky simultaneously. It will be a revolution in our ability to discover intermittent signals that otherwise would never be found. The search has barely begun – but the age-old question of “Are we alone in the universe?” could be answered in our lifetime.
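Why narrow-band? A transmission confined to a very narrow frequency range concentrates all its power into one spectral bin, while natural broadband noise spreads across every bin, so even a weak carrier can stand out after a Fourier transform. A minimal Python sketch of the idea (the sample rate, tone frequency, and amplitude are all made-up demo values, not real SETI parameters):

```python
import numpy as np

# Toy narrow-band search: a weak sinusoidal "carrier" buried in
# broadband noise stands out sharply in the frequency domain.
rng = np.random.default_rng(0)

fs = 1024.0          # sample rate (Hz), arbitrary for the demo
n = 4096             # number of samples
t = np.arange(n) / fs

signal_freq = 137.0  # hypothetical narrow-band transmission (Hz)
amplitude = 0.2      # well below the noise standard deviation of 1.0

x = amplitude * np.sin(2 * np.pi * signal_freq * t) + rng.standard_normal(n)

# Power spectrum: the carrier's energy lands in essentially one bin,
# while the noise power is spread over all n/2 bins.
spectrum = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(n, d=1.0 / fs)

peak_bin = np.argmax(spectrum[1:]) + 1   # skip the DC bin
print(f"strongest tone near {freqs[peak_bin]:.1f} Hz")
```

The same principle scales up: longer observations give finer frequency resolution, so an artificial carrier grows ever sharper against the noise floor.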

SETI, in full Search for Extraterrestrial Intelligence, ongoing effort to seek intelligent extraterrestrial life. SETI focuses on receiving and analyzing signals from space, particularly in the radio and visible-light regions of the electromagnetic spectrum, looking for nonrandom patterns likely to have been sent either deliberately or inadvertently by technologically advanced beings. The first modern SETI search was Project Ozma (1960), which made use of a radio telescope in Green Bank, West Virginia. SETI approaches include targeted searches, which typically concentrate on groups of nearby Sun-like stars, and systematic surveys covering all directions. The value of SETI efforts has been controversial; programs initiated by NASA in the 1970s were terminated by congressional action in 1993. Subsequently, SETI researchers organized privately funded programs—e.g., the targeted-search Project Phoenix in the U.S. and the survey-type SERENDIP projects in the U.S. and Australia. See also Drake equation.

Extraterrestrial intelligence

Extraterrestrial intelligence, hypothetical extraterrestrial life that is capable of thinking, purposeful activity. Work in the new field of astrobiology has provided some evidence that evolution of other intelligent species in the Milky Way Galaxy is not utterly improbable. In particular, more than 4,000 extrasolar planets have been detected, and underground water is likely present on Mars and on some of the moons of the outer solar system. These discoveries suggest that there could be many worlds on which life, and occasionally intelligent life, might arise. Searches for radio signals or optical flashes from other star systems that would indicate the presence of extraterrestrial intelligence have so far proved fruitless, but detection of such signals would have an enormous scientific and cultural impact.

Argument for extraterrestrial intelligence

The argument for the existence of extraterrestrial intelligence is based on the so-called principle of mediocrity. Widely believed by astronomers since the work of Nicolaus Copernicus, this principle states that the properties and evolution of the solar system are not unusual in any important way. Consequently, the processes on Earth that led to life, and eventually to thinking beings, could have occurred throughout the cosmos.

The most important assumptions in this argument are that (1) planets capable of spawning life are common, (2) biota will spring up on such worlds, and (3) the workings of natural selection on planets with life will at least occasionally produce intelligent species. To date, only the first of these assumptions has been proven. Indeed, astronomers have found several small rocky planets that, like Earth, are the right distance from their stars to have atmospheres and oceans able to support life. Unlike the efforts that have detected massive, Jupiter-size planets by measuring the wobble they induce in their parent stars, the search for smaller worlds involves looking for the slight dimming of a star that occurs when an Earth-size planet passes in front of it. By observing such transits, the U.S. satellite Kepler, launched in 2009, found thousands of planets, more than 20 of which are Earth-sized worlds in the habitable zone where liquid water can survive on the surface. Another approach is to construct space-based telescopes that can analyze the light reflected from the atmospheres of planets around other stars, searching for gases such as oxygen or methane that are indicators of biological activity. In addition, space probes are trying to find evidence that the conditions for life might have emerged on Mars or other worlds in the solar system, thus addressing assumption 2. Proof of assumption 3, that thinking beings will evolve on some of the worlds with life, requires direct evidence: encounters, the discovery of physical artifacts, or the detection of signals. Claims of encounters are problematic; despite decades of reports involving unidentified flying objects, crashed spacecraft, crop circles, and abductions, most scientists remain unconvinced that any of these constitute adequate proof of visiting aliens.
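The transit signal mentioned above is tiny, which a quick calculation makes concrete: the fractional dimming equals the ratio of the planet's disk area to the star's disk area. A minimal sketch (the radii are approximate and the function name is mine, not from the text):

```python
# Transit-method arithmetic: fractional drop in starlight during a transit.
def transit_depth(planet_radius_km, star_radius_km):
    """Fraction of the star's light blocked by the planet's disk."""
    return (planet_radius_km / star_radius_km) ** 2

# Earth crossing the Sun (approximate radii):
depth = transit_depth(6371.0, 695_700.0)
print(f"{depth:.1e}")   # about 8.4e-05, i.e. a dimming of roughly 0.008%
```

A dimming of less than one part in ten thousand is why detecting Earth-size transits required a dedicated space telescope such as Kepler.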

Searching for extraterrestrial intelligence:

Artifacts in the solar system

Extraterrestrial artifacts have not yet been found. At the beginning of the 20th century, American astronomer Percival Lowell claimed to see artificially constructed canals on Mars. These would have been convincing proof of intelligence, but the features seen by Lowell were in fact optical illusions. Since 1890, some limited telescopic searches for alien objects near Earth have been made. These investigated the so-called Lagrangian points, stable locations in the Earth-Moon system. No large objects—at least down to several tens of metres in size—were seen.


The most promising scheme for finding extraterrestrial intelligence is to search for electromagnetic signals, more particularly radio or light, that may be beamed toward Earth from other worlds, either inadvertently (in the same way that Earth leaks television and radar signals into space) or as a deliberate beacon signal. Physical law implies that interstellar travel requires enormous amounts of energy or long travel times. Sending signals, on the other hand, requires only modest energy expenditure, and the messages travel at the speed of light.

Radio searches

Projects to look for such signals are known as the search for extraterrestrial intelligence (SETI). The first modern SETI experiment was American astronomer Frank Drake’s Project Ozma, which took place in 1960. Drake used a radio telescope (essentially a large antenna) in an attempt to uncover signals from nearby Sun-like stars. In 1961 Drake proposed what is now known as the Drake equation, which estimates the number of signaling worlds in the Milky Way Galaxy. This number is the product of terms that define the frequency of habitable planets, the fraction of habitable planets upon which intelligent life will arise, and the length of time sophisticated societies will transmit signals. Because many of these terms are unknown, the Drake equation is more useful in defining the problems of detecting extraterrestrial intelligence than in predicting when, if ever, this will happen.
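The product described above can be written out explicitly. In the sketch below, every parameter value is purely illustrative, chosen only to show how the terms multiply; none of these numbers is a measurement:

```python
# The Drake equation as a product of factors; all inputs below are assumed.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = estimated number of signaling civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(
    R_star=1.0,   # rate of star formation (stars/year) - assumed
    f_p=0.5,      # fraction of stars with planets - assumed
    n_e=2.0,      # habitable planets per such star - assumed
    f_l=0.5,      # fraction of habitable planets where life arises - assumed
    f_i=0.1,      # fraction of those developing intelligence - assumed
    f_c=0.1,      # fraction of those that transmit signals - assumed
    L=1000.0,     # years a society keeps transmitting - assumed
)
print(round(N, 6))   # 5.0 with these illustrative inputs
```

Changing the poorly known factors (especially L) by a few orders of magnitude swings N from far below one to millions, which is exactly why the equation frames the problem rather than answers it.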

By the mid-1970s the technology used in SETI programs had advanced enough for the National Aeronautics and Space Administration to begin SETI projects, but concerns about wasteful government spending led Congress to end these programs in 1993. However, SETI projects funded by private donors (in the United States) continued. One such search was Project Phoenix, which began in 1995 and ended in 2004. Phoenix scrutinized approximately 1,000 nearby star systems (within 150 light-years of Earth), most of which were similar in size and brightness to the Sun. The search was conducted on several radio telescopes, including the 305-metre (1,000-foot) radio telescope at the Arecibo Observatory in Puerto Rico, and was run by the SETI Institute of Mountain View, California.

Other radio SETI experiments, such as Project SERENDIP V (begun in 2009 by the University of California at Berkeley) and Australia’s Southern SERENDIP (begun in 1998 by the University of Western Sydney at Macarthur), scan large tracts of the sky and make no assumption about the directions from which signals might come. The former uses the Green Bank Telescope and, until its collapse in 2020, the Arecibo telescope, and the latter (which ended in 2005) was carried out with the 64-metre (210-foot) telescope near Parkes, New South Wales. Such sky surveys are generally less sensitive than targeted searches of individual stars, but they are able to “piggyback” onto telescopes that are already engaged in making conventional astronomical observations, thus securing a large amount of search time. In contrast, targeted searches such as Project Phoenix require exclusive telescope access.

In 2007 a new instrument, jointly built by the SETI Institute and the University of California at Berkeley and designed for round-the-clock SETI observations, began operation in northeastern California. The Allen Telescope Array (ATA, named after its principal funder, American technologist Paul Allen) has 42 small (6 metres [20 feet] in diameter) antennas. When complete, the ATA will have 350 antennas and be hundreds of times faster than previous experiments in the search for transmissions from other worlds.

In 2016 the Breakthrough Listen project began a 10-year survey of the one million closest stars, the nearest 100 galaxies, the plane of the Milky Way Galaxy, and the galactic centre using the Parkes telescope and the 100-metre (328-foot) telescope at the National Radio Astronomy Observatory in Green Bank, West Virginia. That same year the largest single-dish radio telescope in the world, the Five-hundred-meter Aperture Spherical Radio Telescope in China, began operation and included the search for extraterrestrial intelligence among its objectives.

Since 1999 some of the data collected by Project SERENDIP (and, since 2016, by Breakthrough Listen) has been distributed on the Web for use by volunteers who have downloaded a free screen saver, SETI@home. The screen saver searches the data for signals and sends its results back to Berkeley. Because the screen saver is used by several million people, enormous computational power is available to look for a variety of signal types. Results from this home processing are compared with subsequent observations to see whether detected signals appear more than once, which would suggest that they warrant further confirmation study.

Nearly all radio SETI searches have used receivers tuned to the microwave band near 1,420 megahertz. This is the frequency of natural emission from hydrogen and is a spot on the radio dial that would be known by any technically competent civilization. The experiments hunt for narrowband signals (typically 1 hertz wide or less) that would be distinct from the broadband radio emissions naturally produced by objects such as pulsars and interstellar gas. Receivers used for SETI contain sophisticated digital devices that can simultaneously measure radio energy in many millions of narrowband channels.
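A toy version of such a narrowband search can be sketched with a discrete Fourier transform: a weak tone hidden in broadband noise still produces a single dominant spectral bin. This is only an illustration (the sample rate, tone frequency, and amplitudes are invented), not the SETI Institute's actual signal-processing pipeline:

```python
# Toy narrowband search: a faint sinusoid buried in noise concentrates its
# energy into one FFT bin, while broadband noise spreads across all bins.
import numpy as np

def find_narrowband_peak(signal, sample_rate):
    """Return the frequency (Hz) of the strongest spectral bin."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

rng = np.random.default_rng(0)
fs = 10_000.0                                  # sample rate in Hz (assumed)
t = np.arange(8192) / fs
tone = 0.5 * np.sin(2 * np.pi * 1420.0 * t)    # weak "carrier" at 1420 Hz
noise = rng.normal(0.0, 1.0, t.size)           # broadband noise, twice as strong
peak = find_narrowband_peak(tone + noise, fs)
print(round(peak))                             # close to 1420
```

The same principle, scaled up to millions of channels about 1 hertz wide, is what lets SETI receivers pull an artificial carrier out of the natural broadband emission of pulsars and interstellar gas.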

Optical SETI

SETI searches for light pulses are also under way at a number of institutions, including the University of California at Berkeley as well as Lick Observatory and Harvard University. The Berkeley and Lick experiments investigate nearby star systems, and the Harvard effort scans all the sky that is visible from Massachusetts. Sensitive photomultiplier tubes are affixed to conventional mirror telescopes and are configured to look for flashes of light lasting a nanosecond (a billionth of a second) or less. Such flashes could be produced by extraterrestrial societies using high-powered pulsed lasers in a deliberate effort to signal other worlds. By concentrating the energy of the laser into a brief pulse, the transmitting civilization could ensure that the signal momentarily outshines the natural light from its own sun.
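The advantage of pulsing can be seen with simple arithmetic: peak power is pulse energy divided by pulse duration. The numbers below are assumptions chosen purely for illustration:

```python
# Illustrative arithmetic: squeezing a laser's energy into a nanosecond
# multiplies its peak power enormously (all values assumed, not sourced).
pulse_energy_j = 1e6          # assumed: one megajoule per pulse
pulse_duration_s = 1e-9       # one nanosecond
peak_power_w = pulse_energy_j / pulse_duration_s
print(f"{peak_power_w:.0e} W")   # 1e+15 W during the pulse
```

A petawatt-scale flash, confined to a narrow beam and a narrow wavelength band, can briefly exceed the light its own sun sends in that same direction and band, which is what makes nanosecond-scale optical SETI detections plausible.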

Results and two-way communication

No confirmed extraterrestrial signals have yet been found by SETI experiments. Early searches, which were unable to quickly determine whether an emission was terrestrial or extraterrestrial in origin, would frequently find candidate signals. The most famous of these was the so-called “Wow” signal, measured by a SETI experiment at Ohio State University in 1977. Subsequent observations failed to find this signal again, and so the Wow signal, as well as other similar detections, is not considered a good candidate for being extraterrestrial.

Most SETI experiments do not transmit signals into space. Because the distance even to nearby extraterrestrial intelligence could be hundreds or thousands of light-years, two-way communication would be tedious. For this reason, SETI experiments focus on finding signals that could have been deliberately transmitted or could be the result of inadvertent emission from extraterrestrial civilizations.

(The search for extraterrestrial intelligence (SETI) is a collective term for scientific searches for intelligent extraterrestrial life, for example, monitoring electromagnetic radiation for signs of transmissions from civilizations on other planets.

Scientific investigation began shortly after the advent of radio in the early 1900s, and focused international efforts have been going on since the 1980s. In 2015, Stephen Hawking and Russian billionaire Yuri Milner announced a well-funded effort called Breakthrough Listen.

Breakthrough Listen is a project to search for intelligent extraterrestrial communications in the Universe. With $100 million in funding and thousands of hours of dedicated telescope time on state-of-the-art facilities, it is the most comprehensive search for alien communications to date. The project began in January 2016, and is expected to continue for 10 years. It is a component of Yuri Milner's Breakthrough Initiatives program. The science program for Breakthrough Listen is based at Berkeley SETI Research Center, located in the Astronomy Department at the University of California, Berkeley.

The project uses radio wave observations from the Green Bank Observatory and the Parkes Observatory, and visible light observations from the Automated Planet Finder. Targets for the project include one million nearby stars and the centers of 100 galaxies. All data generated from the project are available to the public, and SETI@Home (BOINC) is used for some of the data analysis. The first results were published in April 2017, with further updates expected every 6 months.)


It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi. 

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1181 2021-11-05 15:23:51

Registered: 2005-06-28
Posts: 34,872

Re: Miscellany

1158) Cardiology

The term cardiology is derived from the Greek words “cardia,” referring to the heart, and “logy,” meaning “study of.” Cardiology is a branch of medicine that concerns diseases and disorders of the heart, ranging from congenital defects to acquired heart diseases such as coronary artery disease and congestive heart failure.

Physicians who specialize in cardiology are called cardiologists and they are responsible for the medical management of various heart diseases. Cardiac surgeons are the specialist physicians who perform surgical procedures to correct heart disorders.

Cardiology milestones

Some of the major milestones in the discipline of cardiology are listed below:

1628 : The circulation of blood was described by the English physician William Harvey.
1706 : A French anatomy professor, Raymond de Vieussens, described the structure of the heart's chambers and vessels.
1733 : Blood pressure was first measured by the English clergyman and scientist Stephen Hales.
1816 : A French physician, René Laënnec, invented the stethoscope.
1903 : A Dutch physiologist, Willem Einthoven, developed the electrocardiograph (ECG), a vital instrument used to measure the electrical activity of the heart and diagnose heart abnormalities.
1912 : An American physician, James Herrick, described atherosclerosis, one of the most common diseases of the heart.
1938 : Robert Gross, an American surgeon, performed the first heart surgery.
1951 : The first artificial heart valve was developed by Charles Hufnagel.
1952 : An American surgeon, Floyd John Lewis, performed the first open-heart surgery.
1967 : Christiaan Barnard, a South African surgeon, performed the first human heart transplant.
1982 : An American surgeon, William DeVries, implanted a permanent artificial heart, designed by Robert Jarvik, into a patient.

Cardiology is a branch of medicine that deals with the disorders of the heart as well as some parts of the circulatory system. The field includes medical diagnosis and treatment of congenital heart defects, coronary artery disease, heart failure, valvular heart disease and electrophysiology. Physicians who specialize in this field of medicine are called cardiologists, a specialty of internal medicine. Pediatric cardiologists are pediatricians who specialize in cardiology. Physicians who specialize in cardiac surgery are called cardiothoracic surgeons or cardiac surgeons, a specialty of general surgery.

Cardiology, medical specialty dealing with the diagnosis and treatment of diseases and abnormalities involving the heart and blood vessels. Cardiology is a medical, not surgical, discipline. Cardiologists provide the continuing care of patients with cardiovascular disease, performing basic studies of heart function and supervising all aspects of therapy, including the administration of drugs to modify heart functions.

The foundation of the field of cardiology was laid in 1628, when English physician William Harvey published his observations on the anatomy and physiology of the heart and circulation. From that period, knowledge grew steadily as physicians relied on scientific observation, rejecting the prejudices and superstitions of previous eras, and conducted fastidious and keen studies of the physiology, anatomy, and pathology of the heart and blood vessels. During the 18th and 19th centuries physicians acquired a deeper understanding of the vagaries of pulse and blood pressure, of heart sounds and heart murmurs (through the practice of auscultation, aided by the invention of the stethoscope by French physician René Laënnec), of respiration and exchange of blood gases in the lungs, of heart muscle structure and function, of congenital heart defects, of electrical activity in the heart muscle, and of irregular heart rhythms (arrhythmias). Dozens of clinical observations conducted in those centuries live on today in the vernacular of cardiology—for example, Adams-Stokes syndrome, a type of heart block named for Irish physicians Robert Adams and William Stokes; Austin Flint murmur, named for the American physician who discovered the disorder; and tetralogy of Fallot, a combination of congenital heart defects named for French physician Étienne-Louis-Arthur Fallot.

Much of the progress in cardiology during the 20th century was made possible by improved diagnostic tools. Electrocardiography, the measurement of electrical activity in the heart, evolved from research by Dutch physiologist Willem Einthoven in 1903, and radiological evaluation of the heart grew out of German physicist Wilhelm Conrad Röntgen’s experiments with X-rays in 1895. Echocardiography, the generation of images of the heart by directing ultrasound waves through the chest wall, was introduced in the early 1950s. Cardiac catheterization, invented in 1929 by German surgeon Werner Forssmann and refined soon after by American physiologists André Cournand and Dickinson Richards, opened the way for measuring pressure inside the heart, studying normal and abnormal electrical activity, and directly visualizing the heart chambers and blood vessels (angiography). Today the discipline of nuclear cardiology provides a means of measuring blood flow and contraction in heart muscle through the use of radioisotopes.

As diagnostic capabilities have grown, so have treatment options. Drugs have been developed by the pharmaceutical industry to treat heart failure, angina pectoris, coronary heart disease, hypertension (high blood pressure), arrhythmia, and infections such as endocarditis. In parallel with advances in cardiac catheterization and angiography, surgeons developed techniques for allowing the blood circulation to bypass the heart through heart-lung machines, thereby permitting surgical correction of all manner of acquired and congenital heart diseases. Other advances in cardiology include electrocardiographic monitors, pacemakers and defibrillators for detecting and treating arrhythmias, radio-frequency ablation of certain abnormal rhythms, and balloon angioplasty and other nonsurgical treatments of blood vessel obstruction. It is expected that discoveries in genetics and molecular biology will further aid cardiologists in their understanding of cardiovascular disease.


It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi. 

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1182 2021-11-06 00:22:29

Registered: 2005-06-28
Posts: 34,872

Re: Miscellany

1159) Nervous System

Nervous system, organized group of cells specialized for the conduction of electrochemical stimuli from sensory receptors through a network to the site at which a response occurs.

All living organisms are able to detect changes within themselves and in their environments. Changes in the external environment include those of light, temperature, sound, motion, and odour, while changes in the internal environment include those in the position of the head and limbs as well as in the internal organs. Once detected, these internal and external changes must be analyzed and acted upon in order to survive. As life on Earth evolved and the environment became more complex, the survival of organisms depended upon how well they could respond to changes in their surroundings. One factor necessary for survival was a speedy reaction or response. Since communication from one cell to another by chemical means was too slow to be adequate for survival, a system evolved that allowed for faster reaction. That system was the nervous system, which is based upon the almost instantaneous transmission of electrical impulses from one region of the body to another along specialized nerve cells called neurons.

Nervous systems are of two general types, diffuse and centralized. In the diffuse type of system, found in lower invertebrates, there is no brain, and neurons are distributed throughout the organism in a netlike pattern. In the centralized systems of higher invertebrates and vertebrates, a portion of the nervous system has a dominant role in coordinating information and directing responses. This centralization reaches its culmination in vertebrates, which have a well-developed brain and spinal cord. Impulses are carried to and from the brain and spinal cord by nerve fibres that make up the peripheral nervous system.

This article begins with a discussion of the general features of nervous systems—that is, their function of responding to stimuli and the rather uniform electrochemical processes by which they generate a response. Following that is a discussion of the various types of nervous systems, from the simplest to the most complex.

Form and function of nervous systems:

Stimulus-response coordination

The simplest type of response is a direct one-to-one stimulus-response reaction. A change in the environment is the stimulus; the reaction of the organism to it is the response. In single-celled organisms, the response is the result of a property of the cell fluid called irritability. In simple organisms, such as algae, protozoans, and fungi, a response in which the organism moves toward or away from the stimulus is called taxis. In larger and more complicated organisms—those in which response involves the synchronization and integration of events in different parts of the body—a control mechanism, or controller, is located between the stimulus and the response. In multicellular organisms, this controller consists of two basic mechanisms by which integration is achieved—chemical regulation and nervous regulation.

In chemical regulation, substances called hormones are produced by well-defined groups of cells and are either diffused or carried by the blood to other areas of the body where they act on target cells and influence metabolism or induce synthesis of other substances. The changes resulting from hormonal action are expressed in the organism as influences on, or alterations in, form, growth, reproduction, and behaviour.

Plants respond to a variety of external stimuli by utilizing hormones as controllers in a stimulus-response system. Directional responses of movement are known as tropisms and are positive when the movement is toward the stimulus and negative when it is away from the stimulus. When a seed germinates, the growing stem turns upward toward the light, and the roots turn downward away from the light. Thus, the stem shows positive phototropism and negative geotropism, while the roots show negative phototropism and positive geotropism. In this example, light and gravity are the stimuli, and directional growth is the response. The controllers are certain hormones synthesized by cells in the tips of the plant stems. These hormones, known as auxins, diffuse through the tissues beneath the stem tip and concentrate toward the shaded side, causing elongation of these cells and, thus, a bending of the tip toward the light. The end result is the maintenance of the plant in an optimal condition with respect to light.

In animals, in addition to chemical regulation via the endocrine system, there is another integrative system called the nervous system. A nervous system can be defined as an organized group of cells, called neurons, specialized for the conduction of an impulse—an excited state—from a sensory receptor through a nerve network to an effector, the site at which the response occurs.

Organisms that possess a nervous system are capable of much more complex behaviour than are organisms that do not. The nervous system, specialized for the conduction of impulses, allows rapid responses to environmental stimuli. Many responses mediated by the nervous system are directed toward preserving the status quo, or homeostasis, of the animal. Stimuli that tend to displace or disrupt some part of the organism call forth a response that results in reduction of the adverse effects and a return to a more normal condition. Organisms with a nervous system are also capable of a second group of functions that initiate a variety of behaviour patterns. Animals may go through periods of exploratory or appetitive behaviour, nest building, and migration. Although these activities are beneficial to the survival of the species, they are not always performed by the individual in response to an individual need or stimulus. Finally, learned behaviour can be superimposed on both the homeostatic and initiating functions of the nervous system.

Intracellular systems

All living cells have the property of irritability, or responsiveness to environmental stimuli, which can affect the cell in different ways, producing, for example, electrical, chemical, or mechanical changes. These changes are expressed as a response, which may be the release of secretory products by gland cells, the contraction of muscle cells, the bending of a plant-stem cell, or the beating of whiplike “hairs,” or cilia, by ciliated cells.

The responsiveness of a single cell can be illustrated by the behaviour of the relatively simple amoeba. Unlike some other protozoans, an amoeba lacks highly developed structures that function in the reception of stimuli and in the production or conduction of a response. The amoeba behaves as though it had a nervous system, however, because the general responsiveness of its cytoplasm serves the functions of a nervous system. An excitation produced by a stimulus is conducted to other parts of the cell and evokes a response by the animal. An amoeba will move to a region of a certain level of light. It will be attracted by chemicals given off by foods and exhibit a feeding response. It will also withdraw from a region with noxious chemicals and exhibit an avoidance reaction upon contacting other objects.

Organelle systems

In more-complex protozoans, specialized cellular structures, or organelles, serve as receptors of stimulus and as effectors of response. Receptors include stiff sensory bristles in ciliates and the light-sensitive eyespots of flagellates. Effectors include cilia (slender, hairlike projections from the cell surface), flagella (elongated, whiplike cilia), and other organelles associated with drawing in food or with locomotion. Protozoans also have subcellular cytoplasmic filaments that, like muscle tissue, are contractile. The vigorous contraction of the protozoan Vorticella, for example, is the result of contraction of a threadlike structure called a myoneme in the stalk.

Although protozoans clearly have specialized receptors and effectors, it is not certain that there are special conducting systems between the two. In a ciliate such as Paramecium, the beating of the cilia—which propels it along—is not random, but coordinated. Beating of the cilia begins at one end of the organism and moves in regularly spaced waves to the other end, suggesting that coordinating influences are conducted longitudinally. A system of fibrils connecting the bodies in which the cilia are rooted may provide conducting paths for the waves, but coordination of the cilia may also take place without such a system. Each cilium may respond to a stimulus carried over the cell surface from an adjacent cilium—in which case, coordination would be the result of a chain reaction from cilium to cilium.

The best evidence that formed structures are responsible for coordination comes from another ciliate, Euplotes, which has a specialized band of ciliary rows (membranelles) and widely separated tufts of cilia (cirri). By means of the coordinated action of these structures, Euplotes is capable of several complicated movements in addition to swimming (e.g., turning sharply, moving backward, spinning). The five cirri at the rear of the organism are connected to the anterior end in an area known as the motorium. The fibres of the motorium apparently provide coordination between the cirri and the membranelles. The membranelles, cirri, and motorium constitute a neuromotor system.

Nervous systems

The basic pattern of stimulus-response coordination in animals is an organization of receptor, adjustor, and effector units. External stimuli are received by the receptor cells, which, in most cases, are neurons. (In a few instances, a receptor is a non-nervous sensory epithelial cell, such as a hair cell of the inner ear or a taste cell, which stimulates adjacent neurons.) The stimulus is modified, or transduced, into an electrical impulse in the receptor neuron. This incoming excitation, or afferent impulse, then passes along an extension, or axon, of the receptor to an adjustor, called an interneuron. (All neurons are capable of conducting an impulse, which is a brief change in the electrical charge on the cell membrane. Such an impulse can be transmitted, without loss in strength, many times along an axon until the message, or input, reaches another neuron, which in turn is excited.) The interneuron-adjustor selects, interprets, or modifies the input from the receptor and sends an outgoing, or efferent, impulse to an efferent neuron, such as a motor neuron. The efferent neuron, in turn, makes contact with an effector such as a muscle or gland, which produces a response.

In the simplest arrangement, the receptor-adjustor-effector units form a functional group known as the reflex arc. Sensory cells carry afferent impulses to a central interneuron, which makes contact with a motor neuron. The motor neuron carries efferent impulses to the effector, which produces the response. Three types of neurons are involved in this reflex arc, but a two-neuron arc, in which the receptor makes contact directly with the motor neuron, also occurs. In a two-neuron arc, simple reflexes are prompt, short-lived, and automatic and involve only a part of the body. Examples of simple reflexes are the contraction of a muscle in response to stretch, the blink of the eye when the cornea is touched, and salivation at the sight of food. Reflexes of this type are usually involved in maintaining homeostasis.
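The receptor-interneuron-motor neuron chain described above can be caricatured in code. This is a toy model with an invented firing threshold, not a biologically faithful simulation:

```python
# Toy three-neuron reflex arc: receptor -> interneuron -> motor neuron.
def receptor(stimulus_strength):
    """Transduce a stimulus into an afferent impulse if it exceeds a
    threshold (the 0.5 cutoff is assumed for illustration)."""
    return stimulus_strength > 0.5

def interneuron(afferent_impulse):
    """The central adjustor; here it simply relays the impulse onward."""
    return afferent_impulse

def motor_neuron(efferent_impulse):
    """Drive the effector: the response is a muscle contraction."""
    return "contract" if efferent_impulse else "rest"

def reflex_arc(stimulus_strength):
    return motor_neuron(interneuron(receptor(stimulus_strength)))

print(reflex_arc(0.9))   # contract - stimulus above threshold
print(reflex_arc(0.1))   # rest - below threshold, no response
```

In a real nervous system the interneuron does far more than pass the signal along; as the following paragraphs explain, it is the concentration and interconnection of interneurons that distinguishes complex nervous systems from simple ones.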

The differences between simple and complex nervous systems lie not in the basic units but in their arrangement. In higher nervous systems, there are more interneurons concentrated in the central nervous system (brain and spinal cord) that mediate the impulses between afferent and efferent neurons. Sensory impulses from particular receptors travel through specific neuronal pathways to the central nervous system. Within the central nervous system, though, the impulse can travel through multiple pathways formed by numerous neurons. Theoretically, the impulse can be distributed to any of the efferent motor neurons and produce a response in any of the effectors. It is also possible for many kinds of stimuli to produce the same response.

As a result of the integrative action of the interneuron, the behaviour of the organism is more than the simple sum of its reflexes; it is an integrated whole that exhibits coordination between many individual reflexes. Reflexes can occur in a complicated sequence producing elaborate behaviour patterns. Behaviour in such cases is characterized not by inherited, stereotyped responses but by flexibility and adaptability to circumstances. Many automatic, unconditioned reflexes can be modified by or adapted to new stimuli. The experiments of Russian physiologist Ivan Petrovich Pavlov, for example, showed that if an animal salivates at the sight of food while another stimulus, such as the sound of a bell, occurs simultaneously, the sound alone can induce salivation after several trials. This response, known as a conditioned reflex, is a form of learning. The behaviour of the animal is no longer limited by fixed, inherited reflex arcs but can be modified by experience and exposure to an unlimited number of stimuli. The most evolved nervous systems are capable of even higher associative functions such as thinking and memory. The complex manipulation of the signals necessary for these functions depends to a great extent on the number and intricacy of the arrangement of interneurons.

The nerve cell

The watershed of all studies of the nervous system was an observation made in 1889 by Spanish scientist Santiago Ramón y Cajal, who reported that the nervous system is composed of individual units that are structurally independent of one another and whose internal contents do not come into direct contact. According to his hypothesis, now known as the neuron theory, each nerve cell communicates with others through contiguity rather than continuity. That is, communication between adjacent but separate cells must take place across the space and barriers separating them. It has since been proved that Cajal’s theory is not universally true, but his central idea—that communication in the nervous system is largely communication between independent nerve cells—has remained an accurate guiding principle for all further study.

There are two basic cell types within the nervous system: neurons and neuroglial cells.

The neuron

In the human brain there are an estimated 85 billion to 200 billion neurons. Each neuron has its own identity, expressed by its interactions with other neurons and by its secretions; each also has its own function, depending on its intrinsic properties and location as well as its inputs from other select groups of neurons, its capacity to integrate those inputs, and its ability to transmit the information to another select group of neurons.

With few exceptions, neurons consist of three distinct regions: (1) the cell body, or soma; (2) the nerve fibre, or axon; and (3) the receiving processes, or dendrites.


Plasma membrane:

The neuron is bound by a plasma membrane, a structure so thin that its fine detail can be revealed only by high-resolution electron microscopy. About half of the membrane is the lipid bilayer, two sheets of mainly phospholipids with a space between. One end of a phospholipid molecule is hydrophilic, or water attracting, and the other end is hydrophobic, or water repelling. The bilayer structure results when the hydrophilic ends of the phospholipid molecules in each sheet turn toward the watery media of both the cell interior and the extracellular environment, while the hydrophobic ends turn in toward the space between the sheets. These lipid layers are not rigid structures; the loosely bonded phospholipid molecules can move laterally across the surfaces of the membrane, and the interior is in a highly liquid state.

Most of the cell body, or soma, is occupied by the nucleus, which contains a nucleolus. The double membrane of the nucleus is surrounded by cytoplasm, which contains elements of the Golgi apparatus (typically lying at the base of the apical dendrite), dispersed mitochondria, and the rough endoplasmic reticulum. The axon emerges from the soma at the axon hillock, or initial segment, and synapses from other neurons may impinge on the neuron close to the axon hillock.

Embedded within the lipid bilayer are proteins, which also float in the liquid environment of the membrane. These include glycoproteins containing polysaccharide chains, which function, along with other carbohydrates, as adhesion sites and recognition sites for attachment and chemical interaction with other neurons. The proteins provide another basic and crucial function: those which penetrate the membrane can exist in more than one conformational state, or molecular shape, forming channels that allow ions to pass between the extracellular fluid and the cytoplasm, or internal contents of the cell. In other conformational states, they can block the passage of ions. This action is the fundamental mechanism that determines the excitability and pattern of electrical activity of the neuron.

A complex system of proteinaceous intracellular filaments is linked to the membrane proteins. This cytoskeleton includes thin neurofilaments containing actin, thick neurofilaments similar to myosin, and microtubules composed of tubulin. The filaments are probably involved with movement and translocation of the membrane proteins, while microtubules may anchor the proteins to the cytoplasm.


Each neuron contains a nucleus defining the location of the soma. The nucleus is surrounded by a double membrane, called the nuclear envelope, that fuses at intervals to form pores allowing molecular communication with the cytoplasm. Within the nucleus are the chromosomes, the genetic material of the cell, through which the nucleus controls the synthesis of proteins and the growth and differentiation of the cell into its final form. Proteins synthesized in the neuron include enzymes, receptors, hormones, and structural proteins for the cytoskeleton.


The endoplasmic reticulum (ER) is a widely spread membrane system within the neuron that is continuous with the nuclear envelope. It consists of a series of tubules, flattened sacs called cisternae, and membrane-bound spheres called vesicles. There are two types of ER. The rough endoplasmic reticulum (RER) has rows of knobs called ribosomes on its surface. Ribosomes synthesize proteins that, for the most part, are transported out of the cell. The RER is found only in the soma. The smooth endoplasmic reticulum (SER) consists of a network of tubules in the soma that connects the RER with the Golgi apparatus. The tubules can also enter the axon at its initial segment and extend to the axon terminals.

The Golgi apparatus is a complex of flattened cisternae arranged in closely packed rows. Located close to and around the nucleus, it receives proteins synthesized in the RER and transferred to it via the SER. At the Golgi apparatus, the proteins are attached to carbohydrates. The glycoproteins so formed are packaged into vesicles that leave the complex to be incorporated into the cell membrane.


The axon arises from the soma at a region called the axon hillock, or initial segment. This is the region where the plasma membrane generates nerve impulses; the axon conducts these impulses away from the soma or dendrites toward other neurons. Large axons acquire an insulating myelin sheath and are known as myelinated, or medullated, fibres. Myelin is composed of 80 percent lipid and 20 percent protein; cholesterol is one of the major lipids, along with variable amounts of cerebrosides and phospholipids. Concentric layers of these lipids separated by thin layers of protein give rise to a high-resistance, low-capacitance electrical insulator interrupted at intervals by gaps called nodes of Ranvier, where the nerve membrane is exposed to the external environment. In the central nervous system the myelin sheath is formed from glial cells called oligodendrocytes, and in peripheral nerves it is formed from Schwann cells (see below The neuroglia).

While the axon mainly conducts nerve impulses from the soma to the terminal, the terminal itself secretes chemical substances called neurotransmitters. The synthesis of these substances can occur in the terminal itself, but the synthesizing enzymes are formed by ribosomes in the soma and must be transported down the axon to the terminal. This process is known as axoplasmic flow; it occurs in both directions along the axon and may be facilitated by microtubules.

At the terminal of the axon, and sometimes along its length, are specialized structures that form junctions with other neurons and with muscle cells. These junctions are called synapses. Presynaptic terminals, when seen by light microscope, look like small knobs and contain many organelles. The most numerous of these are synaptic vesicles, which, filled with neurotransmitters, are often clumped in areas of the terminal membrane that appear to be thickened. The thickened areas are called presynaptic dense projections, or active zones.

The presynaptic terminal is unmyelinated and is separated from the neuron or muscle cell onto which it impinges by a gap called the synaptic cleft, across which neurotransmitters diffuse when released from the vesicles. In nerve-muscle junctions the synaptic cleft contains a structure called the basal lamina, which holds an enzyme that destroys neurotransmitters and thus regulates the amount that reaches the postsynaptic receptors on the receiving cell. Most knowledge of postsynaptic neurotransmitter receptors comes from studies of the receptor on muscle cells. This receptor, called the end plate, is a glycoprotein composed of five subunits. Other neurotransmitter receptors do not have the same structure, but they are all proteins and probably have subunits with a central channel that is activated by the neurotransmitter.

While the chemically mediated synapse described above forms the majority of synapses in vertebrate nervous systems, there are other types of synapses in vertebrate brains and, in especially great numbers, in invertebrate and fish nervous systems. At these synapses there is no synaptic gap; instead, there are gap junctions, direct channels between neurons that establish a continuity between the cytoplasm of adjacent cells and a structural symmetry between the pre- and postsynaptic sites. Rapid neuronal communication at these junctions is probably electrical in nature. (For further discussion, see below Transmission at the synapse.)


Besides the axon, neurons have other branches called dendrites that are usually shorter than axons and are unmyelinated. Dendrites are thought to form receiving surfaces for synaptic input from other neurons. In many dendrites these surfaces are provided by specialized structures called dendritic spines, which, by providing discrete regions for the reception of nerve impulses, isolate changes in electrical current from the main dendritic trunk.

The traditional view of dendritic function presumes that only axons conduct nerve impulses and only dendrites receive them, but dendrites can form synapses with other dendrites and with axons, and even somata can receive impulses. Indeed, some neurons have no axon; in these cases nervous transmission is carried out by the dendrites.

The neuroglia

Neurons form a minority of the cells in the nervous system. Exceeding them in number by at least 10 to 1 are neuroglial cells, which exist in the nervous systems of invertebrates as well as vertebrates. Neuroglia can be distinguished from neurons by their lack of axons and by the presence of only one type of process. In addition, they do not form synapses, and they retain the ability to divide throughout their life span. While neurons and neuroglia lie in close apposition to one another, there are no direct junctional specializations, such as gap junctions, between the two types. Gap junctions do exist between neuroglial cells.

Types of neuroglia

Apart from conventional histological and electron-microscopic techniques, immunologic techniques are used to identify different neuroglial cell types. By staining the cells with antibodies that bind to specific protein constituents of different neuroglia, neurologists have been able to discern two (in some opinions, three) main groups of neuroglia: (1) astrocytes, subdivided into fibrous and protoplasmic types; (2) oligodendrocytes, subdivided into interfascicular and perineuronal types; and sometimes (3) microglia.

Fibrous astrocytes are prevalent among myelinated nerve fibres in the white matter of the central nervous system. Organelles seen in the somata of neurons are also seen in astrocytes, but they appear to be much sparser. These cells are characterized by the presence of numerous fibrils in their cytoplasm. The main processes exit the cell in a radial direction (hence the name astrocyte, meaning “star-shaped cell”), forming expansions and end feet at the surfaces of vascular capillaries.

Unlike fibrous astrocytes, protoplasmic astrocytes occur in the gray matter of the central nervous system. They have fewer fibrils within their cytoplasm, and cytoplasmic organelles are sparse, so that the somata are shaped by surrounding neurons and fibres. The processes of protoplasmic astrocytes also make contact with capillaries.

Oligodendrocytes have few cytoplasmic fibrils but a well-developed Golgi apparatus. They can be distinguished from astrocytes by the greater density of both the cytoplasm and the nucleus, the absence of fibrils and of glycogen in the cytoplasm, and large numbers of microtubules in the processes. Interfascicular oligodendrocytes are aligned in rows between the nerve fibres of the white matter of the central nervous system. In gray matter, perineuronal oligodendrocytes are located in close proximity to the somata of neurons. In the peripheral nervous system, neuroglia that are equivalent to oligodendrocytes are called Schwann cells.

Microglial cells are small cells with dark cytoplasm and a dark nucleus. It is uncertain whether they are merely damaged neuroglial cells or occur as a separate group in living tissue.

Neuroglial functions

The term neuroglia means “nerve glue,” and these cells were originally thought to be structural supports for neurons. This is still thought to be plausible, but other functions of the neuroglia are now generally accepted. Oligodendrocytes and Schwann cells produce the myelin sheath around neuronal axons. Some constituent of the axonal surface stimulates Schwann cell proliferation; the type of axon determines whether there is loose or tight myelination of the axon. In tight myelination a glial cell wraps itself like a rolled sheet around a length of axon until the fibre is covered by several layers. Between segments of myelin wrapping are exposed sections called nodes of Ranvier, which are important in the transmission of nerve impulses. Myelinated nerve fibres are found only in vertebrates, leading biologists to conclude that they are an adaptation to transmission over relatively long distances.

Another well-defined role of neuroglial cells is the repair of the central nervous system following injury. Astrocytes divide after injury to the nervous system and occupy the spaces left by injured neurons. The role of oligodendrocytes after injury is unclear, but they may proliferate and form myelin sheaths.

When neurons of the peripheral nervous system are severed, they undergo a process of degeneration followed by regeneration; fibres regenerate in such a way that they return to their original target sites. Schwann cells that remain after nerve degeneration apparently determine the route. This route direction is also performed by astrocytes during development of the central nervous system. In the developing cerebral cortex and cerebellum of primates, astrocytes project long processes to certain locations, and neurons migrate along these processes to arrive at their final locations. Thus, neuronal organization is brought about to some extent by the neuroglia.

Astrocytes are also thought to have high-affinity uptake systems for neurotransmitters such as glutamate and gamma-aminobutyric acid (GABA). This function is important in the modulation of synaptic transmission. Uptake systems tend to terminate neurotransmitter action at the synapses and may also act as storage systems for neurotransmitters when they are needed. For instance, when motor nerves are severed, nerve terminals degenerate and their original sites are occupied by Schwann cells. The synthesis of neurotransmitters by neurons apparently also requires the presence of neuroglial cells in the vicinity.

Finally, the environment surrounding neurons in the brain consists of a network of very narrow extracellular clefts. In 1907 Italian biologist Emilio Lugaro suggested that neuroglial cells exchange substances with the extracellular fluid and in this way exert control on the neuronal environment. It has since been shown that glucose, amino acids, and ions—all of which influence neuronal function—are exchanged between the extracellular space and neuroglial cells. After high levels of neuronal activity, for instance, neuroglial cells can take up and spatially buffer potassium ions and thus maintain normal neuronal function.

Transmission of information in the nervous system

In the nervous system of animals at all levels of the evolutionary scale, the signals containing information about a particular stimulus are electrical in nature. In the past the nerve fibre and its contents were compared to metal wire, while the membrane was compared to insulation around the wire. This comparison was erroneous for a number of reasons. First, the charge carriers in nerves are ions, not electrons, and the density of ions in the axon is much less than that of electrons in a metal wire. Second, the membrane of an axon is not a perfect insulator, so that the movement of current along the axon is not complete. Finally, nerve fibres are smaller than most wires, so that the currents they can carry are limited in amplitude.


Our nervous system is divided into two components: the central nervous system (CNS), which includes the brain and spinal cord, and the peripheral nervous system (PNS), which encompasses the nerves outside the brain and spinal cord. These two components cooperate at all times to sustain our vital functions: we are nothing without our nervous system!

Unlike the brain and the spinal cord of the central nervous system that are protected by the vertebrae and the skull, the nerves and cells of the peripheral nervous system are not enclosed by bones, and therefore are more susceptible to trauma.

If we consider the entire nervous system as an electric grid, the central nervous system would represent the powerhouse, whereas the peripheral nervous system would represent long cables that connect the powerhouse to the outlying cities (limbs, glands and organs) to bring them electricity and send information back about their status.

Basically, signals from the brain and spinal cord are relayed to the periphery by motor nerves, telling the body to move or to carry out resting functions (like breathing, salivating and digesting). The peripheral nervous system sends a status report back to the brain by relaying information via sensory nerves.

As with the central nervous system, the basic cell units of the peripheral nervous system are neurons. Each neuron has a long process, known as the axon, which transmits the electrochemical signals through which neurons communicate.

Axons of the peripheral nervous system run together in bundles called fibres, and multiple fibres form the nerve, the cable of the electric circuit. The nerves, which also contain connective tissue and blood vessels, reach out to the muscles, glands and organs in the entire body.

Nerves of the peripheral nervous system are classified based on the types of neurons they contain - sensory, motor or mixed nerves (if they contain both sensory and motor neurons), as well as the direction of information flow – towards or away from the brain.

The afferent nerves, from the Latin "afferre", meaning "to bring towards", contain neurons that bring information to the central nervous system. These afferent neurons are sensory neurons, whose role is to receive sensory input – hearing, vision, smell, taste and touch – and pass the signal on to the CNS, which encodes the appropriate sensation.

Afferent neurons also have another important, subconscious function: the peripheral nervous system brings information to the central nervous system about the inner state of the organs (homeostasis), providing feedback on their condition without the need for us to be consciously aware of it. For example, afferent nerves communicate to the brain the energy intake of various organs.

The efferent nerves, from the Latin "efferre", meaning "to carry away from", contain efferent neurons that transmit signals originating in the central nervous system to the organs and muscles, putting the brain's orders into action. For example, motor neurons (efferent neurons) contact the skeletal muscles to execute the voluntary movement of raising your arm and wiggling your hand.

Peripheral nerves often extend a great length from the central nervous system to reach the periphery of the body. The longest nerve in the human body, the sciatic nerve, originates around the lumbar region of the spine, and its branches reach to the tips of the toes, measuring a meter or more in an average adult.

Importantly, injuries can occur at any point along a peripheral nerve and can break the connection between the "powerhouse" and its "cities", resulting in a loss of function in the parts of the body that the nerve supplies. It is therefore of great interest for scientists to understand how the nerves, and the axonal structures within them, are protected from the constant mechanical stresses exerted on them. Work in this area of biology is carried out by Dr. Sean Coakley, in the laboratory of A/Prof Massimo Hilliard.

The peripheral nervous system can be divided into somatic, autonomic and enteric nervous systems, determined by the function of the parts of the body they connect to.


It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi. 

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1183 2021-11-07 00:39:10

Registered: 2005-06-28
Posts: 34,872

Re: Miscellany

1160) Somniloquy

Somniloquy, commonly referred to as sleep-talking, is a parasomnia that refers to talking aloud while asleep. It can range from simple mumbling sounds to loud shouts and long, frequently inarticulate speeches, and can occur many times during a sleep cycle.

As with sleepwalking and night terrors, sleep-talking usually occurs during delta-wave NREM sleep stages or during temporary arousals from them. It can also occur during the REM (rapid eye movement) sleep stage, at which time it represents what sleep therapists call a motor breakthrough (see sleep paralysis) of dream speech: words spoken in a dream are spoken out loud. Depending on its frequency, this may or may not be considered pathological. Because all motor functions are typically disabled during REM sleep, such motoric (i.e., verbal) elaboration of dream content could be considered a REM behavior disorder.

Have you been told that you whisper sweet nothings in your sleep -- unaware that you ever spoke a word? Or, maybe your child shouts out streams of babble late at night -- only to fall right back to sleep. Have you been hoping your sleep-talking spouse will spill a long-time secret? Go ahead. Pose a question while they are sleeping, and don't be surprised if you get a single-syllable answer! But be warned: A sleep talker usually doesn't remember anything that's said during sleep.

Talking in your sleep can be a funny thing. Perhaps you chitchat unconsciously with unseen associates at the midnight hour. Or maybe a family member unknowingly carries on nightly conversations. Here are answers to your questions about talking in your sleep -- what you need to know about sleep talking, from causes to treatments.

What is sleep talking?

Sleep talking, or somniloquy, is the act of speaking during sleep. It's a type of parasomnia -- an abnormal behavior that takes place during sleep. It's a very common occurrence and is not usually considered a medical problem.

The nighttime chatter may be harmless, or it could be graphic, even R rated. Sometimes, listeners find the content offensive or vulgar. Sleep talkers normally speak for no more than 30 seconds per episode, but some people sleep talk many times during a night.

The late-night diatribes may be exceptionally eloquent, or the words may be mumbled and hard to decipher. Sleep talking may involve simple sounds or long, involved speeches. Sleep talkers usually seem to be talking to themselves. But sometimes, they appear to carry on conversations with others. They may whisper, or they might shout. If you share a bedroom with someone who talks in their sleep, you might not be getting enough shut-eye.

Who talks in their sleep?

Many people talk in their sleep. Half of all kids between the ages of 3 and 10 years old carry on conversations while asleep, and a small number of adults -- about 5% -- keep chit-chatting after they go to bed. The utterances can take place occasionally or every night. A 2004 poll showed that more than 1 in 10 young children converse in their sleep more than a few nights a week.

Girls talk in their sleep as much as boys. And experts think that sleep talking may run in families.

What are the symptoms of talking in your sleep?

It's hard to tell if you've been talking in your own sleep. Usually, people will tell you they've heard you shout out during the night or while you were napping. Or maybe someone might complain that your sleep talking is keeping them up all night.

What causes sleep talking?

You might think that sleep talking occurs during dreaming. But scientists still are not sure if such chatter is linked to nighttime reveries. The talking can occur in any stage of sleep.

Sleep talking usually occurs by itself and is most often harmless. However, in some cases, it might be a sign of a more serious sleep disorder or health condition.

REM sleep behavior disorder (RBD) and sleep terrors are two types of sleep disorders that cause some people to shout during sleep. Sleep terrors, also called night terrors, usually involve frightening screams, thrashing, and kicking. It's hard to wake someone having a sleep terror. Children with sleep terrors usually sleep talk and sleepwalk.

People with RBD yell, shout, grunt, and act out their dreams, often violently.

Sleep talking can also occur with sleepwalking and nocturnal sleep-related eating disorder (NS-RED), a condition in which a person eats while asleep.

Other things that can cause sleep talking include:

* Certain medications
* Emotional stress
* Fever
* Mental health disorder
* Substance abuse

How is talking in your sleep treated?

It is a good idea to see a sleep specialist if your sleep talking occurs suddenly as an adult or if it involves intense fear, screaming, or violent actions. You might also consider seeing a doctor if unconscious chatter is interfering with your sleep -- or that of your roommates.

If you think your child has sleep problems, make an appointment with your pediatrician.

A sleep specialist will ask you how long you've been talking in your sleep. You'll have to ask your bed partner, roommate -- even your parents -- this question. Keep in mind, you may have started sleep talking in childhood.

There are no tests needed to diagnose sleep talking. However, your doctor may order tests, such as a sleep study or sleep recording (polysomnogram), if you have signs of another sleep disorder.

Sleep talking rarely requires treatment. However, severe sleep talking may be the result of another more serious sleep disorder or medical condition, which can be treated. Talk to your doctor about your treatment options.

How can someone reduce their amount of sleep talking?

There is no known way to reduce sleep talking. Avoiding stress and getting plenty of sleep might make you less likely to talk in your sleep.

Keeping a sleep diary can help identify your sleep patterns and may help your doctor find out if an underlying problem is causing your sleep talking. Keep a sleep diary for two weeks. Note the times you go to bed, when you think you fell asleep, and when you woke up. You'll also want to write down the following:

* the medicines you take, and the time of day you take them
* what you drink each day and when, especially caffeinated drinks such as cola, tea, and coffee, as well as alcohol
* when you exercise
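A two-week diary like the one suggested above can be kept as simple structured records. A minimal sketch follows; the field names are invented for illustration, so adapt them to whatever you actually track.

```python
# One night's sleep-diary record as a small data structure (illustrative only).
from dataclasses import dataclass, field

@dataclass
class SleepDiaryEntry:
    date: str                  # e.g. "2021-11-01"
    bedtime: str               # when you went to bed
    est_sleep_time: str        # when you think you fell asleep
    wake_time: str             # when you woke up
    medicines: list = field(default_factory=list)  # medicine + time taken
    drinks: list = field(default_factory=list)     # caffeine/alcohol + time
    exercise: str = ""         # when you exercised

entry = SleepDiaryEntry(
    date="2021-11-01", bedtime="22:30",
    est_sleep_time="23:00", wake_time="06:45",
    drinks=["coffee 08:00", "tea 15:00"],
)
print(entry.drinks)  # ['coffee 08:00', 'tea 15:00']
```

Two weeks of such entries give a doctor a consistent picture of sleep patterns and possible triggers.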

What is sleep talking?

Sleep talking is actually a sleep disorder known as somniloquy. Doctors don’t know a lot about sleep talking, like why it happens or what occurs in the brain when a person sleep talks. The sleep talker isn’t aware that they’re talking and won’t remember it the next day.

If you’re a sleep talker, you may talk in full sentences, speak gibberish, or talk in a voice or language different from what you’d use while awake. Sleep talking appears to be harmless.

Stage and severity

Sleep talking is defined by both stages and severity:

Stages 1 and 2: In these stages, the sleep talker isn't in as deep a sleep as in stages 3 and 4, and their speech is easier to understand. A sleep talker in stage 1 or 2 can have entire conversations that make sense.

Stages 3 and 4: The sleep talker is in a deeper sleep, and their speech is usually harder to understand. It may sound like moaning or gibberish.

Sleep talk severity is determined by how frequently it occurs:

Mild: Sleep talk happens less than once a month.
Moderate: Sleep talk occurs once a week, but not every night. The talking doesn’t interfere much with the sleep of other people in the room.
Severe: Sleep talking happens every night and may interfere with the sleep of other people in the room.
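The severity scale above can be written as a small lookup function. The thresholds follow the text (less than once a month / about weekly but not nightly / every night), but the numeric cut-offs are an illustrative interpretation, not clinical criteria.

```python
# Sketch of the sleep-talk severity scale (thresholds are illustrative).

def sleep_talk_severity(nights_per_month):
    if nights_per_month < 1:
        return "mild"      # happens less than once a month
    elif nights_per_month < 30:
        return "moderate"  # e.g. about once a week, but not every night
    else:
        return "severe"    # happens every night

print(sleep_talk_severity(4))  # moderate
```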

Who is at increased risk

Sleep talking can happen to anyone at any time, but it appears to be more common in children and men. There may also be a genetic link to sleep talking. So if you have parents or other family members who talked a lot in their sleep, you may be at risk too. Likewise, if you talk in your sleep and you have children, you may notice that your children talk in their sleep too.

Sleep talking can increase at certain times in your life and may be triggered by:

* sickness
* fever
* drinking alcohol
* stress
* mental health conditions, such as depression
* sleep deprivation

People with other sleep disorders are also at an increased risk for sleep talking, including people with a history of:

* sleep apnea
* sleep walking
* night terrors or nightmares

When to see a doctor

Sleep talking usually isn’t a serious medical condition, but there are times when it might be appropriate to see a doctor.

If your sleep talking is so extreme that it’s interfering with your quality of sleep or if you’re overly exhausted and can’t concentrate during the day, talk to your doctor. In rare situations, sleep talking can occur with more serious problems, like a psychiatric disorder or nighttime seizures.

If you suspect that your sleep talking is a symptom of another, more serious sleep disorder, such as sleep walking or sleep apnea, it’s helpful to see a doctor for a full examination. If you start sleep talking for the first time after the age of 25, schedule an appointment with a doctor. Sleep talking later in life may be caused by an underlying medical condition.


There’s no known treatment for sleep talking, but a sleep expert or a sleep center may be able to help you manage your condition. A sleep expert can also help make sure your body is getting the rest it needs at night.

If you have a partner who’s bothered by your sleep talking, it might also be helpful to talk to a professional about how to manage both of your sleep needs. Some things you may want to try are:

* sleeping in different beds or rooms
* having your partner wear ear plugs
* using a white noise machine in your room to drown out any talking

Lifestyle changes such as the following may also help control your sleep talking:

* avoiding drinking alcohol
* avoiding heavy meals close to bedtime
* setting up a regular sleep schedule with nighttime rituals to coax your brain into sleep


Sleep talking is a harmless condition that is more common in children and men and may occur at certain periods in your life. It requires no treatment, and most of the time sleep talking will resolve on its own. It can be a chronic or temporary condition. It also may go away for many years and then reoccur.

Talk to your doctor if sleep talking is interfering with your or your partner’s sleep.


It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi. 

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1184 2021-11-08 00:15:14

Registered: 2005-06-28
Posts: 34,872

Re: Miscellany

1161) Cynophobia

Cynophobia is the fear of dogs. Like all specific phobias, cynophobia is intense, persistent, and irrational. According to a recent diagnostic manual, between 7% and 9% of any community may suffer from a specific phobia.

A phobia goes beyond mild discomfort or situational fear. It is not just fear in response to a particular situation. Instead, specific phobias interfere with daily life and can cause serious physical and emotional distress. You can often manage or treat cynophobia with medication or psychotherapy.

What Are the Symptoms of Cynophobia?

Cynophobia and other phobias related to animals are often diagnosed through the use of questionnaires and clinical interviews. For example, one snake phobia questionnaire presents a set of 12 statements about your reaction to snakes and asks you to agree or disagree with each statement.

In order to diagnose cynophobia, a doctor would evaluate your behavior and emotional responses concerning dogs. Symptoms of phobias may include any of the following:

* Sweating
* Trembling
* Difficulty breathing
* Rapid heartbeat
* Nausea
* Dizziness
* A feeling of danger
* Fear of losing control
* A fear of dying
* A sense of things being unreal
* Excessive avoidance or anxiety

If you regularly have any of these symptoms in relation to dogs, you may want to talk to your doctor or a licensed therapist about it.

At their most serious, specific phobias can lead to other problems. If you start struggling with any of these, contact your doctor for help:

* Social isolation
* Anxiety disorders or depression
* Substance abuse
* Thoughts of ending life

What Causes Cynophobia?

Specific phobias often appear in childhood. However, adults can develop them as well. No one knows exactly what makes someone develop a specific phobia. Potential causes include:

Traumatic experiences: For example, someone may develop a fear of dogs after being attacked by one.

Family tendencies: Either genetics or environment can play a role in the development of phobias. If someone in your family has a phobia, you are more likely to develop it as well.

Changes in brain function: Some people appear to develop phobias as a result of neurological disorders or physical trauma.

How to Treat or Manage Cynophobia

Several forms of therapy have helped people with cynophobia. Consult your doctor or a licensed mental health professional to find the right treatment or combination of treatments.

Exposure therapy. The most common treatment for specific phobias is exposure therapy. This is also called desensitization. In simple terms, persons undergoing exposure therapy practice interacting with the objects that they fear.

To treat cynophobia, some therapists suggest that you gradually increase both the closeness and length of your exposure. You could start by watching programs that feature dogs or watching dogs from a distance. Then, you work up to spending periods of time with dogs in person.

Another form of exposure therapy with some proven success is called active-imaginal exposure. In this style of treatment, you would vividly imagine interacting with dogs and practice using certain techniques to manage your feelings in response.

More recently, many therapists have had success with virtual reality exposure. Both sound and sight elements are combined in a virtual reality experience. This gives the person practice being around dogs in a safe and controlled environment.

Cognitive-behavioral therapy (CBT). Cognitive-behavioral therapy is also used to treat specific phobias. It generally includes exposure therapy. In addition, it emphasizes learning to retrain the brain and reframe negative experiences.

The goal of cognitive behavioral therapy is to develop a sense of control over your thoughts and emotions. The therapist aims to help you gain confidence in your ability to handle difficult situations.

Medications. The impact of drugs on specific phobias has been inconsistent. They appear to work best when used with exposure therapy instead of on their own. However, some anti-anxiety medications such as beta-blockers and sedatives can help you treat the physical symptoms of severe attacks.

More recently, researchers have discovered that a steroid called glucocorticoid can successfully decrease the physical symptoms associated with the anxiety connected to specific phobias. This includes the fear of dogs.

As far as Zoophobia (fear of animals) is concerned, the fear of dogs or Cynophobia is not as common as the fear of snakes or spiders. However, it is important to note that people who fear dogs are also highly likely to encounter them in their day to day lives. This makes the phobic avoid all kinds of situations involving dogs. As a result, his/her social, familial or occupational activities can be negatively impacted.

Causes of fear of dogs

The fear of dogs is known to be quite common owing to the historic association between dogs and wolves. As a result, most Cynophobics generally fear large and vicious-looking dogs, though, in extreme cases, one might even fear small and harmless puppies.

In reality, dogs are considered loyal and faithful companions that are capable of forming close ties with humans. However, to a phobic, owing to a prior bad experience with dogs, all canines appear dangerous or evil.

Parents might unknowingly instill a fear of dogs in their child by warning them against petting or approaching dogs. Thus, the negative experience one has had with a dog in the past might not necessarily be a direct one: having watched a sibling or a close friend getting attacked or barked at by a dog can also sometimes result in an excessive fear of dogs.

Cynophobic individuals are often afraid of the bark or the growling sound made by dogs rather than just their bites.

Symptoms of Cynophobia

As with most other phobias, Cynophobia can cause the sufferers to feel terribly anxious and frightened. This can lead to different physical and psychological symptoms such as:

Physical symptoms

* Dizziness, feeling faint or disoriented
* Excess sweating
* Shaking and trembling
* Nausea and gastrointestinal distress
* Dry mouth, feeling of choking or difficulty in swallowing
* Freezing
* Running away
* Crying

Psychological symptoms

* Having thoughts of dying
* Feeling like losing control or going crazy
* Inability to distinguish between reality and unreality
* Trying to avoid situations which bring confrontation with a dog

These symptoms might be present days before an actual confrontation with a dog and the individual might go to great lengths to avoid it.

Diagnosis and treatment of Cynophobia

Many people are afraid of dogs; hence diagnosis of Cynophobia includes determining if the fear is persistent or triggers an immediate anxiety response. To be categorized as Cynophobia, one’s fear of dogs would also be required to interfere with social, familial or occupational activities.

Therapy and self help techniques can be used for treating Cynophobia.

The most popular and effective technique for treating phobias is the systematic desensitization technique developed by Joseph Wolpe in 1958. It involves having the patient imagine being in the same room with a dog while employing specific breathing and relaxation techniques to reduce one’s anxiety.

In-vivo or exposure therapy can also help one get rid of one’s fear of dogs. This therapy involves a prolonged exposure to a dog until the patient can have a normal response to the animal.

Exposure therapy can also be utilized as a self-help technique, wherein the patient gradually exposes himself to canines: looking at photos, then progressing to petting a dog, etc. This type of gradual exposure can help one realize that his/her fears are unfounded.


It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi. 

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1185 2021-11-09 00:04:44

Registered: 2005-06-28
Posts: 34,872

Re: Miscellany

1162) NASA

The National Aeronautics and Space Administration is an independent agency of the U.S. federal government responsible for the civilian space program, as well as aeronautics and space research.

NASA was established in 1958, succeeding the National Advisory Committee for Aeronautics (NACA). The new agency was to have a distinctly civilian orientation, encouraging peaceful applications in space science. Since its establishment, most US space exploration efforts have been led by NASA, including the Apollo Moon landing missions, the Skylab space station, and later the Space Shuttle. NASA is supporting the International Space Station and is overseeing the development of the Orion spacecraft, the Space Launch System, Commercial Crew vehicles, and the planned Lunar Gateway space station. The agency is also responsible for the Launch Services Program, which provides oversight of launch operations and countdown management for uncrewed NASA launches.

NASA's science is focused on better understanding Earth through the Earth Observing System; advancing heliophysics through the efforts of the Science Mission Directorate's Heliophysics Research Program; exploring bodies throughout the Solar System with advanced robotic spacecraft such as New Horizons; and researching astrophysics topics, such as the Big Bang, through the Great Observatories and associated programs.

National Aeronautics and Space Administration (NASA), independent U.S. governmental agency established in 1958 for the research and development of vehicles and activities for the exploration of space within and outside Earth’s atmosphere.

The organization is composed of four mission directorates: Aeronautics Research, for the development of advanced aviation technologies; Science, dealing with programs for understanding the origin, structure, and evolution of the universe, the solar system, and Earth; Space Technology, for the development of space science and exploration technologies; and Human Exploration and Operations, concerning the management of crewed space missions, including those to the International Space Station, as well as operations related to launch services, space transportation, and space communications for both crewed and robotic exploration programs. A number of additional research centres are affiliated, including the Goddard Space Flight Center in Greenbelt, Maryland; the Jet Propulsion Laboratory in Pasadena, California; the Johnson Space Center in Houston, Texas; and the Langley Research Center in Hampton, Virginia. Headquarters of NASA are in Washington, D.C.

NASA was created largely in response to the Soviet launching of Sputnik in 1957. It was organized around the National Advisory Committee for Aeronautics (NACA), which had been created by Congress in 1915. NASA’s organization was well under way by the early years of Pres. John F. Kennedy’s administration when he proposed that the United States put a man on the Moon by the end of the 1960s. To that end, the Apollo program was designed, and in 1969 the U.S. astronaut Neil Armstrong became the first person on the Moon. Later, uncrewed programs—such as Viking, Mariner, Voyager, and Galileo—explored other bodies of the solar system.

NASA was also responsible for the development and launching of a number of satellites with Earth applications, such as Landsat, a series of satellites designed to collect information on natural resources and other Earth features; communications satellites; and weather satellites. It also planned and developed the space shuttle, a reusable vehicle capable of carrying out missions that could not be conducted with conventional spacecraft.


It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi. 

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1186 2021-11-10 00:42:03

Registered: 2005-06-28
Posts: 34,872

Re: Miscellany

1163) Nyctophobia

Fear of the dark is a common fear or phobia among children and, to a varying degree, adults. A fear of the dark does not always concern darkness itself; it can also be a fear of possible or imagined dangers concealed by darkness. Some degree of fear of the dark is natural, especially as a phase of child development. Most observers report that fear of the dark seldom appears before the age of 2 years. When fear of the dark reaches a degree that is severe enough to be considered pathological, it is sometimes called scotophobia ("darkness"), or lygophobia ("twilight").

Some researchers, beginning with Sigmund Freud, consider the fear of the dark to be a manifestation of separation anxiety disorder.

An alternate theory was posited in the 1960s, when scientists conducted experiments in a search for molecules responsible for memory. In one experiment, rats, normally nocturnal animals, were conditioned to fear the dark and a substance called "scotophobin" was supposedly extracted from the rats' brains; this substance was claimed to be responsible for remembering this fear. These findings were subsequently debunked.


Nyctophobia is a phobia characterized by a severe fear of the dark. It is triggered by the brain's distorted perception of what would, or could, happen in a dark environment. It can also be temporarily triggered if the mind is unsettled or scared about recent events or ideas, or by partaking in content the brain considers a threat (examples include indulging in horror content, witnessing vulgar actions, or having linked dark environments to prior disturbing events or ideas). Since humans are not nocturnal by nature, they are usually somewhat more cautious or alert at night than in the day, as the dark is a vastly different environment. Nyctophobia produces symptoms beyond these normal instinctive parameters, such as breathlessness, excessive sweating, nausea, dry mouth, feeling sick, shaking, heart palpitations, inability to speak or think clearly, or a sensation of detachment from reality and death. Nyctophobia can be severely detrimental physically and mentally if these symptoms are not resolved. There are many types of therapies to help manage nyctophobia.

Nightlights may be used to counteract fear of the dark.

Exposure therapy can be very effective when exposing the person to darkness. With this method a therapist can help with relaxation strategies such as meditation. Another form of therapy is cognitive behavioral therapy. Therapists can guide patients through behavior routines, performed daily and nightly, to reduce the symptoms associated with nyctophobia. In severe cases, antidepressants and anti-anxiety medications can be effective for those whose symptoms are not manageable through therapy alone.

Despite its pervasive nature, there has been a lack of etiological research on the subject. Nyctophobia is generally observed in children but, according to J. Adrian Williams' article "Indirect Hypnotic Therapy of Nyctophobia: A Case Report", many clinics with pediatric patients have a great chance of having adults who have nyctophobia. The same article states that "the phobia has been known to be extremely disruptive to adult patients and… incapacitating".


Nyctophobia is an extreme fear of night or darkness that can cause intense symptoms of anxiety and depression. A fear becomes a phobia when it’s excessive, irrational, or impacts your day-to-day life.

Being afraid of the dark often starts in childhood and is viewed as a normal part of development. Studies focused on this phobia have shown that humans often fear the dark for its lack of any visual stimuli. In other words, people may fear night and darkness because they cannot see what’s around them.

While some fear is normal, when it starts to impact daily life and sleep patterns, it may be time to visit your doctor.


The symptoms you may experience with nyctophobia are much like those you would experience with other phobias. People with this phobia experience extreme fear that causes distress when they’re in the dark. Symptoms may interfere with daily activities and school or work performance. They may even lead to health issues.

Different phobias share similar symptoms. These signs may be either physical or emotional. With nyctophobia, symptoms may be triggered by being in the dark or even thinking about situations where you’d find yourself in the dark.

Physical symptoms include:

* trouble breathing
* racing heart rate
* chest tightness or pain
* shaking, trembling, or tingling sensations
* lightheadedness or dizziness
* upset stomach
* hot or cold flashes
* sweating

Emotional symptoms include:

* overwhelming feelings of anxiety or panic
* an intense need to escape the situation
* detachment from self or feeling “unreal”
* losing control or feeling crazy
* feeling like you may die or lose consciousness
* feeling powerless over your fear

Risk factors

Fear of darkness and night often starts in childhood between the ages of 3 and 6. At this point, it may be a normal part of development. It’s also common at this age to fear:

* ghosts
* monsters
* sleeping alone
* strange noises

For many children, sleeping with a nightlight helps until they outgrow the fear. When the fear makes it impossible to sleep, causes severe anxiety, or continues into adulthood, it may be considered nyctophobia.

Additional risk factors include:

An anxious caregiver. Some children learn to be fearful by seeing a parent’s anxiety over certain issues.
An overprotective caregiver. Some may develop a generalized anxiety if they’re too dependent on a parent or caregiver, or if they feel helpless.
Stressful events. Trauma, such as a motor vehicle accident or injury, may also make a person more likely to develop a phobia.
Genetics. Some adults and children are simply more susceptible to fears, possibly due to their genetics.

Nyctophobia and sleep disorders

Nyctophobia may be associated with a sleep disorder, like insomnia. A small study on college students with insomnia uncovered that nearly half of the students had a fear of the dark. The researchers measured the students’ responses to noises in both light and darkness. Those who had the most trouble sleeping were more easily startled by noise in the dark. Not only that, but the good sleepers actually became used to the noises with time. The students with insomnia grew more and more anxious and anticipatory.


Make an appointment to see a doctor if you or your child:

* have trouble sleeping
* feel particularly anxious or distressed in the dark
* have another reason to believe you may have nyctophobia

Diagnosis involves meeting with your doctor and answering questions about your symptoms.


Some phobias don’t necessarily require treatment, especially if your fear is of something you don’t normally encounter in everyday life, like snakes or spiders. Nyctophobia, on the other hand, can make it very difficult to get enough sleep. That can affect your overall health and lead to sleep disorders like insomnia.

In general, you may consider seeking treatment if:

* your fear makes you feel extreme anxiety or panic
* you feel your fear is excessive or even unreasonable
* you avoid certain situations due to your fear
* you’ve noticed these feelings for six months or longer

Exposure therapy

This treatment exposes people to their fears repeatedly until the thing they fear, such as being in the dark, no longer triggers feelings of anxiety or panic.

There are a couple of ways to be exposed to fears, including visualizing the fear and experiencing the fear in real life. Many treatment plans blend these two approaches. Some exposure-based treatment plans have worked for people in as little as one long session.

Cognitive therapy

This type of therapy helps people identify their feelings of anxiety and replace them with more positive or realistic thoughts.

With nyctophobia, a person may be presented with information to show that being in the dark doesn’t necessarily lead to negative consequences. This type of treatment is usually not used alone to treat phobias.


Relaxation therapy

Relaxation treatment includes things like deep breathing and exercise. It can help people manage the stress and physical symptoms related to their phobias.


Medication

Medication isn’t always an appropriate treatment for people with specific phobias. Unlike medications for other anxiety disorders, there’s little research regarding treating specific phobias with medication.


If you suspect that you or your child has nyctophobia, there are many resources where you might find help. Contacting your doctor or a psychologist is a good first step toward getting treatment.

Many people experience fear related to anything from flying to enclosed spaces. When fear interferes with your everyday life and affects your sleep, especially if it’s been six or more months, let your doctor know. Treatment through cognitive or behavioral therapy can help you overcome your fear and get a better night’s rest.


It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi. 

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1187 2021-11-11 21:17:18

Registered: 2005-06-28
Posts: 34,872

Re: Miscellany

1164) Astraphobia

Astraphobia, also known as astrapophobia, brontophobia, keraunophobia, or tonitrophobia, is an abnormal fear of thunder and lightning, or an unwarranted fear of scattered and/or isolated thunderstorms; it is a type of specific phobia. It is a treatable phobia that both humans and animals can develop.

Signs and symptoms

A person with astraphobia will often feel anxious during a thunderstorm even when they understand that the threat to them is minimal. Some symptoms are those that accompany many phobias, such as trembling, crying, sweating, panicked reactions, a sudden urge to use the bathroom, nausea, a feeling of dread, insertion of the fingers in the ears, and rapid heartbeat. However, some reactions are unique to astraphobia. For instance, reassurance from other people is usually sought, and symptoms worsen when alone. Many people who have astraphobia will look for extra shelter from the storm: they might hide underneath a bed, under the covers, in a closet, in a basement, or in any other space where they feel safer. Efforts are usually made to muffle the sound of the thunder; the person may cover their ears or curtain the windows.

A typical sign that someone has astraphobia is a very heightened interest in weather forecasts. An astraphobic person will be alert for news of incoming storms. They may watch the weather on television constantly during rainy bouts and may even track thunderstorms online. This can become severe enough that the person may not go outside without checking the weather first. This can lead to anxiety. In very extreme cases, astraphobia can lead to agoraphobia, the fear of leaving the home.


In 2007, scientists found that astraphobia was the third most prevalent phobia in the US. It can occur in people of any age. It occurs in many children and should not be immediately identified as a phobia, because children naturally go through many fears as they mature. Their fear of thunder and lightning cannot be considered a fully developed phobia unless it persists for more than six months. In that case, the child's phobia should be addressed, as it may become a serious problem in adulthood.

To lessen a child's fear during thunderstorms, the child can be distracted by games and activities. A bolder approach is to treat the storm as entertainment.


The most widely used, and possibly the most effective, treatment for astraphobia is gradual exposure to thunderstorms, eventually building up a tolerance. Other treatment methods include cognitive behavioral therapy (CBT) and dialectical behavior therapy (DBT). The patient will in many cases be instructed to repeat phrases to himself or herself in order to become calm during a storm. Deep-breathing exercises can reinforce this effort.

Dogs and cats

Dogs may exhibit severe anxiety during thunderstorms; between 15 and 30 percent may be affected.[4] Research confirms that high levels of cortisol, a hormone associated with stress, affect dogs during and after thunderstorms. Remedies include behavioral therapies such as counterconditioning and desensitization, anti-anxiety medications, and dog-appeasing pheromone, a synthetic analogue of a hormone secreted by nursing canine mothers.

Studies have also shown that cats can be afraid of thunderstorms. Whilst it is less common, cats have been known to hide under a table or behind a couch during a thunderstorm.

Generally, if an animal is anxious during a thunderstorm or a similar, practically harmless event (e.g., a fireworks display), it is advised to simply continue behaving normally instead of attempting to comfort the animal.

Fear of Thunder and Lightning Phobia – Astraphobia

Some people actually enjoy adverse weather conditions consisting of rain, lightning or thunder. Some even take great risks to study hurricanes and storm patterns while others simply love to experience rain firsthand, every now and then.

In other cases though, animals and humans alike can develop an extreme fear of thunder, lightning or rainstorms. Such irrational fear of thunder or lightning is known by several names such as Astraphobia, Brontophobia, Tonitrophobia etc.

Causes of Astraphobia

Extremely common in children, excessive fear of thunder or lightning usually diminishes gradually over the years. However, many adults are known to suffer excessively from Astraphobia, mainly due to a prior traumatic event associated with such adverse weather.

In many cases of Astraphobia, the sufferer is known to have experienced an electric shock at a time when there was lightning and thunder outside. This leads to a fear of storms which persists through adulthood.

Many a phobic is also known to fear flooding, which usually results from heavy rain. Such a person might have been negatively impacted by floods, lost a dear one to them, or had property damaged by them.

People who are generally categorized as ‘high strung’ or ‘nervous’ and as having ‘general tendency towards fear and anxiety’ are more likely to develop an excessive fear of thunderstorms, lightning etc.

Symptoms of fear of thunder and lightning

An individual suffering from this phobia will constantly watch the weather channel to ensure all is well. S/he might also install lightning rods on buildings for protection. In the event of an adverse weather forecast, the Astraphobic might panic and experience severe anxiety. A variety of psychological and physical symptoms might be present, including:

* Fainting/passing out for hours
* Sweating, trembling and shaking
* Rapid heart rate, shallow breathing
* Gasping, feeling like being choked
* Hiding in basement, bathroom, closet
* Constantly watching out for signs of storm, gluing oneself to the TV set particularly to the weather channel
* Crying or seeking constant assurance during a storm
* Closing windows, doors and curtains and trying to block out sounds of the storm
* Nausea, vomiting and gastrointestinal distress
* Freezing or refusing to move due to the fear of thunder/lightning striking
* Having thoughts of death

Astraphobia can sometimes lead to Agoraphobia where the individual refuses to leave his home on account of his fear of lightning and thunder.

Diagnosis and treatment for fear of thunder/lightning phobia

Diagnosing Astraphobia requires psychiatric evaluation along with written tests. These tests generally require the sufferer to write down answers to a series of questions related to his/her fears which help the expert reach a conclusion about the phobia.

A combination of medications and psychotherapy can help treat Astraphobia. However, many a phobic has seen good results with self help techniques. These include deep breathing, positive visualizations, meditation and gradual exposure to thunder/lightning etc.

Having a pet or a friend along during thunderstorms is also known to help individuals cope with the anxiety. Most Astraphobics also feel safer in larger buildings such as schools or libraries than in their own homes.

In the case of Astraphobia in young children, it is important that parents soothe their child’s fear by remaining calm themselves. Reassurance and distraction in the form of stories, jokes or music can also help calm a child and ease his or her fear of lightning or thunder. That being said, if the fear has not eased after several months, parents must seek treatment promptly to prevent it from developing into a full-blown phobia of thunderstorms.

Psychotherapy in the form of desensitization, cognitive behavior therapy and virtual reality simulation are a few effective techniques proven beneficial in treating Astraphobia.


It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi. 

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1188 2021-11-12 17:49:06

Registered: 2005-06-28
Posts: 34,872

Re: Miscellany

1165) Euro

Euro, monetary unit and currency of the European Union (EU). It was introduced as a noncash monetary unit in 1999, and currency notes and coins appeared in participating countries on January 1, 2002. After February 28, 2002, the euro became the sole currency of 12 EU member states, and their national currencies ceased to be legal tender. Other states subsequently adopted the currency. The euro is represented by the symbol €.

The euro’s origins lay in the Maastricht Treaty (1991), an agreement among the then 12 member countries of the European Community (now the European Union)—United Kingdom, France, Germany, Italy, Ireland, Belgium, Denmark, the Netherlands, Spain, Portugal, Greece, and Luxembourg—that included the creation of an economic and monetary union (EMU). The treaty called for a common unit of exchange, the euro, and set strict criteria for conversion to the euro and participation in the EMU. These requirements included annual budget deficits not exceeding 3 percent of gross domestic product (GDP), public debt under 60 percent of GDP, exchange rate stability, inflation rates within 1.5 percent of the three lowest inflation rates in the EU, and long-term interest rates within 2 percent. Although several states had public debt ratios exceeding 60 percent—the rates topped 120 percent in Italy and Belgium—the European Commission (the executive branch of the EU) recommended their entry into the EMU, citing the significant steps each country had taken to reduce its debt ratio.
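The numeric convergence criteria described above can be sketched as a simple check. This is only an illustration: the thresholds are taken from the paragraph, the country figures below are invented, and the exchange-rate-stability criterion is omitted because it is not a single number. (As the paragraph notes, the Commission in practice admitted states that exceeded the debt limit.)

```python
# Illustrative check of the Maastricht convergence criteria.
# All figures are percentages; the country data used below are hypothetical.

def meets_convergence_criteria(deficit_pct_gdp, debt_pct_gdp,
                               inflation_pct, lowest_three_inflation_pct,
                               long_term_rate_pct, reference_rate_pct):
    """Return True if a (hypothetical) country meets the numeric criteria."""
    return (deficit_pct_gdp <= 3.0                                      # budget deficit
            and debt_pct_gdp < 60.0                                     # public debt
            and inflation_pct <= lowest_three_inflation_pct + 1.5       # inflation
            and long_term_rate_pct <= reference_rate_pct + 2.0)         # long-term rates

# A hypothetical country within all limits:
print(meets_convergence_criteria(2.5, 55.0, 2.0, 1.2, 5.0, 4.0))  # True
# The same country with a 4% deficit fails:
print(meets_convergence_criteria(4.0, 55.0, 2.0, 1.2, 5.0, 4.0))  # False
```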

Supporters of the euro argued that a single European currency would boost trade by eliminating foreign exchange fluctuations and reducing prices. Although there were concerns regarding a single currency, including worries about counterfeiting and loss of national sovereignty and national identity, 11 countries (Austria, Belgium, Finland, France, Germany, Ireland, Italy, Luxembourg, the Netherlands, Portugal, and Spain) formally joined the EMU in 1998. Britain and Sweden delayed joining, though some businesses in Britain decided to accept payment in euros. Voters in Denmark narrowly rejected the euro in a September 2000 referendum. Greece initially failed to meet the economic requirements but was admitted in January 2001 after overhauling its economy.

In 2007 Slovenia became the first former communist country to adopt the euro. Having demonstrated fiscal stability since joining the EU in 2004, both Malta and the Greek Cypriot sector of Cyprus adopted the euro in 2008. Other countries that adopted the currency include Slovakia (2009), Estonia (2011), Latvia (2014), and Lithuania (2015). (The euro is also the official currency in several areas outside the EU, including Andorra, Montenegro, Kosovo, and San Marino.) The 19 participating EU countries are known as the euro area, euroland, or the euro zone.

In 1998 the European Central Bank (ECB) was established to manage the new currency. Based in Frankfurt, Germany, the ECB is an independent and neutral body headed by an appointed president who is approved by all member countries to serve an eight-year term. The euro was launched on January 1, 1999, replacing the precursor ecu at a 1:1 value. Until the circulation of currency notes and coins in 2002, the euro was used only by financial markets and certain businesses. Many experts predicted that the euro could eventually rival the U.S. dollar as an international currency.

Unlike most of the national currencies that they replaced, euro banknotes do not display famous national figures. The seven colourful bills, designed by the Austrian artist Robert Kalina and ranging in denomination from €5 to €500, symbolize the unity of Europe and feature a map of Europe, the EU’s flag, and arches, bridges, gateways, and windows. The eight euro coins range in denominations from one cent to two euros. The coins feature one side with a common design; the reverse sides’ designs differ in each of the individual participating countries.

The euro (symbol: €; code: EUR) is the official currency of 19 of the 27 member states of the European Union. This group of states is known as the eurozone or euro area and includes about 343 million citizens as of 2019. The euro, which is divided into 100 cents, is the second-largest and second-most traded currency in the foreign exchange market after the United States dollar.

The currency is also used officially by the institutions of the European Union, by four European microstates that are not EU members, the British Overseas Territory of Akrotiri and Dhekelia, as well as unilaterally by Montenegro and Kosovo. Outside Europe, a number of special territories of EU members also use the euro as their currency. Additionally, over 200 million people worldwide use currencies pegged to the euro.

The euro is the second-largest reserve currency as well as the second-most traded currency in the world after the United States dollar. As of December 2019, with more than €1.3 trillion in circulation, the euro has one of the highest combined values of banknotes and coins in circulation in the world.

The name euro was officially adopted on 16 December 1995 in Madrid. The euro was introduced to world financial markets as an accounting currency on 1 January 1999, replacing the former European Currency Unit (ECU) at a ratio of 1:1 (US$1.1743). Physical euro coins and banknotes entered circulation on 1 January 2002, making the euro the day-to-day operating currency of its original members, and by March 2002 it had completely replaced the former currencies. The euro subsequently dropped to US$0.83 within two years (26 October 2000), but it has traded above the U.S. dollar since the end of 2002, peaking at US$1.60 on 18 July 2008 before returning near its original issue rate. In late 2009, the euro became immersed in the European sovereign-debt crisis, which led to the creation of the European Financial Stability Facility as well as other reforms aimed at stabilising and strengthening the currency.


The euro is managed and administered by the Frankfurt-based European Central Bank (ECB) and the Eurosystem (composed of the central banks of the eurozone countries). As an independent central bank, the ECB has sole authority to set monetary policy. The Eurosystem participates in the printing, minting and distribution of notes and coins in all member states, and the operation of the eurozone payment systems.

The 1992 Maastricht Treaty obliges most EU member states to adopt the euro upon meeting certain monetary and budgetary convergence criteria, although not all states have done so. Denmark has negotiated exemptions, while Sweden (which joined the EU in 1995, after the Maastricht Treaty was signed) turned down the euro in a non-binding referendum in 2003, and has circumvented the obligation to adopt the euro by not meeting the monetary and budgetary requirements. All nations that have joined the EU since 1993 have pledged to adopt the euro in due course. The Maastricht Treaty was later amended by the Treaty of Nice, which closed the gaps and loopholes in the Maastricht and Rome Treaties.

Issuing modalities for banknotes

Since 1 January 2002, the national central banks (NCBs) and the ECB have issued euro banknotes on a joint basis. Eurosystem NCBs are required to accept euro banknotes put into circulation by other Eurosystem members, and these banknotes are not repatriated. The ECB issues 8% of the total value of banknotes issued by the Eurosystem. In practice, the ECB's banknotes are put into circulation by the NCBs, which thereby incur matching liabilities vis-à-vis the ECB. These liabilities carry interest at the main refinancing rate of the ECB. The other 92% of euro banknotes are issued by the NCBs in proportion to their respective shares of the ECB capital key, calculated using each country's share of European Union (EU) population and of EU GDP, equally weighted.
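The issuance split described above can be sketched with a short calculation. The shares below are invented for illustration, not actual ECB capital-key figures, and the sketch ignores the normalisation of the key among the participating NCBs.

```python
# Sketch of the euro banknote issuance split: the ECB issues 8% of the total
# value, and the remaining 92% is divided among NCBs in proportion to their
# capital-key shares (equally weighted average of population and GDP shares).

ECB_SHARE = 0.08

def capital_key(pop_share, gdp_share):
    """Equally weighted average of a country's EU population and GDP shares."""
    return 0.5 * pop_share + 0.5 * gdp_share

def banknote_share(pop_share, gdp_share):
    """Share of total euro banknote value issued by that country's NCB."""
    return (1.0 - ECB_SHARE) * capital_key(pop_share, gdp_share)

# A hypothetical country with 20% of EU population and 30% of EU GDP:
print(round(banknote_share(0.20, 0.30), 4))  # 0.23
```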


Coins and banknotes

The euro is divided into 100 cents (also referred to as euro cents, especially when distinguishing them from other currencies, and referred to as such on the common side of all cent coins). In Community legislative acts the plural forms of euro and cent are spelled without the s, notwithstanding normal English usage. Otherwise, normal English plurals are used, with many local variations such as centime in France.

All circulating coins have a common side showing the denomination or value, with a map in the background. Because of the linguistic plurality of the European Union, the Latin-alphabet spelling of euro (rather than the less common Greek or Cyrillic forms) and Arabic numerals are used on the common side; other text appears on national sides in national languages, but additional text on the common side is avoided. On all denominations except the 1-, 2- and 5-cent coins, the map originally showed only the 15 member states at the time the euro was introduced. Beginning in 2007 or 2008 (depending on the country), this was replaced by a map of Europe that also shows countries outside the EU, such as Norway, Ukraine, Belarus, Russia and Turkey. The 1-, 2- and 5-cent coins, however, keep their old design, showing a geographical map of Europe with the 15 member states of 2002 raised somewhat above the rest of the map. All common sides were designed by Luc Luycx. The coins also have a national side showing an image specifically chosen by the country that issued the coin. Euro coins from any member state may be freely used in any nation that has adopted the euro.

The coins are issued in denominations of €2, €1, 50c, 20c, 10c, 5c, 2c, and 1c. To avoid the use of the two smallest coins, some cash transactions are rounded to the nearest five cents in the Netherlands and Ireland (by voluntary agreement) and in Finland (by law). This practice is discouraged by the European Commission, as is the practice of certain shops of refusing to accept high-value euro notes.
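The five-cent rounding rule mentioned above can be sketched as follows. Working in integer cents sidesteps floating-point issues, and ties cannot occur, since an integer number of cents is never exactly halfway between two multiples of five.

```python
# Minimal sketch of cash rounding to the nearest five cents, as practised
# in the Netherlands, Ireland, and Finland.

def round_to_five_cents(amount_cents):
    """Round an amount in integer cents to the nearest multiple of 5."""
    return 5 * round(amount_cents / 5)

print(round_to_five_cents(1234))  # 1235  (12.34 euro -> 12.35)
print(round_to_five_cents(101))   # 100   ( 1.01 euro ->  1.00)
print(round_to_five_cents(103))   # 105   ( 1.03 euro ->  1.05)
```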

Commemorative coins with €2 face value have been issued with changes to the design of the national side of the coin. These include both commonly issued coins, such as the €2 commemorative coin for the fiftieth anniversary of the signing of the Treaty of Rome, and nationally issued coins, such as the coin to commemorate the 2004 Summer Olympics issued by Greece. These coins are legal tender throughout the eurozone. Collector coins with various other denominations have been issued as well, but these are not intended for general circulation, and they are legal tender only in the member state that issued them.

Euro banknotes share a common design on both sides, created by the Austrian designer Robert Kalina. Notes are issued in denominations of €500, €200, €100, €50, €20, €10, and €5. Each banknote has its own colour and is dedicated to an artistic period of European architecture: the front of each note features windows or gateways, while the back shows bridges, symbolising links between states in the union and with the future. Although the designs were supposed to be devoid of any identifiable national characteristics, Kalina's initial designs depicted specific bridges, including the Rialto and the Pont de Neuilly; these were subsequently rendered more generic, though the final designs still bear close similarities to their prototypes. In the end, the monuments looked similar enough to a range of national landmarks to satisfy all member states.

The Europa series, or second series, consists of six denominations; it no longer includes the €500 note, whose issuance was discontinued on 27 April 2019. However, both the first and the second series of euro banknotes, including the €500, remain legal tender throughout the euro area.

Payments clearing, electronic funds transfer

Capital within the EU may be transferred in any amount from one state to another. All intra-Union transfers in euro are treated as domestic transactions and bear the corresponding domestic transfer costs. This applies to all member states of the EU, even those outside the eurozone, provided the transactions are carried out in euro. Credit/debit card charges and ATM withdrawals within the eurozone are also treated as domestic transactions; paper-based payment orders such as cheques, however, have not been standardised and so remain domestic. The ECB has also set up a clearing system, TARGET, for large euro transactions.

Currency sign

A special euro currency sign (€) was designed after a public survey had narrowed the original ten proposals down to two. The European Commission then chose the design created by the Belgian Alain Billiet. Of the symbol, the Commission stated:

Inspiration for the € symbol itself came from the Greek epsilon (ϵ) – a reference to the cradle of European civilisation – and the first letter of the word Europe, crossed by two parallel lines to 'certify' the stability of the euro.
-  European Commission

The European Commission also specified a euro logo with exact proportions and foreground and background colour tones. Placement of the currency sign relative to the numeric amount varies from state to state, but for texts in English the symbol (or the ISO-standard "EUR") should precede the amount.

Direct usage

The euro is the sole currency of 19 EU member states: Austria, Belgium, Cyprus, Estonia, Finland, France, Germany, Greece, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, the Netherlands, Portugal, Slovakia, Slovenia, and Spain. These countries constitute the "eurozone", some 343 million people in total as of 2018.

With every EU member except Denmark obliged to join once economic conditions permit, and with future members of the EU expected to do the same, the enlargement of the eurozone is set to continue. Outside the EU, the euro is also the sole currency of Montenegro and Kosovo and of several European microstates (Andorra, Monaco, San Marino and the Vatican City), as well as of three overseas territories of France that are not themselves part of the EU, namely Saint Barthélemy, Saint Pierre and Miquelon, and the French Southern and Antarctic Lands. Together this direct usage of the euro outside the EU affects nearly 3 million people.

The euro has been used as a trading currency in Cuba since 1998, Syria since 2006, and Venezuela since 2018. There are also various currencies pegged to the euro (see below). In 2009, Zimbabwe abandoned its local currency and used major currencies instead, including the euro and the United States dollar.




#1189 2021-11-13 16:46:58

Registered: 2005-06-28
Posts: 34,872

Re: Miscellany

1166) Ornithology

Ornithology, a branch of zoology dealing with the study of birds. Most of the early writings on birds are more anecdotal than scientific, but they represent a broad foundation of knowledge, including much folklore, on which later work was based. In the European Middle Ages many treatises dealt with the practical aspects of ornithology, particularly falconry and game-bird management. From the mid-18th to the late 19th century, the major thrust was the description and classification of new species, as scientific expeditions made collections in tropical areas rich in bird species. By the early 20th century the large majority of birds were known to science, although the biology of many species was virtually unknown. In the latter half of the 19th century much study was done on the internal anatomy of birds, primarily for its application to taxonomy. Anatomical study was overshadowed in the first half of the 20th century by the rising fields of ecology and ethology (the study of behaviour) but underwent a resurgence beginning in the 1960s with more emphasis on the functional adaptations of birds.

Ornithology is one of the few scientific fields in which nonprofessionals make substantial contributions. Much research is carried out at universities and museums, which house and maintain the collections of bird skins, skeletons, and preserved specimens upon which most taxonomists and anatomists depend. Field research, on the other hand, is conducted by both professionals and amateurs, the latter providing valuable information on behaviour, ecology, distribution, and migration.

Although much information about birds is gained through simple, direct field observation (usually aided only by binoculars), some areas of ornithology have benefited greatly from the introduction of such instruments and techniques as bird banding, radar, radio transmitters (telemeters), and high-quality, portable audio equipment.

Bird banding (or ringing), first performed early in the 19th century, is now a major means of gaining information on longevity and movements. Banding programs are conducted by a number of countries, and each year hundreds of thousands of birds are marked with numbered leg bands. The study of bird movements has also been greatly aided by the use of sensitive radar. Individual bird movements are recorded on a day-to-day basis by means of minute radio transmitters (telemeters) worn by or implanted inside the bird. Visual markings, such as plumage dyes and plastic tags on the legs or wings, permit recognition of an individual bird without the difficult task of trapping it and enable amateur bird-watchers to aid the researcher in recovering marked birds. Research into the nature and significance of bird calls has burgeoned with the development of high-quality, portable audio equipment.

Ornithology is a branch of zoology that concerns the "methodological study and consequent knowledge of birds with all that relates to them." Several aspects of ornithology differ from related disciplines, due partly to the high visibility and the aesthetic appeal of birds. It has also been an area with a large contribution made by amateurs in terms of time, resources, and financial support. Studies on birds have helped develop key concepts in biology including evolution, behaviour and ecology such as the definition of species, the process of speciation, instinct, learning, ecological niches, guilds, island biogeography, phylogeography, and conservation.

While early ornithology was principally concerned with descriptions and distributions of species, ornithologists today seek answers to very specific questions, often using birds as models to test hypotheses or predictions based on theories. Most modern biological theories apply across life forms, and the number of scientists who identify themselves as "ornithologists" has therefore declined. A wide range of tools and techniques are used in ornithology, both inside the laboratory and out in the field, and innovations are constantly made. Most biologists who do call themselves ornithologists study specific areas, such as anatomy, taxonomy, or the ecology of particular lifestyles and behaviours, though the same approaches apply across all biological disciplines.




#1190 2021-11-14 14:22:44

Registered: 2005-06-28
Posts: 34,872

Re: Miscellany

1167) Lymph

Lymph is a clear-to-white fluid made of white blood cells (especially lymphocytes, the cells that attack bacteria in the blood) and of fluid from the intestines called chyle, which contains proteins and fats.

Lymph, pale fluid that bathes the tissues of an organism, maintaining fluid balance and removing bacteria from tissues; it enters the blood system by way of lymphatic channels and ducts.

Prominent among the constituents of lymph are lymphocytes and macrophages, the primary cells of the immune system with which the body defends itself from invasion by foreign microorganisms. Lymph is conveyed from the tissues to the venous bloodstream via the lymphatic vessels. On the way, it is filtered through the lymphatic organs (spleen and thymus) and lymph nodes.

Pressure within the walls of lymph vessels is lower than that in blood vessels. Lymph flows more slowly than blood. The cell walls of lymph vessels are more permeable than those of the capillary walls of blood vessels. Thus, proteins that may have been delivered to the tissues by the bloodstream but that are too big to reenter the capillaries, along with waste products and large proteins synthesized in the local tissue cells, enter the lymphatic vessels for return to the bloodstream.

The lymphatic vessels of vertebrates generally empty into the bloodstream near the location at which the cardinal veins enter the heart. In mammals, lymph enters the bloodstream at the subclavian vein, via the thoracic duct. From their terminal ducts to their sources between the cells of the tissues, the lymph vessels divide and subdivide repeatedly, becoming narrower at each division. A system of valves in the larger vessels keeps the lymph flowing in one direction.

In mammals, lymph is driven through the lymphatic vessels primarily by the massaging effect of the activity of muscles surrounding the vessels. Animals lower than mammals have muscular swellings called lymph hearts at intervals of the lymphatic vessels to pump lymph through them.

All multicellular animals distinguish between their own cells and foreign microorganisms and attempt to neutralize or ingest the latter. Macrophages (literally, “big eaters”) are motile cells which surround and ingest foreign matter. All animals above the level of bony fishes have concentrations of lymphoid tissue, which consists of macrophages and lymphocytes (white blood cells that react to chemically neutralize foreign microorganisms). The spleen, thymus, and lymph nodes of mammals consist of lymphoid tissue; further concentrations of it are found throughout the body in places (such as the gut wall, or the tonsils and adenoids of humans) where foreign microorganisms might have easiest ingress.

Bacteria and other particles that find their way into body tissues are taken up by the lymph and carried into the lymph nodes, where the bands of lymphatic tissue crossing the lymph sinuses impede their passage. Lymphocytes proliferate in response to the foreign invader, some cells remaining in the node and others migrating to other nodes elsewhere in the body. Some of these cells produce antibodies against the invading bacteria, while others take part in a direct attack on the foreign material, surrounding and engulfing it.

Although the primary function of the lymphatic system is to return proteins and fluids to the blood, this immune function accounts for the tendency of many infections and other disease processes to cause swelling of the lymph nodes. Bacteria, allergenic particles, and cancerous cells from elsewhere in the body that have collected in the nodes stimulate lymphocyte proliferation, thereby greatly enlarging the node. Interference with lymphatic flow may cause an accumulation of fluid in the tissues that are drained by the blocked vessel, producing tissue swelling known as lymphedema.

Other and more serious conditions affecting the lymphatic system include various forms of malignancy, either lymphocytic leukemia or lymphoma, depending on the nature of lymphatic proliferation. Dramatic increases in circulating lymphocytes characterize acute lymphocytic leukemia, a highly fatal disease that occurs most frequently in children; less rapid increases in circulating lymph cells occur in chronic lymphocytic leukemia, which is more common in those over 45. In both conditions, the accumulation of lymphocytes in the bloodstream is accompanied by anemia. Gross enlargement of the lymph nodes through malignant proliferation of lymph cells characterizes Hodgkin’s disease and other forms of lymphoma.

Lymph node enlargement may occur in syphilis, infectious mononucleosis, amyloidosis, and tuberculosis, as may local lymph node swelling in other infectious processes.

Lymph (from Latin, lympha meaning "water") is the fluid that flows through the lymphatic system, a system composed of lymph vessels (channels) and intervening lymph nodes whose function, like the venous system, is to return fluid from the tissues to the central circulation. Interstitial fluid – the fluid between the cells in all body tissues – enters the lymph capillaries. This lymphatic fluid is then transported via progressively larger lymphatic vessels through lymph nodes, where substances are removed by tissue lymphocytes and circulating lymphocytes are added to the fluid, before emptying ultimately into the right or the left subclavian vein, where it mixes with central venous blood.

Since the lymph is derived from the interstitial fluid, its composition continually changes, because the blood and the surrounding cells continually exchange substances with the interstitial fluid. It is generally similar to blood plasma, which is the fluid component of blood. Lymph returns proteins and excess interstitial fluid to the bloodstream. Lymph also transports fats from the digestive system (beginning in the lacteals) to the blood via chylomicrons.

Bacteria may enter the lymph channels and be transported to lymph nodes, where the bacteria are destroyed. Metastatic cancer cells can also be transported via lymph.




#1191 2021-11-15 16:07:02

Registered: 2005-06-28
Posts: 34,872

Re: Miscellany

1168) Sugar

Sugar is the generic name for sweet-tasting, soluble carbohydrates, many of which are used in food. Simple sugars, also called monosaccharides, include glucose, fructose, and galactose. Compound sugars, also called disaccharides or double sugars, are molecules made of two monosaccharides joined by a glycosidic bond. Common examples are sucrose (glucose + fructose), lactose (glucose + galactose), and maltose (two molecules of glucose). Table sugar, granulated sugar, and regular sugar refer to sucrose, a disaccharide composed of glucose and fructose. In the body, compound sugars are hydrolysed into simple sugars.

Longer chains of monosaccharides (>2) are not regarded as sugars, and are called oligosaccharides or polysaccharides. Starch is a glucose polymer found in plants, and is the most abundant source of energy in human food. Some other chemical substances, such as glycerol and sugar alcohols, may have a sweet taste, but are not classified as sugar.

Sugars are found in the tissues of most plants. Honey and fruit are abundant natural sources of simple sugars. Sucrose is especially concentrated in sugarcane and sugar beet, making them ideal for efficient commercial extraction to make refined sugar. In 2016, the combined world production of those two crops was about two billion tonnes. Maltose may be produced by malting grain. Lactose is the only sugar that cannot be extracted from plants. It can only be found in milk, including human breast milk, and in some dairy products. A cheap source of sugar is corn syrup, industrially produced by converting corn starch into sugars, such as maltose, fructose and glucose.

Sucrose is used in prepared foods (e.g. cookies and cakes), is sometimes added to commercially available processed food and beverages, and may be used by people as a sweetener for foods (e.g. toast and cereal) and beverages (e.g. coffee and tea). The average person consumes about 24 kilograms (53 lb) of sugar each year, with North and South Americans consuming up to 50 kilograms (110 lb) and Africans consuming under 20 kilograms (44 lb).

As sugar consumption grew in the latter part of the 20th century, researchers began to examine whether a diet high in sugar, especially refined sugar, was damaging to human health. Excessive consumption of sugar has been implicated in the onset of obesity, diabetes, cardiovascular disease, and tooth decay. Numerous studies have tried to clarify those implications, but with varying results, mainly because of the difficulty of finding populations for use as controls that consume little or no sugar. In 2015, the World Health Organization recommended that adults and children reduce their intake of free sugars to less than 10% of their total energy intake, and encouraged a further reduction to below 5%.

Sugar, any of numerous sweet, colourless, water-soluble compounds present in the sap of seed plants and the milk of mammals and making up the simplest group of carbohydrates. The most common sugar is sucrose, a crystalline tabletop and industrial sweetener used in foods and beverages.

As a chemical term, “sugar” usually refers to all carbohydrates of the general formula Cn(H2O)n. Sucrose is a disaccharide, or double sugar, being composed of one molecule of glucose linked to one molecule of fructose. Because one molecule of water (H2O) is lost in the condensation reaction linking glucose to fructose, sucrose is represented by the formula C12H22O11 (following the general formula Cn[H2O]n − 1).
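The condensation arithmetic above (glucose plus fructose minus one water gives sucrose) can be checked by simple atom counting; `collections.Counter` is a convenient bookkeeping device for this.

```python
# Verify that C6H12O6 (glucose) + C6H12O6 (fructose) - H2O = C12H22O11 (sucrose).
from collections import Counter

glucose  = Counter({"C": 6, "H": 12, "O": 6})
fructose = Counter({"C": 6, "H": 12, "O": 6})
water    = Counter({"H": 2, "O": 1})

# Counter addition sums atom counts; subtraction keeps the positive remainder.
sucrose = glucose + fructose - water
print(dict(sucrose))  # {'C': 12, 'H': 22, 'O': 11}
```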

Sucrose is found in almost all plants, but it occurs at concentrations high enough for economic recovery only in sugarcane (Saccharum officinarum) and sugar beets (Beta vulgaris). The former is a giant grass growing in tropical and subtropical areas; the latter is a root crop growing in temperate zones. Sugarcane ranges from 7 to 18 percent sugar by weight, while sugar beets are from 8 to 22 percent sugar by weight. Sucrose from either source (or from two relatively minor sources, the sugar maple tree and the date palm) is the same molecule, yielding 3.94 calories per gram as do all carbohydrates. Differences in sugar products come from other components isolated with sucrose.

The first cultivated sugar crop was sugarcane, developed from wild varieties in the East Indies—probably New Guinea. The sugar beet was developed as a crop in Europe in the 19th century during the Napoleonic Wars, when France sought an alternate homegrown source of sugar in order to save its ships from running blockades to sugarcane sources in the Caribbean. Sugarcane, once harvested, cannot be stored because of sucrose decomposition. For this reason, cane sugar is generally produced in two stages, manufacture of raw sugar taking place in the cane-growing areas and refining into food products occurring in the sugar-consuming countries. Sugar beets, on the other hand, can be stored and are therefore generally processed in one stage into white sugar.

How Is Sugar Cane Grown?

Sugar cane is a type of perennial grass, and it needs a warm climate to thrive. As such, it’s commonly grown in tropical climates like Brazil, as well as in some parts of the U.S., such as Florida, Louisiana and Texas. The American Sugar Refining Group, whose brands include Domino Sugar, is based in southern Florida.

While sugar cane can be grown from a seed, you can actually bury a mature stalk and watch 10 buds sprout up from it. How neat! Mature sugar cane looks similar to bamboo, with jointed stems. Once it’s fully grown, it’s then harvested and transported to a sugar mill for processing.

How Is Sugar Made?

When the sugar cane arrives at the mill, that’s where the real fun begins. First, the stalks are washed, cut into shreds, and pressed using big rollers. The juice is separated from the plant material, then the liquid is boiled until it crystallizes. Finally, the crystals are separated from the liquid using a centrifuge, yielding raw sugar.

Hold on, though: this isn’t the same “raw sugar” you can buy at the grocery store. At this point, it still has lots of impurities, so it’s sent to a cane sugar refinery to be filtered. From here, sugar goes through various treatments depending on what type of sugar it will ultimately become. Fun fact: all sugar is harvested the same; it’s not until it gets to the mill that it turns into different types of sugar, such as granulated sugar and brown sugar.

According to the American Sugar Refining Group, a sugar cane stalk is 72% water, 12% sugar, 13% fiber and 3% molasses. White granulated sugar is made by removing all of the molasses. Brown sugar retains some of the molasses, which gives it its darker color.

Types of Sugar

When you walk down the baking aisle of your local grocery store, you see several different varieties of sugar. Here are their differences.

* Granulated Sugar: This is the everyday white sugar most people use, and the type most often called for in baking. Granulated sugar has all of the molasses content removed, giving it its white color.
* Brown Sugar: Dark and light brown sugars retain much of the naturally occurring molasses—the more molasses, the darker the sugar.
* Golden Sugar: This is a brand-new sugar recently developed by Domino. It’s a less-processed version of granulated sugar. It retains some of the naturally occurring molasses, but it can be used cup for cup in place of white sugar.
* Powdered or Confectioners’ Sugar: This light, fluffy sugar is made by grinding up granulated sugar and adding a small amount of cornstarch to prevent clumping.
* Raw Sugar: Also called turbinado sugar, this product is usually light brown in color and has larger crystals. It’s filtered only minimally to retain much of its natural molasses content.


It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi. 

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1192 2021-11-16 14:43:14

Registered: 2005-06-28
Posts: 34,872

Re: Miscellany

1169) Reflex action and movements

Reflex actions

Of the many kinds of neural activity, there is one simple kind in which a stimulus leads to an immediate action. This is reflex activity. The word reflex (from Latin reflexus, “reflection”) was introduced into biology by a 19th-century English neurologist, Marshall Hall, who fashioned the word because he thought of the muscles as reflecting a stimulus much as a wall reflects a ball thrown against it. By reflex, Hall meant the automatic response of a muscle or several muscles to a stimulus that excites an afferent nerve. The term is now used to describe an action that is an inborn central nervous system activity, not involving consciousness, in which a particular stimulus, by exciting an afferent nerve, produces a stereotyped, immediate response of muscle or gland.

The anatomical pathway of a reflex is called the reflex arc. It consists of an afferent (or sensory) nerve, usually one or more interneurons within the central nervous system, and an efferent (motor, secretory, or secreto-motor) nerve.

Most reflexes have several synapses in the reflex arc. The stretch reflex is exceptional in that, with no interneuron in the arc, it has only one synapse between the afferent nerve fibre and the motor neuron. The flexor reflex, which removes a limb from a noxious stimulus, has a minimum of two interneurons and three synapses.

Probably the best-known reflex is the pupillary light reflex. If a light is flashed near one eye, the pupils of both eyes constrict. Light is the stimulus; impulses reach the brain via the optic nerve; and the response is conveyed to the pupillary musculature by autonomic nerves that supply the eye. Another reflex involving the eye is known as the lacrimal reflex. When something irritates the conjunctiva or cornea of the eye, the lacrimal reflex causes nerve impulses to pass along the fifth cranial nerve (trigeminal) and reach the midbrain. The efferent limb of this reflex arc is autonomic and mainly parasympathetic. These nerve fibres stimulate the lacrimal glands of the orbit, causing the outpouring of tears. Other reflexes of the midbrain and medulla oblongata are the cough and sneeze reflexes. The cough reflex is caused by an irritant in the trachea and the sneeze reflex by one in the nose. In both, the reflex response involves many muscles, including a temporary interruption of respiration in order to expel the irritant.

The first reflexes develop in the womb. By seven and a half weeks after conception, the first reflex can be observed; stimulation around the mouth of the fetus causes the lips to be turned toward the stimulus. By birth, sucking and swallowing reflexes are ready for use. Touching the baby’s lips induces sucking, and touching the back of its throat induces swallowing.

Although the word stereotyped is used in the above definition, this does not mean that the reflex response is invariable and unchangeable. When a stimulus is repeated regularly, two changes occur in the reflex response—sensitization and habituation. Sensitization is an increase in response; in general, it occurs during the first 10 to 20 responses. Habituation is a decrease in response; it continues until, eventually, the response is extinguished. When the stimulus is irregularly repeated, habituation does not occur or is minimal.

There are also long-term changes in reflexes, which may be seen in experimental spinal cord transections performed on kittens. Repeated stimulation of the skin below the level of the lesion, such as rubbing the same area for 20 minutes every day, causes a change in latency (the interval between the stimulus and the onset of response) of certain reflexes, with diminution and finally extinction of the response. Although this procedure takes several weeks, it shows that, with daily stimulation, one reflex response can be changed into another. Repeated activation of synapses increases their efficiency, causing a lasting change. When this repeated stimulation ceases, synaptic functions regress, and reflex responses return to their original form.

Reflex responses often are rapid; neurons that transmit signals about posture, limb position, or touch, for example, can fire signals at speeds of 80–120 metres per second (about 180–270 miles per hour). However, while many reflex responses are said to be rapid and immediate, some reflexes, called recruiting reflexes, can hardly be evoked by a single stimulus. Instead, they require increasing stimulation to induce a response. The reflex contraction of the bladder, for example, requires an increasing amount of urine to stretch the muscle and to obtain muscular contraction.
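The unit conversion in the figures above (80–120 metres per second, "about 180–270 miles per hour") can be checked with a short snippet; the function name is ours, not from the text:

```python
# Convert the conduction velocities quoted above from m/s to mph.
METRES_PER_MILE = 1609.344
SECONDS_PER_HOUR = 3600

def ms_to_mph(v_ms: float) -> float:
    """Convert a speed in metres per second to miles per hour."""
    return v_ms * SECONDS_PER_HOUR / METRES_PER_MILE

print(round(ms_to_mph(80)), round(ms_to_mph(120)))  # 179 268
```

So 80–120 m/s does indeed work out to roughly 180–270 mph, as stated.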

Reflexes can be altered by impulses from higher levels of the central nervous system. For example, the cough reflex can be suppressed easily, and even the gag reflex (the movements of incipient vomiting resulting from mechanical stimulation of the wall of the pharynx) can be suppressed with training.

The so-called conditioned reflexes are not reflexes at all but complicated acts of learned behaviour. Salivation is one such conditioned reflex; it occurs only when a person is conscious of the presence of food or when one imagines food.


Movements of the body are brought about by the harmonious contraction and relaxation of selected muscles. Contraction occurs when nerve impulses are transmitted across neuromuscular junctions to the membrane covering each muscle fibre. Most muscles are not continuously contracting but are kept in a state ready to contract. The slightest movement or even the intention to move results in widespread activity of the muscles of the trunk and limbs.

Movements may be intrinsic to the body itself and carried out by muscles of the trunk and body cavity. Examples are those involved in breathing, swallowing, laughing, sneezing, urinating, and defecating. Such movements are largely carried out by smooth muscles of the viscera (alimentary canal and bladder, for example); they are innervated by efferent sympathetic and parasympathetic nerves. Other movements relate the body to the environment, either for moving or for signaling to other individuals. These are carried out by the skeletal muscles of the trunk and limbs. Skeletal muscles are attached to bones and produce movement at the joints. They are innervated by efferent motor nerves and sometimes by efferent sympathetic and parasympathetic nerves.

Every movement of the body has to be correct for force, speed, and position. These aspects of movement are continuously reported to the central nervous system by receptors sensitive to position, posture, equilibrium, and internal conditions of the body. These receptors are called proprioceptors, and those proprioceptors that keep a continuous report on the position of limbs are the muscle spindles and tendon organs.

Movements can be organized at several levels of the nervous system. At the lowest level are movements of the viscera, some of which do not involve the central nervous system, being controlled by neurons of the autonomic nervous system within the viscera themselves. Movements of the trunk and limbs occur at the next level of the spinal cord. If the spinal cord is severed so that no nerve impulses arrive from the brain, certain movements of the trunk and limbs below the level of the injury can still occur. At a higher level, respiratory movements are controlled by the lower brainstem. The upper brainstem controls muscles of the eye, the bladder, and basic movements of walking and running. At the next level is the hypothalamus. It commands certain totalities of movement, such as those of vomiting, urinating and defecating, and curling up and falling asleep. At the highest level is gray matter of the cerebral hemispheres, both the cortex and the subcortical basal ganglia. This is the level of conscious control of movements.

Reflex action and its conduction

* An immediate or spontaneous response or reaction shown towards any stimulus by an organism is called reflex action.
* A reflex is a predictable involuntary response to a stimulus.
* e.g., watering of the mouth at the sight of tasty food, immediate withdrawal of the hand after touching a hot object, blinking of the eye in response to a foreign particle that enters it, etc.
* A reflex involving the skeletal muscles is called somatic reflex and that involving responses of smooth muscles, cardiac muscles or a gland is called visceral reflex.
* Visceral reflexes influence the heart rate, respiratory rate, digestion etc.
* Many reflex actions are controlled and coordinated by the spinal cord without the involvement of the brain, i.e., in addition to linking the brain and most of the body, our spinal cord coordinates reflex actions.
* Both types of reflex actions allow the body to respond quickly to internal and external changes in order to maintain our body activities and homeostasis.
* It takes milliseconds for the response to be shown during a reflex action.

Reflex actions save time because the message being transmitted by the nerve impulse doesn’t have to travel from the stimulated receptor all the way to the brain.

Reflex arc:

* A complete or entire pathway followed by the sensory and motor impulses during a reflex action is called a reflex arc.
* The major components and pathway of a reflex arc are as follows:
a) Receptor (sensory organ): It receives a stimulus which then produces an electrical signal called nerve impulse.
b) Sensory nerve (afferent nerve): It conveys the impulse from the receptor organ to the dorsal root ganglion of the spinal nerve.
c) The ganglion’s fibres then carry the impulse to the posterior horn of the spinal cord.
d) Relay neuron (inter-neuron): The impulse from posterior horn of spinal cord is then transmitted to the anterior horn of the spinal cord via inter-neuron or relay neuron. This impulse may directly be passed without relay neuron as well.
e) Motor nerve (efferent nerve): The motor nerve now receives the impulse from the anterior horn and transmits it to the effector organs or motor organs.
f) Effector (motor organs): These are the organs that carry out the final action like voluntary muscle contraction and glandular secretion.




#1193 2021-11-17 15:14:24

Registered: 2005-06-28
Posts: 34,872

Re: Miscellany

1170) Neuron

A neuron or nerve cell is an electrically excitable cell that communicates with other cells via specialized connections called synapses. It is the main component of nervous tissue in all animals except sponges and placozoa. Plants and fungi do not have nerve cells.

Neurons are typically classified into three types based on their function. Sensory neurons respond to stimuli such as touch, sound, or light that affect the cells of the sensory organs, and they send signals to the spinal cord or brain. Motor neurons receive signals from the brain and spinal cord to control everything from muscle contractions to glandular output. Interneurons connect neurons to other neurons within the same region of the brain or spinal cord. A group of connected neurons is called a neural circuit.

A typical neuron consists of a cell body (soma), dendrites, and a single axon. The soma is usually compact. The axon and dendrites are filaments that extend from it. Dendrites typically branch profusely and extend a few hundred micrometers from the soma. The axon leaves the soma at a swelling called the axon hillock, and travels for as far as 1 meter in humans or more in other species. It branches but usually maintains a constant diameter. At the farthest tip of the axon's branches are axon terminals, where the neuron can transmit a signal across the synapse to another cell. Neurons may lack dendrites or have no axon. The term neurite is used to describe either a dendrite or an axon, particularly when the cell is undifferentiated.

Most neurons receive signals via the dendrites and soma and send out signals down the axon. At the majority of synapses, signals cross from the axon of one neuron to a dendrite of another. However, synapses can connect an axon to another axon or a dendrite to another dendrite.

The signaling process is partly electrical and partly chemical. Neurons are electrically excitable, due to maintenance of voltage gradients across their membranes. If the voltage changes by a large enough amount over a short interval, the neuron generates an all-or-nothing electrochemical pulse called an action potential. This potential travels rapidly along the axon, and activates synaptic connections as it reaches them. Synaptic signals may be excitatory or inhibitory, increasing or reducing the net voltage that reaches the soma.

In most cases, neurons are generated by neural stem cells during brain development and childhood. Neurogenesis largely ceases during adulthood in most areas of the brain.

Neurons (also called neurones or nerve cells) are the fundamental units of the brain and nervous system, the cells responsible for receiving sensory input from the external world, for sending motor commands to our muscles, and for transforming and relaying the electrical signals at every step in between.

Neuron, also called nerve cell, basic cell of the nervous system in vertebrates and most invertebrates from the level of the cnidarians (e.g., corals, jellyfish) upward. A typical neuron has a cell body containing a nucleus and two or more long fibres. Impulses are carried along one or more of these fibres, called dendrites, to the cell body; in higher nervous systems, only one fibre, the axon, carries the impulse away from the cell body. Bundles of fibres from neurons are held together by connective tissue and form nerves. Some nerves in large vertebrates are several feet long. A sensory neuron transmits impulses from a receptor, such as those in the eye or ear, to a more central location in the nervous system, such as the spinal cord or brain. A motor neuron transmits impulses from a central area of the nervous system to an effector, such as a muscle.


Neurons, also known as nerve cells, send and receive signals from your brain. While neurons have a lot in common with other types of cells, they’re structurally and functionally unique.

Specialized projections called axons allow neurons to transmit electrical and chemical signals to other cells. Neurons can also receive these signals via rootlike extensions known as dendrites.

At birth, the human brain consists of an estimated 100 billion neurons. Unlike other cells, neurons don’t reproduce or regenerate. They aren’t replaced once they die.

The creation of new nerve cells is called neurogenesis. While this process isn’t well understood, it may occur in some parts of the brain after birth.

As researchers gain insight into both neurons and neurogenesis, many are also working to uncover links to neurodegenerative diseases such as Alzheimer’s and Parkinson’s.

Parts of a neuron

Neurons vary in size, shape, and structure depending on their role and location. However, nearly all neurons have three essential parts: a cell body, an axon, and dendrites.

Cell body

Also known as a soma, the cell body is the neuron’s core. The cell body carries genetic information, maintains the neuron’s structure, and provides energy to drive activities.

Like other cell bodies, a neuron’s soma contains a nucleus and specialized organelles. It’s enclosed by a membrane which both protects it and allows it to interact with its immediate surroundings.


Axon

An axon is a long, tail-like structure which joins the cell body at a specialized junction called the axon hillock. Many axons are insulated with a fatty substance called myelin. Myelin helps axons to conduct an electrical signal. Neurons generally have one main axon.


Dendrites

Dendrites are fibrous roots that branch out from the cell body. Like antennae, dendrites receive and process signals from the axons of other neurons. Neurons can have more than one set of dendrites, known as dendritic trees. How many they have generally depends on their role.

For instance, Purkinje cells are a special type of neuron found in the cerebellum. These cells have highly developed dendritic trees which allow them to receive thousands of signals.

Function of neurons

Neurons send signals using action potentials. An action potential is a shift in the neuron’s electric potential caused by the flow of ions in and out of the neural membrane.

Action potentials can trigger both chemical and electrical synapses.

Chemical synapses

In a chemical synapse, action potentials affect other neurons across a specialized junction called a synapse. A synapse consists of a presynaptic ending, a synaptic cleft, and a postsynaptic ending.

When an action potential is generated, it’s carried along the axon to a presynaptic ending. This triggers the release of chemical messengers called neurotransmitters. These molecules cross the synaptic cleft and bind to receptors in the postsynaptic ending of a dendrite.

Neurotransmitters can excite the postsynaptic neuron, causing it to generate an action potential of its own. Alternatively, they can inhibit the postsynaptic neuron, in which case it doesn’t generate an action potential.

Electrical synapses

Electrical synapses can only excite. They occur when two neurons are connected via a gap junction. This gap is much narrower than a synaptic cleft, and it includes ion channels which facilitate the direct transmission of a positive electrical signal. As a result, electrical synapses are much faster than chemical synapses. However, the signal diminishes from one neuron to the next, making them less effective at relaying signals over a chain of neurons.

Types of neurons

Neurons vary in structure, function, and genetic makeup. Given the sheer number of neurons, there are thousands of different types, much like there are thousands of species of living organisms on Earth.

In terms of function, scientists classify neurons into three broad types: sensory, motor, and interneurons.

Sensory neurons

Sensory neurons help you:

* taste
* smell
* hear
* see
* feel things around you

Sensory neurons are triggered by physical and chemical inputs from your environment. Sound, touch, heat, and light are physical inputs. Smell and taste are chemical inputs.

For example, stepping on hot sand activates sensory neurons in the soles of your feet. Those neurons send a message to your brain, which makes you aware of the heat.

Motor neurons

Motor neurons play a role in movement, including voluntary and involuntary movements. These neurons allow the brain and spinal cord to communicate with muscles, organs, and glands all over the body.

There are two types of motor neurons: lower and upper. Lower motor neurons carry signals from the spinal cord to the smooth muscles and the skeletal muscles. Upper motor neurons carry signals between your brain and spinal cord.

When you eat, for instance, lower motor neurons in your spinal cord send signals to the smooth muscles in your esophagus, stomach, and intestines. These muscles contract, which allows food to move through your digestive tract.


Interneurons

Interneurons are neural intermediaries found in your brain and spinal cord. They’re the most common type of neuron. They pass signals from sensory neurons and other interneurons to motor neurons and other interneurons. Often, they form complex circuits that help you to react to external stimuli.

For instance, when you touch something hot, sensory neurons in your fingertips send a signal to interneurons in your spinal cord. Some interneurons pass the signal on to motor neurons in your hand, which allows you to move your hand away. Other interneurons send a signal to the pain center in your brain, and you experience pain.

Recent research

While research has advanced our understanding of neurons in the last century, there’s still much we don’t understand.

For instance, until recently, researchers believed that neuron creation occurred in adults in a region of the brain called the hippocampus. The hippocampus is involved in memory and learning.

But a recent study is calling beliefs about hippocampal neurogenesis into question. After analyzing hippocampus samples from 37 donors, researchers concluded that adults produce relatively few new hippocampal neurons.

Though the results have yet to be confirmed, they come as a significant setback. Many researchers in the field were hopeful that neurogenesis might help treat diseases such as Alzheimer’s and Parkinson’s, which cause neuron damage and death.

The takeaway

Nervous system cells are called neurons. They have three distinct parts, including a cell body, axon, and dendrites. These parts help them to send and receive chemical and electrical signals.

While there are billions of neurons and thousands of varieties of neurons, they can be classified into three basic groups based on function: motor neurons, sensory neurons, and interneurons.

There’s still a lot we don’t know about neurons and the role they play in the development of certain brain conditions.




#1194 2021-11-18 14:35:47

Registered: 2005-06-28
Posts: 34,872

Re: Miscellany

1171) Disneyland

Disneyland, amusement park in Anaheim, California, featuring characters, rides, and shows based on the creations of Walt Disney and the Disney Company. Though its central building, the Sleeping Beauty Castle, is modeled on Germany’s Neuschwanstein Castle, it is an unmistakable icon of American popular culture. Disneyland is the only Disney theme park designed and built under the supervision of Walt Disney.

As early as the 1940s, Disney had begun planning themed experiences to complement his Burbank film studio: a backlot tour that would include a train ride through a “village” set and an amusement park that would cater to his employees and their children. By 1952 he had formed WED Enterprises, a corporate entity created to plan and build the amusement park on studio grounds. Eventually, he chose a plot of land in rural Anaheim, close to Los Angeles, for the park instead, largely as a result of the hostility of Burbank city officials toward the studio project. This much larger plot allowed Disney to reconceptualize his park into the public “giant movie set” that would become Disneyland.

Financing the endeavour proved difficult, but Disney was able to secure a significant portion of the funding from the American Broadcasting Company (ABC); ABC received in return the rights to produce a weekly Disney television program and a share of the park’s profits. Construction began on July 21, 1954, and was completed on July 17, 1955.

Disney’s disposition toward nostalgic sentiment and fantasy is evident in the park’s design and construction. The themed areas originally opened in Disneyland were Main Street, U.S.A., evoking a Midwestern American town at the turn of the 20th century and modeled on Disney’s hometown of Marceline, Missouri; Fantasyland, based partly on stories from Disney animated features; Adventureland, a jungle-themed area; Frontierland, featuring the Mark Twain Riverboat; and Tomorrowland, an optimistic vision of the future. Subsequent additions were New Orleans Square, based on the southern U.S. city of New Orleans; Bear Country, later renamed Critter Country, featuring the Country Bear Jamboree and the Splash Mountain ride; and Mickey’s Toontown, a colourful world modeled on cartoon animation. A short-lived Holidayland existed from 1957 to 1961. The Anaheim property also holds a sister park, Disney’s California Adventure, which opened in 2001; a separate shopping, dining, and entertainment area called Downtown Disney District; and three hotels.

Disneyland became a mecca for tourists from around the world. By the turn of the 21st century, more than 14 million people visited the park annually.

The Disneyland Park, originally Disneyland, is the first of two theme parks built at the Disneyland Resort in Anaheim, California, opened on July 17, 1955. It is the only theme park designed and built to completion under the direct supervision of Walt Disney. It was originally the only attraction on the property; its official name was changed to Disneyland Park to distinguish it from the expanding complex in the 1990s. It was the first Disney theme park.

Walt Disney came up with the concept of Disneyland after visiting various amusement parks with his daughters in the 1930s and 1940s. He initially envisioned building a tourist attraction adjacent to his studios in Burbank to entertain fans who wished to visit; however, he soon realized that the proposed site was too small. After hiring the Stanford Research Institute to perform a feasibility study to determine an appropriate site for his project, Disney bought a 160-acre (65 ha) site near Anaheim in 1953. The park was designed by a creative team hand-picked by Walt from internal and outside talent. They founded WED Enterprises, the precursor to today's Walt Disney Imagineering. Construction began in 1954, and the park was unveiled during a special televised press event on the ABC Television Network on July 17, 1955.

Since its opening, Disneyland has undergone expansions and major renovations, including the addition of New Orleans Square in 1966, Bear Country (now Critter Country) in 1972, Mickey's Toontown in 1993, and Star Wars: Galaxy's Edge in 2019. Opened in 2001, Disney California Adventure Park was built on the site of Disneyland's original parking lot.

Disneyland has a larger cumulative attendance than any other theme park in the world, with 726 million visits since it opened (as of December 2018). In 2018, the park had approximately 18.6 million visits, making it the second most visited amusement park in the world that year, behind only Magic Kingdom, the very park it inspired. According to a March 2005 Disney report, 65,700 jobs are supported by the Disneyland Resort, including about 20,000 direct Disney employees and 3,800 third-party employees (independent contractors or their employees). Disney announced "Project Stardust" in 2019, which included major structural renovations to the park to account for higher attendance numbers.

The United States Federal Aviation Administration has declared a zone of prohibited airspace over Disneyland and some of the surrounding area, centered on Sleeping Beauty Castle. No aircraft, including recreational and commercial drones, are permitted to fly within this zone. This level of restriction is otherwise applied only to Walt Disney World, certain pieces of critical infrastructure in the United States (such as military bases and the Pantex plant), and the area around the President of the United States when traveling outside Washington, D.C.




#1195 2021-11-19 12:43:21

Registered: 2005-06-28
Posts: 34,872

Re: Miscellany

1172) Cryptocurrency

A cryptocurrency, crypto-currency, or crypto is a collection of binary data designed to work as a medium of exchange. Individual coin ownership records are stored in a ledger: a computerized database that uses strong cryptography to secure transaction records, to control the creation of additional coins, and to verify the transfer of coin ownership. Like fiat currencies, most cryptocurrencies are not backed by or convertible into a commodity.

Some crypto schemes use validators to maintain the cryptocurrency. In a proof-of-stake model, owners put up their tokens as collateral. In return, they get authority over the token in proportion to the amount they stake. Generally, these token stakers earn additional ownership in the token over time via network fees, newly minted tokens, or other such reward mechanisms.

Cryptocurrency does not exist in physical form (like paper money) and is typically not issued by a central authority. Cryptocurrencies typically use decentralized control, as opposed to a central bank digital currency (CBDC). A cryptocurrency that is minted, created prior to issuance, or issued by a single issuer is generally considered centralized. When implemented with decentralized control, each cryptocurrency works through distributed ledger technology, typically a blockchain, that serves as a public financial transaction database.

Bitcoin, first released as open-source software in 2009, is the first decentralized cryptocurrency. Since the release of bitcoin, many other cryptocurrencies have been created.

Formal definition

According to Jan Lansky, a cryptocurrency is a system that meets six conditions:

* The system does not require a central authority; its state is maintained through distributed consensus.
* The system keeps an overview of cryptocurrency units and their ownership.
* The system defines whether new cryptocurrency units can be created. If new cryptocurrency units can be created, the system defines the circumstances of their origin and how to determine the ownership of these new units.
* Ownership of cryptocurrency units can be proved exclusively cryptographically.
* The system allows transactions to be performed in which ownership of the cryptographic units is changed. A transaction statement can only be issued by an entity proving the current ownership of these units.
* If two different instructions for changing the ownership of the same cryptographic units are simultaneously entered, the system performs at most one of them.
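The last of Lansky's conditions, that at most one of two conflicting ownership changes is performed, is the rule that prevents double spending. A toy illustration (not a real cryptocurrency, and with no cryptography or distributed consensus; the class and names are ours):

```python
# Toy ledger illustrating Lansky's sixth condition: when two
# instructions try to spend the same unit, at most one succeeds.
class ToyLedger:
    def __init__(self):
        self.owner_of = {}  # unit id -> current owner

    def mint(self, unit: str, owner: str) -> None:
        """Create a new unit and record its initial owner."""
        self.owner_of[unit] = owner

    def transfer(self, unit: str, sender: str, recipient: str) -> bool:
        """Apply a transfer only if the sender currently owns the unit."""
        if self.owner_of.get(unit) != sender:
            return False  # second, conflicting spend is rejected
        self.owner_of[unit] = recipient
        return True

ledger = ToyLedger()
ledger.mint("coin-1", "alice")
print(ledger.transfer("coin-1", "alice", "bob"))    # True: first spend succeeds
print(ledger.transfer("coin-1", "alice", "carol"))  # False: double spend rejected
```

In a real cryptocurrency the same rule is enforced not by a single in-memory table but by distributed consensus over the shared ledger, per Lansky's first condition.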

In March 2018, the word cryptocurrency was added to the Merriam-Webster Dictionary.


Tokens, cryptocurrencies, and other types of digital assets that are not bitcoin are collectively known as alternative cryptocurrencies, typically shortened to "altcoins" or "alt coins". Paul Vigna of The Wall Street Journal also described altcoins as "alternative versions of bitcoin", given bitcoin's role as the model protocol for altcoin designers. The term is commonly used to describe coins and tokens created after bitcoin.

Altcoins often have underlying differences with bitcoin. For example, Litecoin aims to process a block every 2.5 minutes, rather than bitcoin's 10 minutes, which allows Litecoin to confirm transactions faster than bitcoin. Another example is Ethereum, which has smart contract functionality that allows decentralized applications to be run on its blockchain. Ethereum was the most used blockchain in 2020, according to Bloomberg News. In 2016, it had the largest "following" of any altcoin, according to the New York Times.

Significant rallies across altcoin markets are often referred to as an "altseason".


Stablecoins are altcoins that are designed to maintain a stable level of purchasing power.

Digital Currency

When you think of “currency,” you are probably thinking of the cash you hand over at a store, or the credit cards or money transfers that can be used in place of physical money. In general, currency is a system of money backed by a government. Typically made up of coins and paper notes, currency is a medium of exchange, meaning that it is the basis for various kinds of trade and transactions. As technology has grown more sophisticated, digital currencies (which have no physical form) have grown as more financial transactions have gone online. Digital currency is simply a payment method that does not exist outside of its electronic form.

Within the past decade, a new particularly popular kind of digital currency has emerged: cryptocurrency. Although this new system is unlikely to replace the more traditional forms of currency any time soon, it has made a significant impact in less than 10 years.

What is Cryptocurrency?

The prefix “crypto” originally comes from the Greek word meaning “hidden.” This does not mean that cryptocurrency is secret, but rather that this “hidden” money is digital, and is kept secure with digital-code encryption. These digital currencies are the heart of systems that allow secure, direct payment for online transactions. “Crypto-” actually refers to the cryptographic data encryption that keeps the transactions protected from hackers or other digital eyes. It also makes cryptocurrency difficult to counterfeit.

Cryptocurrency has gained popularity because it offers a straightforward way to transfer funds entirely online, without involving third parties like banks or credit card companies (and paying the fees they often charge for processing transactions). Instead of physical coins or paper notes, cryptocurrencies have digital “tokens,” available in different digital denominations (think of one- or five-dollar bills, etc.). For example, one bitcoin is equivalent to 100,000,000 satoshis, the smallest denomination of a bitcoin, named for bitcoin’s supposed inventor, Satoshi Nakamoto. This enables transactions smaller than a full coin. The transfer of funds involves “public” and “private” keys, which are lines of code that must match on both sides for the transaction to be completed. Cryptocurrency is stored in the user’s “wallet,” an address or account that can be accessed only by the owner. The wallet has a public key, and the private key is used to sign a transaction, much like you would sign a check or a credit card slip.
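The satoshi arithmetic above is simple to make precise. A small sketch (the helper names are invented) that keeps amounts as integer satoshis to avoid floating-point rounding:

```python
from decimal import Decimal

SATOSHI_PER_BTC = 100_000_000   # one bitcoin = 10^8 satoshis

def btc_to_satoshi(amount_btc: Decimal) -> int:
    """Convert a BTC amount to an integer number of satoshis."""
    satoshis = amount_btc * SATOSHI_PER_BTC
    if satoshis != satoshis.to_integral_value():
        raise ValueError("amount is finer than one satoshi")
    return int(satoshis)

def satoshi_to_btc(satoshis: int) -> Decimal:
    """Convert satoshis back to BTC exactly."""
    return Decimal(satoshis) / SATOSHI_PER_BTC
```

Using `Decimal` rather than `float` matters here: `0.1` has no exact binary representation, so float arithmetic could silently gain or lose satoshis.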

Would-be cryptocurrency users can use exchange websites to convert traditional currency (like euros or dollars) into cryptocurrency tokens. The system that supports cryptocurrencies online is called a blockchain, which is essentially a digital ledger that tracks transactions across the internet. There is a blockchain for each kind of cryptocurrency, which records all transactions using that particular cryptocurrency. What helps make cryptocurrency unique is that there is no central bank or processing center. Instead, the blockchain is built on “distributed ledger” technology, a database shared across a network of sites and servers. By involving many independent machines in each transfer, it creates a traceable trail and reduces the chances that transactions can be disrupted by a cyberattack or data breach, by adding “witnesses” along the way.
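The "digital ledger" idea can be sketched in a few lines: each block stores a hash of the previous block, so altering any past record breaks every later link. This is only a minimal illustration, not a real blockchain (no consensus, mining, or signatures; function names are invented):

```python
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    """Hash a block's data together with the previous block's hash."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(records):
    """Build a list of linked blocks from a sequence of record strings."""
    chain, prev = [], "0" * 64          # fixed genesis hash
    for data in records:
        h = block_hash(prev, data)
        chain.append({"prev": prev, "data": data, "hash": h})
        prev = h
    return chain

def verify_chain(chain):
    """Recompute every hash and check each link to the previous block."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(prev, block["data"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True
```

Tampering with any early record changes its hash, so `verify_chain` fails for every copy of the ledger that still holds the honest links.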

Different types of cryptocurrency (sometimes also referred to as “altcoin”) include bitcoin, Litecoin, Ethereum, Zcash, Dash, Ripple, Monero, NEO, Cardano, and EOS. Because of the high-tech nature of cryptocurrency, new forms are emerging all the time.

The Bitcoin Revolution

The most popular form of cryptocurrency is bitcoin, created by a developer (or group of developers) under the pseudonym “Satoshi Nakamoto” in 2009. Bitcoin’s origins are shrouded in mystery—no one has been able to confirm the identity (or identities) of programmer Satoshi Nakamoto.

Like most digital currencies, bitcoins are not issued or regulated by a government. Instead of a central point of creation (like the United States Mint), bitcoins are created by “mining,” or allowing individuals to contribute their own computers to the transaction network in exchange for bitcoins and access to bitcoin transactions. The supply of bitcoins available is fixed by its developers at 21 million, with the value and the mining rate adjusted with that cap in mind. Bitcoin is by far the most popular cryptocurrency in circulation, although it is not considered legal tender (issued or backed by a government).
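The 21 million cap follows arithmetically from bitcoin's issuance schedule. This sketch assumes the widely documented parameters, which the text above does not spell out: an initial block reward of 50 BTC, halved every 210,000 blocks, with sub-satoshi remainders truncated.

```python
SATOSHI_PER_BTC = 100_000_000
BLOCKS_PER_HALVING = 210_000              # assumed halving interval
INITIAL_SUBSIDY = 50 * SATOSHI_PER_BTC    # assumed initial reward, in satoshis

def total_supply_satoshi() -> int:
    """Sum the block rewards over all halving eras until the reward is 0."""
    total, subsidy = 0, INITIAL_SUBSIDY
    while subsidy > 0:
        total += subsidy * BLOCKS_PER_HALVING
        subsidy //= 2     # integer halving truncates sub-satoshi amounts
    return total
```

The geometric series 50 + 25 + 12.5 + … sums to 100 BTC per block, giving 210,000 × 100 = 21 million BTC in the limit; the truncation to whole satoshis leaves the actual total just under that figure.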

Currency of the Future?

The benefits and drawbacks of digital and cryptocurrencies (particularly bitcoin) have become a hot topic of debate. The security and anonymity of these digital-only currencies and the blockchain make them an appealing option for users who want to make discreet transactions or avoid the fees and bureaucracy of traditional banks and financial systems. Still, some experts believe that the popularity of these cryptocurrencies is more of a trend than a sure bet for the future.

There are also legal and security issues at play here, with anonymous cryptocurrency transactions potentially being used to cover up criminal activity. However, it is likely that the unregulated, Wild West days of cryptocurrency are numbered, with governments looking for ways to regulate and monitor cryptocurrency transactions much like standard financial transactions.


It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi. 

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1196 2021-11-20 13:05:07

Registered: 2005-06-28
Posts: 34,872

Re: Miscellany

1173) Supernova

Supernova, plural supernovae or supernovas, any of a class of violently exploding stars whose luminosity after eruption suddenly increases to many millions of times its normal level.

The term supernova is derived from nova (Latin: “new”), the name for another type of exploding star. Supernovae resemble novae in several respects. Both are characterized by a tremendous, rapid brightening lasting for a few weeks, followed by a slow dimming. Spectroscopically, they show blue-shifted emission lines, which imply that hot gases are blown outward. But a supernova explosion, unlike a nova outburst, is a cataclysmic event for a star, one that essentially ends its active (i.e., energy-generating) lifetime. When a star “goes supernova,” considerable amounts of its matter, equaling the material of several Suns, may be blasted into space with such a burst of energy as to enable the exploding star to outshine its entire home galaxy.

Supernovae explosions release not only tremendous amounts of radio waves and X-rays but also cosmic rays. Some gamma-ray bursts have been associated with supernovae. Supernovae also release many of the heavier elements that make up the components of the solar system, including Earth, into the interstellar medium. Spectral analyses show that abundances of the heavier elements are greater than normal, indicating that these elements do indeed form during the course of the explosion. The shell of a supernova remnant continues to expand until, at a very advanced stage, it dissolves into the interstellar medium.

Historical supernovae

Historically, only seven supernovae are known to have been recorded before the early 17th century. The most famous of them occurred in 1054 and was seen in one of the horns of the constellation Taurus. The remnants of this explosion are visible today as the Crab Nebula, which is composed of glowing ejecta of gases flying outward in an irregular fashion and a rapidly spinning, pulsating neutron star, called a pulsar, in the centre. The supernova of 1054 was recorded by Chinese and Korean observers; it also may have been seen by southwestern American Indians, as suggested by certain rock paintings discovered in Arizona and New Mexico. It was bright enough to be seen during the day, and its great luminosity lasted for weeks. Other prominent supernovae are known to have been observed from Earth in 185, 393, 1006, 1181, 1572, and 1604.

The closest and most easily observed of the hundreds of supernovae that have been recorded since 1604 was first sighted on the morning of Feb. 24, 1987, by the Canadian astronomer Ian K. Shelton while working at the Las Campanas Observatory in Chile. Designated SN 1987A, this formerly extremely faint object attained a magnitude of 4.5 within just a few hours, thus becoming visible to the unaided eye. The newly appearing supernova was located in the Large Magellanic Cloud at a distance of about 160,000 light-years. It immediately became the subject of intense observation by astronomers throughout the Southern Hemisphere and was observed by the Hubble Space Telescope. SN 1987A’s brightness peaked in May 1987, with a magnitude of about 2.9, and slowly declined in the following months.
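The magnitudes quoted for SN 1987A translate into brightness ratios through the standard astronomical magnitude scale, in which a difference of 5 magnitudes is exactly a factor of 100 in flux. A short sketch (the function name is invented):

```python
def flux_ratio(fainter_mag: float, brighter_mag: float) -> float:
    """Brightness ratio implied by two apparent magnitudes.

    Magnitudes run backwards: smaller numbers are brighter, and each
    magnitude step is a factor of 100**(1/5), about 2.512, in flux.
    """
    return 10 ** (0.4 * (fainter_mag - brighter_mag))

# SN 1987A brightened from magnitude 4.5 at discovery to about 2.9 at peak,
# i.e. by roughly a factor of 4.4 in flux.
peak_vs_discovery = flux_ratio(4.5, 2.9)
```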

Types of supernovae

Supernovae may be divided into two broad classes, Type I and Type II, according to the way in which they detonate. Type I supernovae may be up to three times brighter than Type II; they also differ from Type II supernovae in that their spectra contain no hydrogen lines and they expand about twice as rapidly.

Type II supernovae

The so-called classic explosion, associated with Type II supernovae, has as progenitor a very massive star (a Population I star) of at least eight solar masses that is at the end of its active lifetime. (These are seen only in spiral galaxies, most often near the arms.) Until this stage of its evolution, the star has shone by means of the nuclear energy released at and near its core in the process of squeezing and heating lighter elements such as hydrogen or helium into successively heavier elements—i.e., in the process of nuclear fusion. Forming elements heavier than iron absorbs rather than produces energy, however, and, since energy is no longer available, an iron core is built up at the centre of the aging, heavyweight star. When the iron core becomes too massive, its ability to support itself by means of the outward explosive thrust of internal fusion reactions fails to counteract the tremendous pull of its own gravity. Consequently, the core collapses. If the core’s mass is less than about three solar masses, the collapse continues until the core reaches a point at which its constituent nuclei and free electrons are crushed together into a hard, rapidly spinning core. This core consists almost entirely of neutrons, which are compressed in a volume only 20 km (12 miles) across but whose combined weight equals that of several Suns. A teaspoonful of this extraordinarily dense material would weigh 50 billion tons on Earth. Such an object is called a neutron star.

The supernova detonation occurs when material falls in from the outer layers of the star and then rebounds off the core, which has stopped collapsing and suddenly presents a hard surface to the infalling gases. The shock wave generated by this collision propagates outward and blows off the star’s outer gaseous layers. The amount of material blasted outward depends on the star’s original mass.

If the core mass exceeds three solar masses, the core collapse is too great to produce a neutron star; the imploding star is compressed into an even smaller and denser body—namely, a black hole. Infalling material disappears into the black hole, the gravitational field of which is so intense that not even light can escape. The entire star is not taken in by the black hole, since much of the falling envelope of the star either rebounds from the temporary formation of a spinning neutron core or misses passing through the very centre of the core and is spun off instead.

Type I supernovae

Type I supernovae can be divided into three subgroups—Ia, Ib, and Ic—on the basis of their spectra. The exact nature of the explosion mechanism in Type I generally is still uncertain, although Ia supernovae, at least, are thought to originate in binary systems consisting of a moderately massive star and a white dwarf, with material flowing to the white dwarf from its larger companion. A thermonuclear explosion results if the flow of material is sufficient to raise the mass of the white dwarf above the Chandrasekhar limit of 1.44 solar masses. Unlike the case of an ordinary nova, for which the mass flow is less and only a superficial explosion results, the white dwarf in a Ia supernova explosion is presumably destroyed completely. Radioactive elements, notably nickel-56, are formed. When nickel-56 decays to cobalt-56 and the latter to iron-56, significant amounts of energy are released, providing perhaps most of the light emitted during the weeks following the explosion.
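The nickel-56 → cobalt-56 → iron-56 chain described above can be modeled with the standard two-step radioactive decay (Bateman) equations. The half-lives used here, about 6.1 days for Ni-56 and 77.2 days for Co-56, are commonly quoted values that the text does not state:

```python
import math

HALF_LIFE_NI56 = 6.1    # days (commonly quoted value; assumption)
HALF_LIFE_CO56 = 77.2   # days (commonly quoted value; assumption)

def decay_constant(half_life: float) -> float:
    return math.log(2) / half_life

def ni56_fraction(t: float) -> float:
    """Fraction of the initial Ni-56 remaining after t days."""
    return math.exp(-decay_constant(HALF_LIFE_NI56) * t)

def co56_fraction(t: float) -> float:
    """Co-56 per initial Ni-56 nucleus after t days (Bateman solution)."""
    l1 = decay_constant(HALF_LIFE_NI56)
    l2 = decay_constant(HALF_LIFE_CO56)
    return l1 / (l2 - l1) * (math.exp(-l1 * t) - math.exp(-l2 * t))
```

Weeks after the explosion the Ni-56 is mostly gone and the slower Co-56 decay dominates the energy input, which is consistent with the slow decline of the light curve in the weeks following the explosion.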

Type Ia supernovae are useful probes of the structure of the universe, since they all have nearly the same peak luminosity. By measuring the apparent brightness of these objects, one also measures the expansion rate of the universe and that rate’s variation with time. Dark energy, a repulsive force that is the dominant component (73 percent) of the universe, was discovered in 1998 with this method. Type Ia supernovae that exploded when the universe was only two-thirds of its present size were fainter and thus farther away than they would be in a universe without dark energy. This implies that the expansion rate of the universe is faster now than it was in the past, a result of the current dominance of dark energy. (Dark energy was negligible in the early universe.)
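The "standard candle" measurement works through the distance modulus m − M = 5 log10(d / 10 pc). A sketch, assuming the commonly quoted Type Ia peak absolute magnitude of about −19.3 (a value the text does not give):

```python
M_TYPE_IA = -19.3   # assumed peak absolute magnitude of a Type Ia supernova

def distance_parsecs(apparent_mag: float,
                     absolute_mag: float = M_TYPE_IA) -> float:
    """Distance from the distance modulus m - M = 5*log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)
```

A Type Ia observed at apparent magnitude +5.7 would then lie at 10**((5.7 + 19.3 + 5) / 5) = 10**6 parsecs, i.e. one megaparsec; comparing such distances against redshifts is what revealed the accelerating expansion.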


A supernova is a powerful and luminous stellar explosion. This transient astronomical event occurs during the last evolutionary stages of a massive star or when a white dwarf is triggered into runaway nuclear fusion. The original object, called the progenitor, either collapses to a neutron star or black hole, or is completely destroyed. The peak optical luminosity of a supernova can be comparable to that of an entire galaxy before fading over several weeks or months.

Supernovae are more energetic than novae. In Latin, nova means "new", referring astronomically to what appears to be a temporary new bright star. Adding the prefix "super-" distinguishes supernovae from ordinary novae, which are far less luminous. The word supernova was coined by Walter Baade and Fritz Zwicky in 1929.

The most recent directly observed supernova in the Milky Way was Kepler's Supernova in 1604, but the remnants of more recent supernovae have been found. Observations of supernovae in other galaxies suggest they occur in the Milky Way on average about three times every century. These supernovae would almost certainly be observable with modern astronomical telescopes. The most recent naked-eye supernova was SN 1987A, the explosion of a blue supergiant star in the Large Magellanic Cloud, a satellite of the Milky Way.

Theoretical studies indicate that most supernovae are triggered by one of two basic mechanisms: the sudden re-ignition of nuclear fusion in a degenerate star such as a white dwarf, or the sudden gravitational collapse of a massive star's core. In the first class of events, the object's temperature is raised enough to trigger runaway nuclear fusion, completely disrupting the star. Possible causes are an accumulation of material from a binary companion through accretion, or a stellar merger. In the massive star case, the core of a massive star may undergo sudden collapse, releasing gravitational potential energy as a supernova. While some observed supernovae are more complex than these two simplified theories, the astrophysical mechanics are established and accepted by the astronomical community.

Supernovae can expel several solar masses of material at speeds up to several percent of the speed of light. This drives an expanding shock wave into the surrounding interstellar medium, sweeping up an expanding shell of gas and dust observed as a supernova remnant. Supernovae are a major source of elements in the interstellar medium from oxygen to rubidium. The expanding shock waves of supernovae can trigger the formation of new stars. Supernova remnants might be a major source of cosmic rays. Supernovae might produce gravitational waves, though thus far, gravitational waves have been detected only from the mergers of black holes and neutron stars.




#1197 2021-11-21 11:20:07

Registered: 2005-06-28
Posts: 34,872

Re: Miscellany

1174) Insomnia

Insomnia, also known as sleeplessness, is a sleep disorder in which people have trouble sleeping. They may have difficulty falling asleep, or staying asleep as long as desired. Insomnia is typically followed by daytime sleepiness, low energy, irritability, and a depressed mood. It may result in an increased risk of motor vehicle collisions, as well as problems focusing and learning. Insomnia can be short term, lasting for days or weeks, or long term, lasting more than a month.

Insomnia can occur independently or as a result of another problem. Conditions that can result in insomnia include psychological stress, chronic pain, heart failure, hyperthyroidism, heartburn, restless legs syndrome, menopause, certain medications, and drugs such as caffeine, nicotine, and alcohol. Other risk factors include working night shifts and sleep apnea. Diagnosis is based on sleep habits and an examination to look for underlying causes. A sleep study may be done to look for underlying sleep disorders. Screening may be done with two questions: "do you experience difficulty sleeping?" and "do you have difficulty falling or staying asleep?"

Although their efficacy as first line treatments is not unequivocally established, sleep hygiene and lifestyle changes are typically the first treatment for insomnia. Sleep hygiene includes a consistent bedtime, a quiet and dark room, exposure to sunlight during the day and regular exercise. Cognitive behavioral therapy may be added to this. While sleeping pills may help, they are sometimes associated with injuries, dementia, and addiction. These medications are not recommended for more than four or five weeks. The effectiveness and safety of alternative medicine is unclear.

Between 10% and 30% of adults have insomnia at any given point in time and up to half of people have insomnia in a given year. About 6% of people have insomnia that is not due to another problem and lasts for more than a month. People over the age of 65 are affected more often than younger people. Females are more often affected than males. Descriptions of insomnia occur at least as far back as ancient Greece.

Insomnia, the inability to sleep adequately. Causes may include poor sleeping conditions, circulatory or brain disorders, a respiratory disorder known as apnea, stress, or other physical or mental disorders. Insomnia is not harmful if it is only occasional; the body is readily restored by a few hours of extra sleep. If, however, it is regular or frequent, insomnia may have harmful effects on other systems and functions of the body.

Treatment of mild insomnia may involve simple improvement of sleeping conditions or such traditional remedies as warm baths, warm milk, or relaxation. Chronic insomnia may require the temporary use of sedatives, hypnosis, or psychotherapy; apnea and its associated insomnia may be treated surgically. The prolonged use of sleeping pills as a relief from frequent or recurring insomnia can have harmful effects. The body tends to build up a tolerance to the medication, necessitating a more potent dosage in order to fall asleep; habitual use can lead to addiction.

What Is Insomnia?

Insomnia is a sleep disorder in which you have trouble falling and/or staying asleep.

The condition can be short-term (acute) or can last a long time (chronic). It may also come and go.

Acute insomnia lasts from 1 night to a few weeks. Insomnia is chronic when it happens at least 3 nights a week for 3 months or more.

Types of Insomnia

There are two types of insomnia: primary and secondary.

* Primary insomnia: This means your sleep problems aren’t linked to any other health condition or problem.
* Secondary insomnia: This means you have trouble sleeping because of a health condition (like asthma, depression, arthritis, cancer, or heartburn); pain; medication; or substance use (like alcohol).

You might also hear about:

* Sleep-onset insomnia: This means you have trouble getting to sleep.
* Sleep-maintenance insomnia: This happens when you have trouble staying asleep through the night or wake up too early.
* Mixed insomnia: With this type of insomnia, you have trouble both falling asleep and staying asleep through the night. 
* Paradoxical insomnia: When you have paradoxical insomnia, you underestimate the time you're asleep. It feels like you sleep a lot less than you really do.

Insomnia Causes

Primary causes of insomnia include:

* Stress related to big life events, like a job loss or change, the death of a loved one, divorce, or moving
* Things around you like noise, light, or temperature
* Changes to your sleep schedule like jet lag, a new shift at work, or bad habits you picked up when you had other sleep problems
* Your genes. Research has found that a tendency for insomnia may run in families.

Secondary causes of insomnia include:

* Mental health issues like depression and anxiety
* Medications for colds, allergies, depression, high blood pressure, and asthma
* Pain or discomfort at night
* Caffeine, tobacco, or alcohol use, as well as use of illicit drugs
* Hyperthyroidism and other endocrine problems
* Other sleep disorders, like sleep apnea or restless legs syndrome
* Pregnancy
* Alzheimer's disease and other types of dementia
* PMS and menopause

Insomnia Risk Factors

Insomnia affects women more than men and older people more than younger ones. Young and middle-aged African Americans also have a higher risk.

Other risk factors include:

* Long-term illness
* Mental health issues
* Working night shifts or shifts that rotate

Insomnia Symptoms

Symptoms of insomnia include:

* Sleepiness during the day
* Fatigue
* Grumpiness
* Problems with concentration or memory

Insomnia Diagnosis

Your doctor will do a physical exam and ask about your medical history and sleep history.

They might tell you to keep a sleep diary for a week or two, keeping track of your sleep patterns and how you feel during the day. They may talk to your bed partner about how much and how well you’re sleeping. You might also have special tests at a sleep center.

Insomnia Treatment

Acute insomnia may not need treatment.

If it’s hard for you to do everyday activities because you’re tired, your doctor may prescribe sleeping pills for a short time. Medicines that work quickly but briefly can help you avoid problems like drowsiness the next day.

Don’t use over-the-counter sleeping pills for insomnia. They might have side effects, and they tend to work less well over time.

For chronic insomnia, you’ll need treatment for the conditions or health problems that are keeping you awake. Your doctor might also suggest behavioral therapy. This can help you change the things you do that make insomnia worse and learn what you can do to promote sleep.

Insomnia Complications

Our bodies and brains need sleep so they can repair themselves. It’s also crucial for learning and keeping memories. If insomnia is keeping you awake, you could have:

* A higher risk of health problems like high blood pressure, obesity, and depression
* A higher risk of falling, if you’re an older woman
* Trouble focusing
* Anxiety
* Grumpiness

Insomnia Prevention

Good sleep habits, also called sleep hygiene, can help you beat insomnia. Here are some tips:

* Go to sleep at the same time each night, and get up at the same time each morning. Try not to take naps during the day, because they may make you less sleepy at night.
* Don’t use phones or e-books before bed. Their light can make it harder to fall asleep.
* Avoid caffeine, nicotine, and alcohol late in the day. Caffeine and nicotine are stimulants and can keep you from falling asleep. Alcohol can make you wake up in the middle of the night and hurt your sleep quality.
* Get regular exercise. Try not to work out close to bedtime, because it may make it hard to fall asleep. Experts suggest exercising at least 3 to 4 hours before bed.
* Don't eat a heavy meal late in the day. But a light snack before bedtime may help you sleep.
* Make your bedroom comfortable: dark, quiet, and not too warm or too cold. If light is a problem, use a sleeping mask. To cover up sounds, try earplugs, a fan, or a white noise machine.
* Follow a routine to relax before bed. Read a book, listen to music, or take a bath.
* Don’t use your bed for anything other than sleep and sex.
* If you can't fall asleep and aren’t drowsy, get up and do something calming, like reading until you feel sleepy.
* If you tend to lie awake and worry about things, make a to-do list before you go to bed. This may help you put your concerns aside for the night.




#1198 2021-11-22 10:59:22

Registered: 2005-06-28
Posts: 34,872

Re: Miscellany

1175) Islets of Langerhans

Islets of Langerhans, also called islands of Langerhans, irregularly shaped patches of endocrine tissue located within the pancreas of most vertebrates. They are named for the German physician Paul Langerhans, who first described them in 1869. The normal human pancreas contains about 1 million islets. The islets consist of four distinct cell types, of which three (alpha, beta, and delta cells) produce important hormones; the fourth component (C cells) has no known function.

The most common islet cell, the beta cell, produces insulin, the major hormone in the regulation of carbohydrate, fat, and protein metabolism. Insulin is crucial in several metabolic processes: it promotes the uptake and metabolism of glucose by the body’s cells; it prevents release of glucose by the liver; it causes muscle cells to take up amino acids, the basic components of protein; and it inhibits the breakdown and release of fats. The release of insulin from the beta cells can be triggered by growth hormone (somatotropin) or by glucagon, but the most important stimulator of insulin release is glucose; when the blood glucose level increases—as it does after a meal—insulin is released to counter it. The inability of the islet cells to make insulin or the failure to produce amounts sufficient to control blood glucose level are the causes of diabetes mellitus.

The alpha cells of the islets of Langerhans produce an opposing hormone, glucagon, which releases glucose from the liver and fatty acids from fat tissue. In turn, glucose and free fatty acids favour insulin release and inhibit glucagon release. The delta cells produce somatostatin, a strong inhibitor of somatotropin, insulin, and glucagon; its role in metabolic regulation is not yet clear. Somatostatin is also produced by the hypothalamus and functions there to inhibit secretion of growth hormone by the pituitary gland.

Pancreatic islets

The pancreatic islets or islets of Langerhans are the regions of the pancreas that contain its endocrine (hormone-producing) cells, discovered in 1869 by German pathological anatomist Paul Langerhans. The pancreatic islets constitute 1–2% of the pancreas volume and receive 10–15% of its blood flow. They are scattered throughout the human pancreas and are important in the metabolism of glucose.


There are about 1 million islets scattered throughout the pancreas of a healthy adult human, each measuring an average of about 0.2 mm in diameter. Each is separated from the surrounding pancreatic tissue by a thin fibrous connective tissue capsule which is continuous with the fibrous connective tissue that is interwoven throughout the rest of the pancreas.


Hormones produced in the pancreatic islets are secreted directly into the blood flow by (at least) five types of cells. In rat islets, endocrine cell types are distributed as follows:

* Alpha cells producing glucagon (20% of total islet cells)
* Beta cells producing insulin and amylin (≈70%)
* Delta cells producing somatostatin (<10%)
* Epsilon cells producing ghrelin (<1%)
* PP cells (gamma cells or F cells) producing pancreatic polypeptide (<5%)

It has been recognized that the cytoarchitecture of pancreatic islets differs between species. In particular, while rodent islets are characterized by a predominant proportion of insulin-producing beta cells in the core of the cluster and by scarce alpha, delta, and PP cells in the periphery, human islets display alpha and beta cells in close relationship with each other throughout the cluster. The proportion of beta cells in islets varies depending on the species; in humans it is about 40–50%. In addition to endocrine cells, there are stromal cells (fibroblasts), vascular cells (endothelial cells, pericytes), immune cells (granulocytes, lymphocytes, macrophages, dendritic cells), and neural cells.

A large amount of blood flows through the islets, 5–6 mL/min per 1 g of islet. It is up to 15 times more than in exocrine tissue of the pancreas.

Islets can influence each other through paracrine and autocrine communication, and beta cells are coupled electrically to six to seven other beta cells, but not to other cell types.


The paracrine feedback system of the pancreatic islets has the following structure:

* Glucose/Insulin: activates beta cells and inhibits alpha cells
* Glycogen/Glucagon: activates alpha cells, which in turn activate beta cells and delta cells
* Somatostatin: inhibits alpha cells and beta cells

A large number of G protein-coupled receptors (GPCRs) regulate the secretion of insulin, glucagon and somatostatin from pancreatic islets, and some of these GPCRs are the targets of drugs used to treat type 2 diabetes (e.g., GLP-1 receptor agonists and DPP-IV inhibitors).

Electrical activity

Electrical activity of pancreatic islets has been studied using patch clamp techniques. It has turned out that the behavior of cells in intact islets differs significantly from the behavior of dispersed cells.

Clinical significance


The beta cells of the pancreatic islets secrete insulin, and so play a significant role in diabetes. It is thought that they are destroyed by immune assaults. However, there are also indications that beta cells have not been destroyed but have only become non-functional.


Because the beta cells in the pancreatic islets are selectively destroyed by an autoimmune process in type 1 diabetes, clinicians and researchers are actively pursuing islet transplantation as a means of restoring physiological beta cell function, which would offer an alternative to a complete pancreas transplant or artificial pancreas. Islet transplantation emerged as a viable option for the treatment of insulin-requiring diabetes in the early 1970s, with steady progress over the following three decades. Recent clinical trials have shown that insulin independence and improved metabolic control can be reproducibly obtained after transplantation of cadaveric donor islets into patients with unstable type 1 diabetes.

People with a high body mass index (BMI) are unsuitable as whole-pancreas donors because of the greater technical complications during transplantation. However, because their pancreas is larger, it is possible to isolate a larger number of islets from it, which makes them more suitable as islet donors.

Islet transplantation involves the transfer of only the tissue containing the beta cells that are needed to treat this disease. It thus has an advantage over whole-pancreas transplantation, which is more technically demanding and carries a risk of, for example, pancreatitis leading to organ loss. Another advantage is that patients do not require general anesthesia.

Islet transplantation for type 1 diabetes currently requires potent immunosuppression to prevent host rejection of donor islets.

The islets are infused through the portal vein and become engrafted in the liver. There is a risk of portal vein branch thrombosis, and islet survival is low in the first minutes after transplantation because the vascular density at this site remains, for several months after surgery, lower than that of endogenous islets. Neovascularization is therefore key to islet survival; it is supported, for example, by VEGF produced by the islets and by vascular endothelial cells. However, intraportal transplantation has other shortcomings as well, so alternative sites that would provide a better microenvironment for islet implantation are being examined. Islet transplant research also focuses on islet encapsulation, calcineurin-inhibitor-free (CNI-free) immunosuppression, biomarkers of islet damage, and the islet donor shortage.

An alternative source of beta cells, such as insulin-producing cells derived from adult stem cells or progenitor cells, would help overcome the shortage of donor organs for transplantation. The field of regenerative medicine is evolving rapidly and offers great hope for the near future. However, type 1 diabetes is the result of the autoimmune destruction of beta cells in the pancreas, so an effective cure will require a sequential, integrated approach that combines adequate and safe immune interventions with beta cell regenerative approaches. It has also been demonstrated that alpha cells can spontaneously switch fate and transdifferentiate into beta cells in both healthy and diabetic human and mouse pancreatic islets, a possible future source for beta cell regeneration. In fact, islet morphology and endocrine differentiation have been found to be directly related: endocrine progenitor cells differentiate by migrating in cohesion and forming bud-like islet precursors, or "peninsulas", in which alpha cells constitute the outer layer and beta cells form later beneath them.


It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi. 

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1199 2021-11-23 13:19:16

Registered: 2005-06-28
Posts: 34,872

Re: Miscellany

1176) Relay race

Relay race, also called Relay, a track-and-field sport consisting of a set number of stages (legs), usually four, each leg run by a different member of a team. The runner finishing one leg is usually required to pass on a baton to the next runner while both are running in a marked exchange zone.

In most relays, team members cover equal distances: Olympic events for both men and women are the 400-metre (4 × 100-metre) and 1,600-metre (4 × 400-metre) relays. Some non-Olympic relays are held at distances of 800 m, 3,200 m, and 6,000 m. In the less frequently run medley relays, however, the athletes cover different distances in a prescribed order—as in a sprint medley of 200, 200, 400, 800 metres or a distance medley of 1,200, 400, 800, 1,600 metres.

The relay method of racing was started in the United States about 1883. Originally, each runner carried a small flag over his stage of the course, taking it from the arriving runner and handing it on to the awaiting next runner at the end of his leg. The flags, however, were considered cumbersome, and for a time it was sufficient for the outgoing runner to touch or be touched by his predecessor.

The baton, a hollow cylinder of wood or plastic, was introduced in 1893. It is carried by the runner and must be exchanged between lines drawn at right angles to the side of the track 10 metres or 11 yards on each side of the starting line for each leg of the relay. In sprint relays (400 and 800 metres) a 1964 rule change permitted the runner receiving the baton to start his run 10 metres or 11 yards before the zone, but he had to take the baton within the zone itself.


A relay race is a racing competition where members of a team take turns completing parts of a racecourse or performing a certain action. Relay races take the form of professional races and amateur games. Relay races are common in running, orienteering, swimming, cross-country skiing, biathlon, and ice skating (usually with a baton in the fist). In the Olympic Games, there are several types of relay races that are part of track and field.

Relays in swimming

A swimming relay of four swimmers usually follows this strategy: second-fastest, third-fastest, slowest, then fastest (anchor). However, it is not uncommon to see either the slowest swimmer racing in the second slot (creating an order of second-fastest, slowest, third-fastest, and then fastest), or an order from slowest to fastest (an order of slowest, third-fastest, second-fastest, fastest).

FINA rules require that a foot of the second, third or fourth swimmer be in contact with the platform while (and before) the incoming teammate is touching the wall; the starting swimmer may, however, already be in motion, which saves 0.6–1.0 seconds compared with a regular start. In addition, many swimmers perform better in a relay than in an individual race owing to the team atmosphere. As a result, relay times are typically 2–3 seconds faster than the sum of the best individual times of the swimmers.
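Those two figures are consistent with each other: only the three non-lead-off swimmers benefit from a flying start, so a saving of 0.6–1.0 seconds per exchange predicts a total relay advantage of about 1.8–3.0 seconds. A minimal arithmetic check:

```python
# Flying-start saving per exchange, as quoted above (seconds).
saving_low, saving_high = 0.6, 1.0

# Only swimmers 2-4 get a flying start; the lead-off uses a regular start.
flying_starts = 3

total_low = flying_starts * saving_low
total_high = flying_starts * saving_high
print(f"Predicted relay advantage: {total_low:.1f}-{total_high:.1f} s")
```

This reproduces the 2–3 second range quoted for relay times versus summed individual bests.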

In medley swimming, each swimmer uses a different stroke (in this order): backstroke, breaststroke, butterfly, and freestyle, with the added limitation that the freestyle swimmer cannot use any of the first three strokes. At competitive levels, essentially all freestyle swimmers use the front crawl. Note that this order is different from that for the individual medley, in which a single swimmer swims butterfly, backstroke, breaststroke, and freestyle in a single race, in that order.

The three standard relays raced at the Olympics are the 4 × 100 m freestyle relay, 4 × 200 m freestyle relay and 4 × 100 m medley relay.

Mixed-gendered relays were introduced at the 2014 FINA World Swimming Championships (25 m) (4 × 50 m freestyle and medley) and the 2015 World Aquatics Championships (4 × 100 m freestyle and medley). The event debuted at the 2020 Summer Olympics (4 × 100 m medley).

In open water swimming, mixed-gendered relays were introduced at the 2011 World Aquatics Championships (4 × 1250 m).

Relays in athletics

In athletics, the two standard relays are the 4 × 100 metres relay and the 4 × 400 metres relay. 4 × 200, 4 × 800, and 4 × 1500 m relays exist as well, but they are rarer. Mixed-gendered 4 × 400 metres relays were introduced at the 2017 IAAF World Relays, repeated at the 2018 Asian Games and the 2019 World Championships in Athletics, and were added to the 2020 Summer Olympics. In addition, 2 × 2 × 400 m and shuttle hurdles mixed relay races were introduced at the 2019 IAAF World Relays.

Traditionally, the 4 × 400 m relay finals are the last event of a track meet and are often met with a very enthusiastic crowd, especially if the last leg is a close race. It is hard to measure exact splits in a 4 × 400 (or a 4 × 100) relay. For example, if a team ran a 3-minute 4 × 400, it does not mean every runner on the team can run a 45-second open 400, because a runner starts accelerating before receiving the baton, which allows for slightly slower open 400 times. A 4 × 400 relay generally starts in lanes for the first leg, including the handoff. The second-leg runners then remain in lanes for the first 100 metres, after which they may break for the first lane on the backstretch, as long as they do not interfere with other runners. A race organizer then arranges the third-leg runners in a line according to the order in which their teams are running (with first place closest to the inside). The faster teams pass first, while the slower teams slide in to the inside lanes as they become available.
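The split arithmetic above can be made concrete: a 3-minute team time averages out to 45-second relay splits, but because a runner on legs 2–4 is already moving when the baton arrives, that split corresponds to a somewhat slower open 400. A minimal sketch, where the half-second flying-start credit is a made-up illustrative value, not an official figure:

```python
def open_equivalent(relay_split_s, flying_start_credit_s=0.5):
    """Rough open-400 equivalent of a relay split. Legs 2-4 take the
    baton at speed, so a relay split understates the runner's open
    time; the 0.5 s credit here is an illustrative assumption."""
    return relay_split_s + flying_start_credit_s

team_time_s = 180.0           # a 3-minute 4 x 400 m relay
avg_split = team_time_s / 4   # 45.0 s average per leg
print(avg_split, open_equivalent(avg_split))
```

Under this assumption, a 45.0 s relay split for legs 2–4 would be roughly equivalent to a 45.5 s open 400.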

According to the IAAF rules, world records in relays can only be set if all team members have the same nationality. Several superior marks were established by teams from a mixture of countries and were thus never ratified.

Major USA Track and Field events, e.g. the Penn Relays, Drake Relays, Kansas Relays, Mt. SAC Relays, Modesto Relays, Texas Relays, and West Coast Relays, include different types of relays.

Rules and strategy

Each runner must hand off the baton to the next runner within a certain zone, usually marked by triangles on the track. In sprint relays, runners typically use a "blind handoff", where the second runner stands on a spot predetermined in practice and starts running when the first runner hits a visual mark on the track (usually a smaller triangle). The second runner opens their hand behind them after a few strides, by which time the first runner should have caught up and be able to hand off the baton. Usually a runner gives an auditory signal, such as "Stick!" repeated several times, for the recipient of the baton to put out a hand. In middle-distance and longer relays, runners begin by jogging while looking back at the incoming runner and holding out a hand for the baton.

A team may be disqualified from a relay for:

* Losing the baton (simply dropping the baton does not result in disqualification)
* Making an improper baton pass, especially when not passing in the exchange zone
* False starting (usually once but sometimes twice)
* Improperly overtaking another competitor
* Preventing another competitor from passing
* Wilfully impeding, improperly crossing the course, or in any other way interfering with another competitor

Based on the speed of the runners, the generally accepted strategy for setting up a four-person relay team is: second-fastest, third-fastest, slowest, then fastest (anchor); however, some teams (usually middle school or young high school) use second-fastest, slowest, third-fastest, then fastest (anchor). If one runner is better out of the starting blocks than the others, he is sometimes moved to the first spot, because it is the only leg that uses starting blocks.


The largest relay event in the world is the Norwegian Holmenkollstafetten: on May 10, 2014, 2,944 teams of 15 runners (44,160 competitors in total) started and ended at Bislett Stadium in Oslo.

Another large relay event is the Penn Relays, which attracts over 15,000 competitors annually on the high-school, collegiate and professional levels, and over its three days attracts upwards of 100,000 spectators. It is credited with popularizing relay racing in the sport of track & field.

Long-distance relays

Long-distance relays have become increasingly popular with runners of all skill levels. These relays typically have 5 to 36 legs, each usually between 5 and 10 km (3.1 and 6.2 miles) long, though sometimes as long as 16 km (9.9 mi).

The IAAF World Road Relay Championships was held from 1986 to 1998, with six-member teams covering the classic 42.195-kilometre (26.219 mi) marathon distance.

Races under 100 kilometres (62 mi) are run in a day, with each runner covering one or two legs. Longer relays are run overnight, with each runner typically covering three legs.

The world's longest relay race was Japan's Prince Takamatsu Cup Nishinippon Round-Kyūshū Ekiden, which began in Nagasaki and continued for 1,064 kilometres (661 mi).

Cross-country relays

For the 2017 IAAF World Cross Country Championships, a mixed relay race was added (4 × 2 km).

The Crusader Team Sprint Cross Country Relay Race is a fun and unique event specifically designed to get runners familiar with distance running and excited for the rest of the cross-country season. Teams are pairs of runners who run four loops of a 1-mile course: Runner "A" runs loop 1 and hands off to Runner "B", who runs the same loop and hands back to "A"; "A" runs one more loop, hands off to "B", and "B" finishes. There are three race categories (boys, girls, and co-ed), with awards given in each.

Shuttle hurdle relay

The shuttle hurdle relay is a men's and women's competition held at relay meetings such as the Drake Relays and Penn Relays. A mixed version was introduced at the 2019 IAAF World Relays; it consists of a race in which two men and two women on each team each run 110 m hurdles.

Medley relay

Medley relay events are also occasionally held in track meets, usually consisting of teams of four runners running progressively longer distances. The distance medley relay consists of four legs run at distances of 1,200, 400, 800, and 1,600 metres, in that order. The sprint medley relay usually consists of four legs run at distances of 400, 200, 200, and 800 metres, though a more uncommon variant of 200, 100, 100 and 400 metres (sometimes called a short sprint medley) also exists. See also Swedish relay.

Relays on coinage

Relay race events have been selected as a main motif in numerous collectors' coins. A recent example is the €10 Greek Relays commemorative coin, minted in 2003 to commemorate the 2004 Summer Olympics. On the obverse of the coin, three modern athletes run holding their batons, while in the background three ancient athletes are shown running the race known as the dolichos (a semi-endurance race of approximately 3,800 metres).

Relays in skiing

Cross-country skiing

The FIS Nordic World Ski Championships have featured a men's relay race since 1933 and a women's race since 1954. Each team has four skiers, each of whom must complete 10 kilometres / 6.2 miles (men) or 5 kilometres / 3.1 miles (women).


In biathlon, the relay race features a mass start, with teams consisting of four biathletes. Each competitor must complete 7.5 kilometres / 4.66 miles (men) or 6.0 kilometres / 3.73 miles (women). Each leg is run over three laps, with two shooting rounds: one prone, one standing.

A mixed biathlon relay race was first held at the Biathlon World Championships 2005 in Khanty-Mansiysk, and it was added to the 2014 Winter Olympics.

Relays in orienteering

There are two major relays in orienteering:

* Tiomila in April/May in Sweden
* Jukola and Venla relay in June in Finland

There are other relays in autumn with requirements about the age and gender distributions:

* Halikko relay, near Salo, Finland
* 25-manna, near Stockholm, Sweden

Other relays

The World Triathlon Mixed Relay Championships is a mixed-gendered relay triathlon race held since 2009. Previously, the Triathlon Team World Championships were held in 2003, 2006 and 2007. The triathlon at the Youth Olympic Games has also included a mixed relay race since 2010, and the event was introduced at the 2020 Summer Olympics. As in standard triathlons, each triathlon competitor must complete a segment of swimming, cycling and running.

The madison is a track cycling event in which two riders take turns completing the race. Riders can alternate at any moment by touching their partner with the hand. The madison has been featured at the UCI Track Cycling World Championships since 1995 and at the Olympics since 2000; the format has also been used in six-day racing. In road racing, the Duo Normand is a two-man time-trial relay held annually in Normandy, France. In mountain biking, the UCI Mountain Bike World Championships have included a mixed team relay race since 1999.

The game show Triple Threat had a bonus round called the "Triple Threat Relay Round", which was played like a relay race: the winning team had to take turns matching song titles to their corresponding musical artists.




#1200 2021-11-24 19:24:34

Registered: 2005-06-28
Posts: 34,872

Re: Miscellany

1177) Beeswax

Beeswax, commercially useful animal wax secreted by the worker bee to make the cell walls of the honeycomb. Beeswax ranges from yellow to almost black in colour, depending on such factors as the age and diet of the bees, and it has a somewhat honeylike odour and a faint balsamic taste. It is soft to brittle, with a specific gravity of about 0.95 and a melting point of more than 140 °F (60 °C), and it consists mainly of free cerotic acid and myricin (myricyl palmitate), with some high-carbon paraffins. Although insoluble in water, it can be dissolved in such substances as carbon tetrachloride, chloroform, or warm ether. Wax obtained from bees of East Asia may be somewhat different from that of the common, or Western, honeybee.

It is estimated that a bee consumes 6 to 10 pounds (2.7 to 4.5 kg) of honey for each pound of the wax that it secretes in small flakes from glands on the underside of its abdomen. The beeswax is obtained, after removal of the honey, by melting the honeycomb, straining the wax to remove impurities, and pressing the residue to extract any remaining wax. The purified wax is then poured into molds to solidify. Colour and quality are preserved by melting the wax in water, avoiding direct heat; the wax may also be bleached.

Beeswax is used for candles (religious ordinances often specify its use for church ceremonial candles), for artificial fruit and flowers, and for modeling wax. It is also an ingredient in the manufacture of furniture and floor waxes, leather dressings, waxed paper, lithographic inks, cosmetics, and ointments.

Beeswax (cera alba) is a natural wax produced by honey bees of the genus Apis. The wax is formed into scales by eight wax-producing glands in the abdominal segments of worker bees, which discard it in or at the hive. The hive workers collect and use it to form cells for honey storage and larval and pupal protection within the beehive. Chemically, beeswax consists mainly of esters of fatty acids and various long-chain alcohols.

Beeswax has been used since prehistory as the first plastic, as a lubricant and waterproofing agent, in lost wax casting of metals and glass, as a polish for wood and leather, for making candles, as an ingredient in cosmetics and as an artistic medium in encaustic painting.

Beeswax is edible, having similarly negligible toxicity to plant waxes, and is approved for food use in most countries and in the European Union under the E number E901.


The wax is formed by worker bees, which secrete it from eight wax-producing mirror glands on the inner sides of the sternites (the ventral shield or plate of each segment of the body) on abdominal segments 4 to 7. The sizes of these wax glands depend on the age of the worker, and after many daily flights, these glands gradually begin to atrophy.

The new wax is initially glass-clear and colorless, becoming opaque after chewing and being contaminated with pollen by the hive worker bees, becoming progressively yellower or browner by incorporation of pollen oils and propolis. The wax scales are about three millimetres (0.12 in) across and 0.1 mm (0.0039 in) thick, and about 1,100 are needed to make a gram of wax. Worker bees use the beeswax to build honeycomb cells. For the wax-making bees to secrete wax, the ambient temperature in the hive must be 33 to 36 °C (91 to 97 °F).
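The figure of roughly 1100 scales per gram can be sanity-checked from the stated dimensions: approximating a scale as a 3 mm × 3 mm × 0.1 mm plate (an assumption for illustration; real scales are irregular) and taking beeswax's specific gravity as roughly 0.96 gives a count of the right order:

```python
# Approximate one wax scale as a thin rectangular plate (an assumption;
# real scales are irregular, so this is only an order-of-magnitude check).
width_mm, depth_mm, thickness_mm = 3.0, 3.0, 0.1
density_g_per_cm3 = 0.96  # approximate specific gravity of beeswax

volume_cm3 = (width_mm * depth_mm * thickness_mm) / 1000.0  # mm^3 -> cm^3
mass_g = volume_cm3 * density_g_per_cm3
scales_per_gram = 1.0 / mass_g
print(round(scales_per_gram))
```

The result is about 1,150 scales per gram, consistent with the quoted figure of roughly 1,100.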

The book Beeswax Production, Harvesting, Processing and Products suggests one kilogram (2.2 lb) of beeswax is sufficient to store 22 kg (49 lb) of honey.  Another study estimated that one kilogram (2.2 lb) of wax can store 24 to 30 kg (53 to 66 lb) of honey.

Sugars from honey are metabolized in wax-gland-associated fat cells into beeswax. The amount of honey used by bees to produce wax has not been accurately determined, but according to Whitcomb's 1946 experiment, 6.66 to 8.80 kg (14.7 to 19.4 lb) of honey yields one kilogram (2.2 lb) of wax.
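Whitcomb's estimate agrees with the earlier figure of 6 to 10 pounds of honey per pound of wax, since a mass ratio is the same whether both quantities are measured in kilograms or in pounds. A quick unit check of that claim:

```python
KG_PER_LB = 0.45359237  # international pound, exact by definition

# Whitcomb (1946): kg of honey consumed per kg of wax produced.
ratio_low, ratio_high = 6.66, 8.80

# Honey needed for one pound of wax: scale the ratio down to 1 lb of wax
# (in kg), then convert the honey mass back to pounds; the ratio survives.
honey_for_1lb_wax_kg = ratio_low * KG_PER_LB              # about 3.02 kg
honey_for_1lb_wax_lb = honey_for_1lb_wax_kg / KG_PER_LB   # 6.66 lb again

# Both ends of Whitcomb's range fall inside the 6-10 lb-per-lb estimate.
print(honey_for_1lb_wax_lb, 6 <= ratio_low <= 10, 6 <= ratio_high <= 10)
```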


When beekeepers extract the honey, they cut off the wax caps from each honeycomb cell with an uncapping knife or machine.

Beeswax may arise from such cappings, or from an old comb that is scrapped, or from the beekeeper removing unwanted burr comb, brace comb, and the like. Its color varies from nearly white to brownish, but most often is a shade of yellow, depending on purity, the region, and the type of flowers gathered by the bees. The wax from the brood comb of the honey bee hive tends to be darker than wax from the honeycomb because impurities accumulate more quickly in the brood comb. Because of these impurities, the wax must be rendered before further use. The leftover material is called slumgum; it is derived from old breeding debris (pupa casings, cocoons, shed larval skins, etc.), bee droppings, propolis, and general rubbish.

The wax may be clarified further by heating in water. As with petroleum waxes, it may be softened by dilution with mineral oil or vegetable oil to make it more workable at room temperature.

Bees reworking old wax

When bees, needing food, uncap honey, they drop the removed cappings and let them fall to the bottom of the hive. Bees have been known to rework such an accumulation of fallen old cappings into strange formations.

Physical characteristics

Beeswax is a fragrant solid at room temperature. Its color ranges from white through light and medium yellow to dark brown. Beeswax is a tough wax formed from a mixture of several chemical compounds. An approximate chemical formula for beeswax is C15H31COOC30H61. Its main constituents are palmitate, palmitoleate, and oleate esters of long-chain (30–32 carbons) aliphatic alcohols, with the ratio of triacontanyl palmitate CH3(CH2)29O-CO-(CH2)14CH3 to cerotic acid CH3(CH2)24COOH, the two principal constituents, being 6:1. Beeswax can be classified generally into European and Oriental types. The saponification value is lower (3–5) for European beeswax and higher (8–9) for Oriental types. Analytical characterization can be done by high-temperature gas chromatography.

Beeswax has a relatively low melting point range of 62 to 64 °C (144 to 147 °F). If beeswax is heated above 85 °C (185 °F), discoloration occurs. The flash point of beeswax is 204.4 °C (400 °F).


When natural beeswax is cold, it is brittle, and its fracture is dry and granular. At room temperature (conventionally taken as about 20 °C (68 °F)), it is tenacious and it softens further at human body temperature (37 °C (99 °F)). The specific gravity of beeswax at 15 °C (59 °F) is from 0.958 to 0.975; that of melted beeswax at 98 to 99 °C (208.4 to 210.2 °F) (compared with water at 15.5 °C (59.9 °F)) is 0.9822.


Candle-making has long involved the use of beeswax, which burns readily and cleanly, and this material was traditionally prescribed for the making of the Paschal candle or "Easter candle". Beeswax candles are purported to be superior to other wax candles, because they burn brighter and longer, do not bend, and burn cleaner. It is further recommended for the making of other candles used in the liturgy of the Roman Catholic Church. Beeswax is also the candle constituent of choice in the Eastern Orthodox Church.

Refined beeswax plays a prominent role in art materials both as a binder in encaustic paint and as a stabilizer in oil paint to add body.

Beeswax is an ingredient in surgical bone wax, which is used during surgery to control bleeding from bone surfaces; shoe polish and furniture polish can both use beeswax as a component, dissolved in turpentine or sometimes blended with linseed oil or tung oil; modeling waxes can also use beeswax as a component; pure beeswax can also be used as an organic surfboard wax. Beeswax blended with pine rosin is used for waxing, and can serve as an adhesive to attach reed plates to the structure inside a squeezebox. It can also be used to make Cutler's resin, an adhesive used to glue handles onto cutlery knives. It is used in Eastern Europe in egg decoration; it is used for writing, via resist dyeing, on batik eggs (as in pysanky) and for making beaded eggs. Beeswax is used by percussionists to make a surface on tambourines for thumb rolls. It can also be used as a metal injection moulding binder component along with other polymeric binder materials.

Beeswax was formerly used in the manufacture of phonograph cylinders. It may still be used to seal formal legal or royal decrees and academic parchments, such as the awarding stamp (imprimatur) of a university placed upon completion of postgraduate degrees.

Purified and bleached beeswax is used in the production of food, cosmetics, and pharmaceuticals. The three main types of beeswax products are yellow, white, and beeswax absolute. Yellow beeswax is the crude product obtained from the honeycomb, white beeswax is bleached or filtered yellow beeswax, and beeswax absolute is yellow beeswax treated with alcohol. In food preparation, it is used as a coating for cheese; by sealing out the air, protection is given against spoilage (mold growth). Beeswax may also be used as a food additive E901, in small quantities acting as a glazing agent, which serves to prevent water loss, or used to provide surface protection for some fruits. Soft gelatin capsules and tablet coatings may also use E901. Beeswax is also a common ingredient of natural chewing gum. The wax monoesters in beeswax are poorly hydrolysed in the guts of humans and other mammals, so they have insignificant nutritional value. Some birds, such as honeyguides, can digest beeswax. Beeswax is the main diet of wax moth larvae.

The use of beeswax in skin care and cosmetics has been increasing. A German study found beeswax to be superior to similar barrier creams (usually mineral oil-based creams such as petroleum jelly), when used according to its protocol. Beeswax is used in lip balm, lip gloss, hand creams, salves, and moisturizers; and in cosmetics such as eye shadow, blush, and eye liner. Beeswax is also an important ingredient in moustache wax and hair pomades, which make hair look sleek and shiny.

In oil spill control, beeswax is processed to create Petroleum Remediation Product (PRP). It is used to absorb oil or petroleum-based pollutants from water.



