2301) Thermal Printing
Gist
Thermal printing (or direct thermal printing) is a digital printing process which produces a printed image by passing paper with a thermochromic coating, commonly known as thermal paper, over a print head consisting of tiny electrically heated elements.
Summary
Thermal printing is a technique in which an image is printed on thermal paper by means of heat.
The heat is applied by a controlled printhead, and the thermal paper's heat-sensitive coating darkens wherever that heat is applied, producing the marking.
Types of thermal printing
There are two types of thermal printing methods: direct thermal printing and thermal transfer printing.
In the case of direct thermal printing, the heat from the printhead is applied directly to the thermal paper. This causes a chemical reaction in the paper's special heat-sensitive layer, turning it black. It is worth noting that modern thermal paper is much more resistant to environmental influences than earlier formulations, so the colour stays fresh for several years.
Thermal transfer printing, on the other hand, uses a colour foil, or thermal transfer foil, to get the print onto the paper. The printhead is equipped with hundreds of tiny heating elements that can be activated under computer control. The foil runs between the printhead and the paper and is melted by the printhead wherever the heating elements are activated. The smooth surface of the foil yields a fine, sharp print with a slight sheen. The advantage of the thermal-transfer system is the longer shelf life of its prints compared to direct-thermal prints.
Thermal printing and inkjet printing systems
Thermal printers that have to print directly on product packaging often use inkjet printing systems (thermal inkjet). The main advantage is that these systems can print on various surfaces such as paper, cardboard packaging, synthetic materials or metal. Inkjet printing is used especially for pharmaceutical, cosmetic and food packaging, as well as in the postal service.
Thermal printers equipped with inkjet printing systems use ink cartridges containing many tiny chambers that can be heated by an electrical impulse. The heat creates a small vapour bubble that pushes the ink through the nozzle. As the bubble collapses, its contraction, together with the surface tension of the ink drop, pulls the remaining ink back within a fraction of a second. Precise, high-quality prints of text, barcodes and graphics can therefore be achieved.
A related technology is the so-called piezoelectric inkjet printing process, or piezo inkjet printing. In this case, an electrical pulse deforms a piezoelectric element in the wall of the ink chamber rather than heating the ink; the deformation presses ink out of the nozzle and onto the object. When the electrical pulse stops, the wall returns to its original position, creating a slight vacuum in the chamber that pulls the ink not used for the print back into the cartridge.
Details
Thermal printing (or direct thermal printing) is a digital printing process which produces a printed image by passing paper with a thermochromic coating, commonly known as thermal paper, over a print head consisting of tiny electrically heated elements. The coating turns black in the areas where it is heated, producing an image.
Most thermal printers are monochrome (black and white) although some two-color designs exist.
Thermal-transfer printing is a different method, using plain paper with a heat-sensitive ribbon instead of heat-sensitive paper, but using similar print heads.
Design
A thermal printer typically contains at least these components:
* Thermal head: Produces heat to create an image on the paper
* Platen: A rubber roller which moves the paper
* Spring: Applies pressure to hold the paper and printhead together
Thermal paper is impregnated with a solid-state mixture of a dye and a suitable matrix, for example a fluoran leuco dye and octadecylphosphonic acid. When the matrix is heated above its melting point, the dye reacts with the acid and shifts to its colored form; the colored form is then preserved in a metastable state when the matrix re-solidifies quickly enough. This process is known as thermochromism.
This process is usually monochrome, but some two-color designs exist, which can print both black and an additional color (often red) by applying heat at two different temperatures.
In order to print, the thermal paper is inserted between the thermal head and the platen and pressed against the head. The printer sends an electric current to the heating elements of the thermal head. The heat generated activates the paper's thermochromic layer, causing it to turn a certain color (for example, black).
Thermal print heads can have a resolution of up to 1,200 dots per inch (dpi). The heating elements are usually arranged as a line of small closely spaced dots.
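To make the "line of dots" concrete, here is a minimal sketch (in Python) of how one raster row of a monochrome image could be packed into bytes, one bit per heating element. The 384-dot head width and most-significant-bit-first packing order are illustrative assumptions, not details from the text above, and a real printer would expect such rows wrapped in its own command protocol (for example, ESC/POS raster commands on many receipt printers).

# Minimal sketch: pack one row of a monochrome image into bytes,
# one bit per heating element, as many line-type thermal heads expect.
def pack_row(pixels):
    """pixels: list of 0/1 values, 1 = heat this dot (print black).
    Returns bytes with 8 dots per byte, most significant bit first."""
    out = bytearray()
    byte = 0
    for i, px in enumerate(pixels):
        byte = (byte << 1) | (1 if px else 0)
        if i % 8 == 7:                     # a full byte of 8 dots
            out.append(byte)
            byte = 0
    if len(pixels) % 8:                    # pad the last partial byte with unheated dots
        byte <<= 8 - (len(pixels) % 8)
        out.append(byte)
    return bytes(out)

# Example: a 384-dot head (an assumed width, common on small receipt printers)
# printing a row that alternates black and white dots.
row = [i % 2 for i in range(384)]
data = pack_row(row)
print(len(data), "bytes per row")          # 48 bytes = 384 dots / 8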
Early formulations of the thermo-sensitive coating used in thermal paper were sensitive to incidental heat, abrasion, friction (which can cause heat, thus darkening the paper), light (which can fade printed images), and water. Later thermal coating formulations are far more stable; in practice, thermally printed text should remain legible for at least 50 years.
Applications
Thermal printers print more quietly and usually faster than impact dot matrix printers. They are also smaller, lighter and consume less power, making them ideal for portable and retail applications.
Commercial use
Commercial applications of thermal printers include filling station pumps, information kiosks, point-of-sale systems, voucher printers in slot machines, print-on-demand labels for shipping and products, and the recording of live rhythm strips on hospital cardiac monitors.
Record-keeping in microcomputers
Many popular microcomputer systems from the late 1970s and early 1980s had first-party and aftermarket thermal printers available for them, such as the Atari 822 printer for the Atari 8-bit computers, the Apple Silentype for the Apple II, and the Alphacom 32 for the ZX Spectrum and ZX81. They often used unusually sized supplies (10 cm-wide rolls for the Alphacom 32, for instance) and were typically used for making permanent records of information in the computer (graphics, program listings, etc.) rather than for correspondence.
Fax machines
In fax machines of the early 1990s, such as Panasonic models with an integrated answering machine, the thermal paper was sold in rolls that were inserted into a compartment in the device. After a completed transmission, the printed document was automatically cut from the roll and left at the front of the machine.
Through the 1990s, many fax machines used thermal printing technology. Toward the beginning of the 21st century, however, thermal wax transfer, laser, and inkjet printing technology largely supplanted thermal printing technology in fax machines, allowing printing on plain paper.
Seafloor Exploration
Thermal printers are commonly used in seafloor exploration and engineering geology due to their portability, speed, and ability to create continuous reels or sheets. Typically, thermal printers found in offshore applications are used to print real-time records of side-scan sonar and sub-seafloor seismic imagery. In data processing, thermal printers are sometimes used to quickly create hard copies of continuous seismic or hydrographic records stored digitally in SEG-Y or XTF form.
Other uses
Flight progress strips used in air traffic control typically use thermal printing technology, as do ACARS printers in aircraft cockpits.
In many hospitals in the United Kingdom, common ultrasound sonogram devices output the results of the scan onto thermal paper. This can cause problems if the parents wish to preserve the image by laminating it, as the heat of most laminators will darken the entire page (this can be tested beforehand on an unimportant thermal print). An option is to make and laminate a permanent ink duplicate of the image.
The Game Boy Printer, released in 1998, was a small thermal printer used to print out certain elements from some Game Boy games.
Health concerns
In the 2000s, studies began reporting that the oestrogen-related chemical bisphenol A (BPA) is present in thermal (and some other) papers. While the health risks remain uncertain, various health- and science-oriented advocacy organizations, such as the Environmental Working Group, have pressed for these papers to be pulled from the market.
Additional Information
Thermal printers are dot-matrix printers that operate by driving heated pins against special heat-sensitive paper to “burn” the image onto the paper. They are quiet, but many people don't like the feel of thermal paper, and the images tend to fade.
Thermal printers are favored for their simplicity, speed, and reliability. They're used extensively in industries like retail, healthcare, logistics, and manufacturing for their cost-effectiveness and ability to generate high-resolution prints very quickly.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
2302) Scientist
Gist
A scientist is someone who systematically gathers and uses research and evidence to make hypotheses and test them, in order to gain and share understanding and knowledge. A scientist can be further defined by how they go about this, for instance by the use of statistics (statisticians) or data (data scientists).
Summary
A scientist is a person who studies or has mastered a field of science. A scientist tries to understand how our world, or other things, work. Scientists make observations, ask questions and do extensive research in search of the answers to many questions.
Scientists may work in laboratories for governments, companies, schools and research institutions. Some scientists teach at universities and other places and train people to become scientists. Scientists often do experiments to find out more about reality, and sometimes may repeat experiments or use control groups. Scientists who are doing applied science try to use scientific knowledge to improve the world.
Scientists use the scientific method to test theories and hypotheses.
Details
A scientist is a person who researches to advance knowledge in an area of the natural sciences.
In classical antiquity, there was no real ancient analog of a modern scientist. Instead, philosophers engaged in the philosophical study of nature called natural philosophy, a precursor of natural science. Though Thales (c. 624–545 BC) was arguably the first scientist for describing how cosmic events may be seen as natural, not necessarily caused by gods, it was not until the 19th century that the term scientist came into regular use after it was coined by the theologian, philosopher, and historian of science William Whewell in 1833.
History
"No one in the history of civilization has shaped our understanding of science and natural philosophy more than the great Greek philosopher and scientist Aristotle (384-322 BC), who exerted a profound and pervasive influence for more than two thousand years" —Gary B. Ferngren
* Georgius Agricola, who gave chemistry its modern name, is generally referred to as the father of mineralogy and the founder of geology as a scientific discipline.
* Johannes Kepler, one of the founders of modern astronomy, the scientific method, and modern natural science.
* Alessandro Volta, the inventor of the electrical battery and discoverer of methane, widely regarded as one of the greatest scientists in history.
* Francesco Redi, referred to as the "father of modern parasitology" and the founder of experimental biology.
* Isaac Newton, regarded as "the towering figure of the Scientific Revolution", who achieved the first great unification in physics, created classical mechanics and calculus, and refined the scientific method.
* Mary Somerville, for whom the word "scientist" was coined.
* Albert Einstein, the physicist who developed the general theory of relativity and made many other substantial contributions to physics.
* Enrico Fermi, the physicist who created the world's first nuclear reactor and was a central figure in the development of the atomic bomb.
* Niels Bohr, the atomic physicist who made fundamental contributions to understanding atomic structure and quantum theory.
* Rachel Carson, the marine biologist who launched the 20th-century environmental movement.
The roles of "scientists", and their predecessors before the emergence of modern scientific disciplines, have evolved considerably over time. Scientists of different eras (and before them, natural philosophers, mathematicians, natural historians, natural theologians, engineers, and others who contributed to the development of science) have had widely different places in society, and the social norms, ethical values, and epistemic virtues associated with scientists—and expected of them—have changed over time as well. Accordingly, many different historical figures can be identified as early scientists, depending on which characteristics of modern science are taken to be essential.
Some historians point to the Scientific Revolution that began in the 16th century as the period when science in a recognizably modern form developed. It was not until the 19th century that sufficient socioeconomic changes had occurred for scientists to emerge as a major profession.
Classical antiquity
Knowledge about nature in classical antiquity was pursued by many kinds of scholars. Greek contributions to science—including works of geometry and mathematical astronomy, early accounts of biological processes and catalogs of plants and animals, and theories of knowledge and learning—were produced by philosophers and physicians, as well as practitioners of various trades. These roles, and their associations with scientific knowledge, spread with the Roman Empire and, with the spread of Christianity, became closely linked to religious institutions in most European countries. Astrology and astronomy became an important area of knowledge, and the role of astronomer/astrologer developed with the support of political and religious patronage. By the time of the medieval university system, knowledge was divided into the trivium—philosophy, including natural philosophy—and the quadrivium—mathematics, including astronomy. Hence, the medieval analogs of scientists were often either philosophers or mathematicians. Knowledge of plants and animals was broadly the province of physicians.
Middle Ages
Science in medieval Islam generated some new modes of developing natural knowledge, although still within the bounds of existing social roles such as philosopher and mathematician. Many proto-scientists from the Islamic Golden Age are considered polymaths, in part because of the lack of anything corresponding to modern scientific disciplines. Many of these early polymaths were also religious priests and theologians: for example, Alhazen and al-Biruni were mutakallimiin; the physician Avicenna was a hafiz; the physician Ibn al-Nafis was a hafiz, muhaddith and ulema; the botanist Otto Brunfels was a theologian and historian of Protestantism; the astronomer and physician Nicolaus Copernicus was a priest. During the Italian Renaissance, figures such as Leonardo da Vinci, Michelangelo, Galileo Galilei and Gerolamo Cardano are considered among the most recognizable polymaths.
Renaissance
During the Renaissance, Italians made substantial contributions to science. Leonardo da Vinci made significant discoveries in paleontology and anatomy. The father of modern science, Galileo Galilei, made key improvements to the thermometer and telescope, which allowed him to observe and clearly describe the solar system. Descartes was not only a pioneer of analytic geometry but also formulated a theory of mechanics and advanced ideas about the origins of animal movement and perception. Vision interested the physicists Young and Helmholtz, who also studied optics, hearing and music. Newton extended Descartes's mathematics by inventing calculus (at the same time as Leibniz). He provided a comprehensive formulation of classical mechanics and investigated light and optics. Fourier founded a new branch of mathematics, infinite periodic series, studied heat flow and infrared radiation, and discovered the greenhouse effect. Girolamo Cardano, Blaise Pascal, Pierre de Fermat, Von Neumann, Turing, Khinchin, Markov and Wiener, all mathematicians, made major contributions to science and probability theory, including the ideas behind computers and some of the foundations of statistical mechanics and quantum mechanics. Many mathematically inclined scientists, including Galileo, were also musicians.
There are many compelling stories in medicine and biology, such as the development of ideas about the circulation of blood from Galen to Harvey. Some scholars and historians credit Christianity with having contributed to the rise of the Scientific Revolution.
Age of Enlightenment
During the age of Enlightenment, Luigi Galvani, the pioneer of bioelectromagnetics, discovered animal electricity. He discovered that a charge applied to the spinal cord of a frog could generate muscular spasms throughout its body. Charges could make frog legs jump even if the legs were no longer attached to a frog. While cutting a frog leg, Galvani's steel scalpel touched a brass hook that was holding the leg in place. The leg twitched. Further experiments confirmed this effect, and Galvani was convinced that he was seeing the effects of what he called animal electricity, the life force within the muscles of the frog. At the University of Pavia, Galvani's colleague Alessandro Volta was able to reproduce the results, but was sceptical of Galvani's explanation.
Lazzaro Spallanzani is one of the most influential figures in experimental physiology and the natural sciences. His investigations have exerted a lasting influence on the medical sciences. He made important contributions to the experimental study of bodily functions and animal reproduction.
Francesco Redi discovered that microorganisms can cause disease.
19th century
Until the late 19th or early 20th century, scientists were still referred to as "natural philosophers" or "men of science".
English philosopher and historian of science William Whewell coined the term scientist in 1833, and it first appeared in print in Whewell's anonymous 1834 review of Mary Somerville's On the Connexion of the Physical Sciences published in the Quarterly Review. Whewell wrote of "an increasing proclivity of separation and dismemberment" in the sciences; while highly specific terms proliferated—chemist, mathematician, naturalist—the broad term "philosopher" was no longer satisfactory to group together those who pursued science, without the caveats of "natural" or "experimental" philosopher. Whewell compared these increasing divisions with Somerville's aim of "[rendering] a most important service to science" "by showing how detached branches have, in the history of science, united by the discovery of general principles." Whewell reported in his review that members of the British Association for the Advancement of Science had been complaining at recent meetings about the lack of a good term for "students of the knowledge of the material world collectively." Alluding to himself, he noted that "some ingenious gentleman proposed that, by analogy with artist, they might form [the word] scientist, and added that there could be no scruple in making free with this term since we already have such words as economist, and atheist—but this was not generally palatable".
Whewell proposed the word again more seriously (and not anonymously) in his 1840 The Philosophy of the Inductive Sciences:
The terminations ize (rather than ise), ism, and ist, are applied to words of all origins: thus we have to pulverize, to colonize, Witticism, Heathenism, Journalist, Tobacconist. Hence we may make such words when they are wanted. As we cannot use physician for a cultivator of physics, I have called him a Physicist. We need very much a name to describe a cultivator of science in general. I should incline to call him a Scientist. Thus we might say, that as an Artist is a Musician, Painter, or Poet, a Scientist is a Mathematician, Physicist, or Naturalist.
He also proposed the term physicist at the same time, as a counterpart to the French word physicien. Neither term gained wide acceptance until decades later; scientist became a common term in the late 19th century in the United States and around the turn of the 20th century in Great Britain. By the twentieth century, the modern notion of science as a special brand of information about the world, practiced by a distinct group and pursued through a unique method, was essentially in place.
20th century
Marie Curie became the first woman to win the Nobel Prize and the first person to win it twice. Her efforts led to the development of nuclear energy and of radiotherapy for the treatment of cancer. In 1922, she was appointed a member of the International Commission on Intellectual Co-operation by the Council of the League of Nations. She campaigned for scientists' right to patent their discoveries and inventions. She also campaigned for free access to international scientific literature and for internationally recognized scientific symbols.
Profession
As a profession, the scientist of today is widely recognized. However, there is no formal process to determine who is and who is not a scientist; in some sense, anyone can be a scientist. Some professions have legal requirements for their practice (e.g. licensure), and some scientists are independent scientists who practice science on their own, but there are no known licensure requirements for practicing science.
Education
In modern times, many professional scientists are trained in an academic setting (e.g., universities and research institutes), mostly at the level of graduate schools. Upon completion, they would normally attain an academic degree, with the highest degree being a doctorate such as a Doctor of Philosophy (PhD). Although graduate education for scientists varies among institutions and countries, some common training requirements include specializing in an area of interest, publishing research findings in peer-reviewed scientific journals and presenting them at scientific conferences, giving lectures or teaching, and defending a thesis (or dissertation) during an oral examination. To aid them in this endeavor, graduate students often work under the guidance of a mentor, usually a senior scientist, a relationship that may continue after the completion of their doctorates, when they work as postdoctoral researchers.
Career
After the completion of their training, many scientists pursue careers in a variety of work settings and conditions. In 2017, the British scientific journal Nature published the results of a large-scale survey of more than 5,700 doctoral students worldwide, asking them which sectors of the economy they would like to work in. A little over half of the respondents wanted to pursue a career in academia, with smaller proportions hoping to work in industry, government, and nonprofit environments.
Other motivations are recognition by their peers and prestige. The Nobel Prize, a prestigious and widely recognized award, is awarded annually to those who have achieved scientific advances in the fields of medicine, physics, and chemistry.
Some scientists have a desire to apply scientific knowledge for the benefit of people's health, their nations, the world, nature, or industry (as academic or industrial scientists). Scientists tend to be less motivated by direct financial reward for their work than people in other careers. As a result, scientific researchers often accept lower average salaries than those in many other professions that require a similar amount of training and qualification.
Research interests
Scientists include experimentalists who mainly perform experiments to test hypotheses, and theoreticians who mainly develop models to explain existing data and predict new results. There is a continuum between the two activities and the division between them is not clear-cut, with many scientists performing both tasks.
Those considering science as a career often look to the frontiers. These include cosmology and biology, especially molecular biology and the human genome project. Other areas of active research include the exploration of matter at the scale of elementary particles as described by high-energy physics, and materials science, which seeks to discover and design new materials. Others choose to study brain function and neurotransmitters, which is considered by many to be the "final frontier". There are many important discoveries to make regarding the nature of the mind and human thought, much of which still remains unknown.
Additional Information:
Introduction
Through the ages, people have sought to better understand how and why things happen in the universe. Scientists developed an approach to keep track of what was learned and to make sure it was true. Findings were tested and recorded so that others could use those ideas to solve new problems. Over time, scientific discoveries have changed the way people live and think.
The dates given for the scientists in this section are for their most notable scientific achievement or for the time period when they were most active in their scientific studies.
Ancient Times
Plato (387 BCE)
Aristotle (300s BCE)
Euclid (300s–200s BCE)
Aristarchus of Samos (200s BCE)
Archimedes (200s BCE)
Ptolemy (100s CE)
Hypatia (late 300s)
al-Khwarizmi (800s)
1400s Through 1700s
Leonardo da Vinci (1490s–1510)
Nicolaus Copernicus (1508–14)
Andreas Vesalius (1543)
Galileo (1580s–1630s)
Johannes Kepler (1596–1611)
Blaise Pascal (1640s–50s)
Isaac Newton (1665–1704)
Benjamin Franklin (1752)
Benjamin Banneker (1790s)
Caroline Herschel (1787–1828)
1800s
Sophie Germain (1800s–20s)
Mary Anning (1820s–40s)
Maria Mitchell (1847)
Louis Pasteur (1847–85)
Charles Darwin (1858)
Mary Edwards Walker (1863–65)
Thomas Edison (1869–1920s)
Alexander Graham Bell (1876)
Nikola Tesla (1882–1910s)
Elizabeth Blackwell (late 1800s)
Susan La Flesche Picotte (1889–1913)
Lewis Latimer (1890)
Wassaja (1890s)
Pierre Curie (1890s–1906)
Marie Curie (1890s–1911)
George Washington Carver (1890s–1940s)
1900s and Beyond
Eugène Marais (early 1900s)
Guglielmo Marconi (1901)
Bertha Van Hoosen (1902–51)
Albert Einstein (1905)
Charles Henry Turner (1907–10)
Niels Bohr (1910s–40s)
Alice Ball (1915–16)
Joan Beauchamp Procter (1920s)
Margaret Chung (1920s–40s)
Frederick Grant Banting (1921–23)
Te Rangi Hīroa (1922–27)
Raymond Dart (1924)
Alexander Fleming (1928)
Margaret Mead (1928–60s)
Frédéric Joliot (1934)
Irène Joliot (1934)
Percy Julian (1935)
Max Theiler (1937)
Enrico Fermi (1942)
Ruth Benedict (1930s–40s)
Charles Richard Drew (1930s–40s)
Alan Turing (1940s)
Maria Goeppert Mayer (1940s–50s)
Edward Teller (1940s–50s)
Dorothy Crowfoot Hodgkin (1940s–60s)
J. Robert Oppenheimer (1940s–60s)
Chien-Shiung Wu (1940s–60s)
Marie Tharp (1940s–70s)
Jonas Salk (1942–55)
Robert Broom (1947)
Mary Douglas Leakey (1948 and 1978)
Eugenie Clark (1948–92)
Stephanie Kwolek (1950s–60s)
Ruth Benerito (1950s–80s)
Aaron Klug (1950s–80s)
Nancy Grace Roman (1950s–90s)
Francis Crick (1951–62)
James Watson (1951–62)
Virginia Apgar (1952)
Jane Goodall (1960–75)
Allan Cormack (1960s)
Mary Jackson (1960s)
Katherine Johnson (1960s)
Louis Leakey (1960s)
Jacques Piccard (1960s)
Dorothy Vaughan (1960s)
Richard Leakey (1960s–70s)
Gladys West (1960s–80s)
Rachel Carson (1962)
Phillip Tobias (1964 and 1995)
Dian Fossey (1966–83)
Robert Ballard (1970s–80s)
Christine Darden (1970s–80s)
Fred Hollows (1970s–80s)
Sally Ride (1970s–80s)
Richard Dawkins (1976)
Victor Chang (1980s)
Steven Chu (1980s)
Mae Jemison (1980s–90s)
Charles Bolden (1980s–90s)
Ronald McNair (1984)
Alexa Canady (1984–2000s)
Benjamin Carson (1987)
Patricia Bath (1988)
Temple Grandin (1990–)
Joycelyn Elders (1993–94)
Meave Leakey (1994)
Regina Benjamin (2009–13)
Anthony Fauci (1980s–2020s)
Scientists by Subject Studied
Some scientists, including Aristotle, Hypatia, al-Khwarizmi, Galileo, and Marie Curie, studied more than one subject of science in depth. Most others focused on a particular subject. The lists here group scientists by the subject that played a main role in their studies.
Astronomy
Aristarchus of Samos
Benjamin Banneker
Nicolaus Copernicus
Galileo
Caroline Herschel
Johannes Kepler
Maria Mitchell
Ptolemy
Nancy Grace Roman
Biology
Aristotle
Alexa Canady
Rachel Carson
Eugenie Clark
Charles Darwin
Richard Dawkins
Dian Fossey
Jane Goodall
Temple Grandin
Leonardo da Vinci
Eugène Marais
Joan Beauchamp Procter
Charles Henry Turner
Chemistry
Alice Ball
Ruth Benerito
George Washington Carver
Dorothy Crowfoot Hodgkin
Mae Jemison
Frédéric Joliot
Irène Joliot
Percy Julian
Aaron Klug
Stephanie Kwolek
Louis Pasteur
Earth Sciences
Robert Ballard
Jacques Piccard
Marie Tharp
Mathematics
al-Khwarizmi
Archimedes
Christine Darden
Euclid
Sophie Germain
Hypatia
Mary Jackson
Katherine Johnson
Blaise Pascal
Plato
Alan Turing
Dorothy Vaughan
Gladys West
Medicine
Virginia Apgar
Frederick Grant Banting
Patricia Bath
Regina Benjamin
Elizabeth Blackwell
Benjamin Carson
Victor Chang
Margaret Chung
Francis Crick
Charles Richard Drew
Joycelyn Elders
Anthony Fauci
Alexander Fleming
Fred Hollows
Susan La Flesche Picotte
Jonas Salk
Max Theiler
Bertha Van Hoosen
Andreas Vesalius
Mary Edwards Walker
Wassaja
James Watson
Paleontology and Anthropology
Mary Anning
Ruth Benedict
Robert Broom
Raymond Dart
Te Rangi Hīroa
Louis Leakey
Mary Douglas Leakey
Meave Leakey
Richard Leakey
Margaret Mead
Phillip Tobias
Physics
Alexander Graham Bell
Niels Bohr
Steven Chu
Allan Cormack
Marie Curie
Pierre Curie
Thomas Edison
Albert Einstein
Enrico Fermi
Lewis Latimer
Guglielmo Marconi
Maria Goeppert Mayer
Ronald McNair
Isaac Newton
J. Robert Oppenheimer
Sally Ride
Edward Teller
Nikola Tesla
Chien-Shiung Wu
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
2303) Kinesiology
Gist
Kinesiology means 'the study of movement'. The term is also used by complementary medicine practitioners to describe a form of therapy that uses muscle monitoring (biofeedback) to look at what may be causing 'imbalances' in the body and attempts to relieve these imbalances.
Summary
Kinesiology is the study of the mechanics and anatomy of human movement and their roles in promoting health and reducing disease. Kinesiology has direct applications to fitness and health, including developing exercise programs for people with and without disabilities, preserving the independence of older people, preventing disease due to trauma and neglect, and rehabilitating people after disease or injury. Kinesiologists also develop more accessible furniture and environments for people with limited movement and find ways to enhance individual and team efficiency. Kinesiology research encompasses the biochemistry of muscle contraction and tissue fluids, bone mineralization, responses to exercise, how physical skills are developed, work efficiency, and the anthropology of play.
Details
Kinesiology is the scientific study of human body movement. Kinesiology addresses physiological, anatomical, biomechanical, pathological, neuropsychological principles and mechanisms of movement. Applications of kinesiology to human health include biomechanics and orthopedics; strength and conditioning; sport psychology; motor control; skill acquisition and motor learning; methods of rehabilitation, such as physical and occupational therapy; and sport and exercise physiology. Studies of human and animal motion include measures from motion tracking systems, electrophysiology of muscle and brain activity, various methods for monitoring physiological function, and other behavioral and cognitive research techniques.
Basics
Kinesiology studies human movement, performance, and function by applying the fundamental sciences of Cell Biology, Molecular Biology, Chemistry, Biochemistry, Biophysics, Biomechanics, Biomathematics, Biostatistics, Anatomy, Physiology, Exercise Physiology, Pathophysiology, Neuroscience, and Nutritional Science. A bachelor's degree in kinesiology can provide strong preparation for graduate study in medicine and biomedical research, as well as for professional programs.
The term "kinesiologist" is not a licensed nor professional designation in many countries, with the notable exception of Canada. Individuals with training in this area can teach physical education, work as personal trainers and sports coaches, provide consulting services, conduct research and develop policies related to rehabilitation, human motor performance, ergonomics, and occupational health and safety. In North America, kinesiologists may study to earn a Bachelor of Science, Master of Science, or Doctorate of Philosophy degree in Kinesiology or a Bachelor of Kinesiology degree, while in Australia or New Zealand, they are often conferred an Applied Science (Human Movement) degree (or higher). Many doctoral-level faculty in North American kinesiology programs received their doctoral training in related disciplines, such as neuroscience, mechanical engineering, psychology, and physiology.
In 1965, the University of Massachusetts Amherst created the United States' first Department of Exercise Science (kinesiology) under the leadership of visionary researchers and academicians in the field of exercise science. In 1967, the University of Waterloo launched Canada's first kinesiology department.
Principles
Adaptation through exercise
Adaptation through exercise is a key principle of kinesiology that relates to improved fitness in athletes as well as health and wellness in clinical populations. Exercise is a simple and established intervention for many movement disorders and musculoskeletal conditions due to the neuroplasticity of the brain and the adaptability of the musculoskeletal system. Therapeutic exercise has been shown to improve neuromotor control and motor capabilities in both normal and pathological populations.
There are many different types of exercise interventions that can be applied in kinesiology to athletic, normal, and clinical populations. Aerobic exercise interventions help to improve cardiovascular endurance. Anaerobic strength training programs can increase muscular strength, power, and lean body mass. Decreased risk of falls and increased neuromuscular control can be attributed to balance intervention programs. Flexibility programs can increase functional range of motion and reduce the risk of injury.
As a whole, exercise programs can reduce symptoms of depression and risk of cardiovascular and metabolic diseases. Additionally, they can help to improve quality of life, sleeping habits, immune system function, and body composition.
The study of the physiological responses to physical exercise and their therapeutic applications is known as exercise physiology, which is an important area of research within kinesiology.
Neuroplasticity
Adaptive plasticity through practice occurs at three levels. At the behavioural level, performance (e.g., success rate and accuracy) improves after practice. At the cortical level, the motor representation areas of the acting muscles enlarge, and functional connectivity between the primary motor cortex (M1) and the supplementary motor area (SMA) is strengthened. At the neuronal level, the number of dendrites and the amount of neurotransmitter increase with practice.
Neuroplasticity is also a key scientific principle used in kinesiology to describe how movement and changes in the brain are related. The human brain adapts and acquires new motor skills based on this principle: when the brain is exposed to new stimuli and experiences, it learns from them and creates new neural pathways, leading to brain adaptation. These adaptations and skills include both adaptive and maladaptive brain changes.
Adaptive plasticity
Recent empirical evidence indicates the significant impact of physical activity on brain function; for example, greater amounts of physical activity are associated with enhanced cognitive function in older adults. The effects of physical activity can be distributed throughout the whole brain, such as higher gray matter density and white matter integrity after exercise training, and/or in specific brain areas, such as greater activation in the prefrontal cortex and hippocampus. Neuroplasticity is also the underlying mechanism of skill acquisition. For example, after long-term training, pianists showed greater gray matter density in the sensorimotor cortex and white matter integrity in the internal capsule compared to non-musicians.
Maladaptive plasticity
Maladaptive plasticity is defined as neuroplasticity with negative effects or detrimental consequences for behavior. Movement abnormalities may occur among individuals with and without brain injuries due to abnormal remodeling in the central nervous system. Learned non-use is an example commonly seen among patients with brain damage, such as stroke: patients learn to suppress paretic limb movement after unsuccessful attempts to use the paretic hand, which may cause decreased neuronal activation in areas adjacent to the infarcted motor cortex.
There are many types of therapy designed to overcome maladaptive plasticity in the clinic and in research, such as constraint-induced movement therapy (CIMT), body-weight-supported treadmill training (BWSTT) and virtual reality therapy. These interventions have been shown to enhance motor function in paretic limbs and to stimulate cortical reorganization in patients with brain damage.
Motor redundancy
Motor redundancy is a widely used concept in kinesiology and motor control which states that, for any task the human body can perform, there are effectively an unlimited number of ways the nervous system could achieve that task. This redundancy appears at multiple levels in the chain of motor execution:
* Kinematic redundancy means that for a desired location of the endpoint (e.g. the hand or finger), there are many configurations of the joints that would produce the same endpoint location in space.
* Muscle redundancy means that the same net joint torque could be generated by many different relative contributions of individual muscles.
* Motor unit redundancy means that the same net muscle force could be generated by many different relative contributions of motor units within that muscle.
The concept of motor redundancy is explored in numerous studies, usually with the goal of describing the relative contribution of a set of motor elements (e.g. muscles) in various human movements, and how these contributions can be predicted from a comprehensive theory. Two distinct (but not incompatible) theories have emerged for how the nervous system coordinates redundant elements: simplification and optimization. In the simplification theory, complex movements and muscle actions are constructed from simpler ones, often known as primitives or synergies, resulting in a simpler system for the brain to control. In the optimization theory, motor actions arise from the minimization of a control parameter, such as the energetic cost of movement or errors in movement performance.
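As a concrete illustration of kinematic redundancy, the Python sketch below models a planar three-link arm with unit-length links; the arm model, link lengths and target point are assumptions made for illustration, not taken from the text. The first joint angle can be chosen freely, and the remaining two joints can still be solved (standard planar two-link inverse kinematics) so that the fingertip reaches the same target, so each choice yields a different joint configuration with an identical endpoint.

import math

L = 1.0  # all three links are assumed to have unit length

def fk(q1, q2, q3):
    """Forward kinematics: relative joint angles -> (x, y) of the fingertip."""
    a1, a2, a3 = q1, q1 + q2, q1 + q2 + q3
    x = L * (math.cos(a1) + math.cos(a2) + math.cos(a3))
    y = L * (math.sin(a1) + math.sin(a2) + math.sin(a3))
    return x, y

def ik_with_free_q1(target, q1):
    """Choose the first joint angle q1 freely, then solve the last two joints
    (law of cosines) so the fingertip lands on target. Returns None if this
    q1 leaves the target out of reach of the remaining two links."""
    tx, ty = target
    ex, ey = L * math.cos(q1), L * math.sin(q1)    # position of the first elbow
    dx, dy = tx - ex, ty - ey
    d2 = dx * dx + dy * dy
    c = (d2 - 2.0 * L * L) / (2.0 * L * L)         # cosine of the last joint angle
    if not -1.0 <= c <= 1.0:
        return None
    q3 = math.acos(c)                              # one of the two elbow branches
    a2 = math.atan2(dy, dx) - math.atan2(L * math.sin(q3), L + L * math.cos(q3))
    return q1, a2 - q1, q3                         # back to relative joint angles

target = (1.5, 0.5)
for q1 in (0.0, 0.4, 0.8, 1.2):                    # four different first-joint choices
    sol = ik_with_free_q1(target, q1)
    if sol:
        print([round(q, 3) for q in sol], "->", tuple(round(v, 3) for v in fk(*sol)))
# Four different joint configurations are printed, all reaching (1.5, 0.5).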
Additional Information
Like a well-tuned instrument, your body needs precision, balance and optimal functioning to achieve greatness.
This is where the role of a kinesiologist becomes analogous to that of a skilled conductor orchestrating a symphony of movements. Just as a conductor harmonizes each instrument to create a masterpiece, a kinesiologist meticulously assesses the body’s movements, fine-tuning its mechanics to ensure peak performance.
If you’re intrigued by the captivating field of kinesiology and are wondering, “What is kinesiology?” then come along as we examine the historical roots, diverse applications and techniques of kinesiology: the science of movement.
Definition and History of Kinesiology
Kinesiology, rooted in the Greek term “kinesis,” signifying movement, and the suffix “-ology,” denoting a science or branch of knowledge, is fundamentally the study of human movement. Its historical roots trace back to the ancient philosopher Aristotle, often called the “Father of Kinesiology.” His work, “On the Motion of Animals” or “De Motu Animalium,” marked a pivotal moment by providing a geometric analysis of muscle actions, laying the foundation for studying movement.
Later on, in the 16th century, anatomist Andreas Vesalius laid the foundation for modern kinesiology by producing detailed drawings and descriptions of the human musculoskeletal system. Then, in the early 20th century, orthopedic surgeon R.W. Lovett laid the groundwork for muscle strength testing, developing a system that later found advancement in Henry and Florence Kendall's 1949 book, "Muscles: Testing and Function."
The evolution of kinesiology as we recognize it today took another significant leap in 1964 when chiropractor George Goodheart introduced Applied Kinesiology. This approach involved studying muscle response, aligning with the term “kinesiology,” which denotes the study of movement. Goodheart’s innovative methods, effective in addressing complex health issues, were adopted across various healthcare fields.
Today, kinesiology is a thriving and multidisciplinary field, encompassing expertise from improving athletic performance and preventing injuries to developing assistive technologies for individuals with disabilities. Its trajectory highlights a continuous commitment to exploration and growth, leveraging centuries of research and innovation.
Principles of Kinesiology
Kinesiology refers to the study of movement. In American higher education, the term is used to describe a multifaceted field of study in which movement or physical activity is the intellectual focus. Physical activity includes exercising to improve health and fitness, learning movement skills, and engaging in activities of daily living, work, sport, dance, and play. Kinesiology is all-encompassing of the general population and relevant to everyone, not just sports enthusiasts or athletes. Special groups such as children and older adults as well as people with disabilities, injuries, or diseases can benefit from learning the principles of kinesiology and applying them to their daily activities and lives.
Learning and Practicing Kinesiology
Kinesiology is a common name for college and university academic departments that include many specialized areas of study in which the causes, processes, consequences, and contexts of physical activity are examined from different perspectives. The specialized areas of study apply knowledge, methods of inquiry, and principles from areas of study in the arts, humanities, sciences, and professional disciplines. These specialized areas include (but are not limited to) biomechanics, psychology of physical activity, exercise physiology, history of physical activity, measurement of physical activity, motor development, motor learning and control, philosophy of physical activity, physical activity and public health, physical education pedagogy, sport management, sports medicine, and the sociology of physical activity. An interdisciplinary approach involving several of these areas is often used in addressing problems of importance to society.
Educational Path
A Kinesiology degree is an academic program that studies human movement, performance, and function. It integrates knowledge from various disciplines, including anatomy, physiology, biomechanics, psychology, neuroscience, and nutrition, to comprehensively understand how the human body moves and operates. This degree is designed to explore the principles of physical activity and its impact on health, wellness, and disease prevention.
Kinesiology programs often offer a variety of specializations, such as exercise science, physical education, rehabilitation, public health education, and sport management. These specializations allow students to tailor their education towards specific career goals in health promotion, fitness and wellness consulting, sports coaching, rehabilitation services (PT/OT/AT/ST/PA), and beyond.
The curriculum typically includes both theoretical coursework and practical experiences. Students engage in laboratory work, internships, and research projects that apply classroom knowledge to real-world settings. This hands-on approach not only enhances learning but also prepares graduates for professional practice in diverse settings, including hospitals, wellness centers, schools, sports organizations, and research institutions.
Graduates with a Kinesiology degree possess a deep understanding of the biomechanics of movement, the physiological responses to exercise, and strategies for promoting physical and mental well-being. They are equipped to pursue careers in health and fitness industries, rehabilitative sciences, sports coaching and performance analysis, health policy, and education, among others.
A Kinesiology degree offers a holistic and interdisciplinary approach to studying the body and its movements, aiming to improve human health, performance, and quality of life through physical activity.
Examples of courses in Kinesiology include:
* Introduction to Kinesiology: Explores the study of human movement, integrating principles from anatomy, physiology, biomechanics, and psychology to understand and enhance physical activity and health.
* Anatomy and Physiology: Detailed study of the human body's structure and function.
* Biomechanics: Examines the mechanical principles that govern human movement.
* Exercise Physiology: Looks at how the body responds and adapts to physical activity.
* Motor Learning and Control: Focuses on how we learn and refine movements.
* Sports Nutrition: Studies the role of diet in athletic performance and health.
* Strength and Conditioning: Teaches training principles to improve performance and prevent injuries.
* Health and Wellness Promotion: Focuses on strategies to encourage healthy lifestyles.
* Teaching Methods in Primary/Secondary Physical & Health Education: Curriculum and instructional models, national curriculum standards, teaching methods in PE.
* Motor Development: Knowledge and practice about physical growth, biological maturation, and motor development and their interrelationship in human performers. Particular emphasis is placed on assessing and developing basic movement skills through programming strategies for individuals and large groups.
* Sports Psychology: Explores psychological factors that affect performance and physical activity.
* Esports: Introduction to esports, esports coaching, esports health and wellness, and moral principles in esports.
* Research Methods in Kinesiology: Provides an understanding of research design and analysis in studying human movement.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
2304) Hotelier
Gist
A hotelier is a person who runs or owns a hotel. If you stay at a hotel, you may never see the hotelier, who is responsible for hiring and managing staff and keeping things running smoothly.
They are responsible for ensuring the smooth running of the hotel, from guest services to facility maintenance, and are involved in a range of tasks such as hiring staff, managing finances, and promoting the hotel to potential guests.
Summary
A hotelier is a person who runs or owns a hotel. If you stay at a hotel, you may never see the hotelier, who is responsible for hiring and managing staff and keeping things running smoothly.
It's probably more common to use the term "hotel manager," but hotelier is a fancy way to refer to the person in charge of a hotel's operation. If you've got a complaint about your room, you might angrily demand to speak to the hotelier immediately. The word hotelier comes from the French hôtelier, "hotelkeeper or hotel proprietor," and its Old French root hostel, "a lodging."
Details
A hotel manager, hotelier, or lodging manager is a person who manages the operation of a hotel, motel, resort, or other lodging-related establishment. Management of a hotel operation includes, but is not limited to management of hotel staff, business management, upkeep and sanitary standards of hotel facilities, guest satisfaction and customer service, marketing management, sales management, revenue management, financial accounting, purchasing, and other functions. The title "hotel manager" or "hotelier" often refers to the hotel's general manager who serves as a hotel's head executive, though their duties and responsibilities vary depending on the hotel's size, purpose, and expectations from ownership. The hotel's general manager is often supported by subordinate department managers that are responsible for individual departments and key functions of the hotel operations.
Hotel management structure
The size and complexity of a hotel management organizational structure varies significantly depending on the size, features, and function of the hotel or resort. A small hotel operation normally may consist of a small core management team consisting of a hotel manager and a few key department supervisors who directly handle day-to-day operations. On the other hand, a large full-service hotel or resort complex often operates more similarly to a large corporation with an executive board headed by the general manager and consisting of key directors serving as heads of individual hotel departments. Each department at the large hotel or resort complex may normally consist of subordinate line-level managers and supervisors who handle day-to-day operations.
Administrative functions for a small-scale hotel such as accounting, payroll, and human resources may normally be handled by a centralized corporate office or solely by the hotel manager. Additional auxiliary functions such as security may be handled by third-party vendor services contracted by the hotel on an as-needed basis. Hotel management is necessary to implement standard operating procedures and actions as well as handling day-to-day operations.
Typical qualifications
The background and training required vary by the type of management position, size of operation, and duties involved. Industry experience has proven to be a basic qualification for nearly any management occupation within the lodging industry. A BS or MS degree in Hospitality Management, or an equivalent business degree, is often strongly preferred by most employers in the industry but not always required.
A higher level graduate degree may be desired for a general manager type position, but is often not required with sufficient management experience and industry tenure. A graduate degree may however be required for a higher level corporate executive position or above such as a Regional Vice President who oversees multiple hotel properties and general managers.
Working conditions
Hotel managers are generally exposed to long shifts that include late hours, weekends, and holidays due to the 24-hour operation of a hotel. The common workplace environment in hotels is fast-paced, with high levels of interaction with guests, employees, investors, and other managers.
Upper management consisting of senior managers, department heads, and general managers may sometimes enjoy a more desirable work schedule consisting of a more traditional business day with occasional weekends and holidays off.
Depending on the size of the hotel, a typical hotel manager's day may include assisting with operational duties, managing employee performance, handling dissatisfied guests, managing work schedules, purchasing supplies, interviewing potential job candidates, conducting physical walks and inspections of the hotel facilities and public areas, and additional duties. These duties may vary each day depending on the needs of the property. The manager's responsibility also includes knowing about all current local events as well as the events being held on the hotel property. Managers are often required to attend regular department meetings, management meetings, training seminars for professional development, and additional functions. A hotel/casino property may require additional duties regarding special events being held on property for casino complimentary guests.
2020 coronavirus pandemic
Working conditions became increasingly difficult during the 2020 coronavirus pandemic. One CEO of a major hotel owner, Monty Bennett of Ashford Inc., told CBS News that he had to lay off or furlough 95% of his 7,000 U.S. workers. To save money, hotel management was compelled to reduce all discretionary operational and capital costs and to review or postpone maintenance and other capital investments whenever possible. By the second week of the major outbreak of the virus in the U.S., the industry asked Congress for $250 billion in bailouts for owners and employees because of financial setbacks and mass layoffs.
Salary expectations
The median annual wage in 2015 of the 48,400 lodging managers in the United States was $49,720.
Additional Information
Hotel Management involves the implementation of access control measures within a hotel building to regulate the entry of various individuals such as owners, staff, guests, visitors, and service providers, while ensuring security and convenience for legitimate occupants.
Hotel Building Access Control
There are many different people who may, at any one time, wish to enter a hotel building. They include hotel owners and management staff, hotel contractors (such as elevator technicians and engineering, maintenance, security, janitorial, and parking personnel), guests, visitors, salespersons, tradespeople (including construction workers, electricians, plumbers, carpenters, gardeners, telecommunications repair persons, persons replenishing vending machines, and others who service equipment within the hotel), building inspectors, couriers, delivery persons, solicitors, sightseers, people who are lost, vagrants or homeless people, mentally disturbed individuals, vandals, suicidal persons, protestors, and daredevils. There may also be others who try to enter hotel parking areas, retail shops, restaurants, health clubs, business centers, recreational facilities, gyms, exercise rooms, function rooms, meeting rooms, or an individual guest room, with the sole purpose of committing a crime.
It is primarily the hotel owner and operator who determine the access control measures for this wide spectrum of persons. These measures aim to screen out unwanted persons or intruders and at the same time provide a minimum of inconvenience to hotel guests and legitimate visitors. Varying degrees of access control can be achieved using security staff—in some hotels known as a security officer, a security guard, a doorman, a concierge, or by another title that differs according to the respective duties and responsibilities—and various security measures.
Building access controls include vehicle access to parking lots, garages, and loading dock/shipping and receiving areas; pedestrian access to building lobbies, elevator lobbies, and passenger and freight/service elevators; and access routes to retail spaces, restaurants, promenades, mezzanines, atria, and maintenance areas.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
2305) Geologist
Gist
A geologist is a scientist who studies the structure, composition, and history of Earth. Geologists incorporate techniques from physics, chemistry, biology, mathematics, and geography to perform research in the field and the laboratory. Geologists work in the energy and mining sectors to exploit natural resources.
The word geology means 'study of the Earth'. Also known as geoscience or earth science, geology is the primary Earth science and looks at how the Earth formed, its structure and composition, and the types of processes acting on it.
Summary
Geologists are employed in a diverse range of jobs in many different industries. Some work in the field, some in offices and others have a mixture of both. In a nutshell, Geologists work to better understand the Earth, but what do they actually do?
Below are some examples of the tasks Geologists carry out in their respective industries.
Mapping & Fieldwork
This is a field-based task many geologists undertake. Different types of field mapping will look for different aspects of the rocks of a particular area.
* Field mapping looks at the particular rock types and geological structures of an area and how they all relate to one another – the aim is to produce a ‘geological map’. It is undertaken by geology students and geoscientists who work for universities, mining and exploration companies or some oil and gas companies.
* Sampling trips are common for researchers and geological exploration companies.
Logging
Again, this is often a field-based activity undertaken with geological drilling. Geologists describe rock extracted by drills to understand the geology below the surface. Logging of sedimentary or volcanic rocks above ground is also used to study past environmental changes or accurately record sampling locations.
Some types of logging include:
* Rock core logging (or rock chip logging) for mining and exploration companies
* Mud logging is undertaken for oil and gas exploration
* Geotechnical logging – this assesses how strong or weak rocks are below the ground using rock core.
Laboratory Work
Many geologists undertake laboratory work in their careers. A lot of what we know about the geology of the world and other planets has been discovered in laboratories. Researchers and those who work for some geology-related companies work in laboratories. There are also some geoscientists employed specifically in commercial laboratories that a huge number of geology-related companies (e.g. mining, oil & gas, engineering and environmental companies) use to acquire data.
Laboratory work can include:
* Geochemical analyses – using chemical methods to reveal details about samples (such as their metal content or the quality of oil).
* Geomechanical tests – testing the strength of rocks.
Computer-based work
All geologists do a lot of their work on computers, often using specialist software; most of this is office based, but field-based computer work is becoming more common. It can include the following (a brief illustrative sketch follows the list):
* Geographical Information Systems (GIS) – essentially, this is field mapping on computers – producing a digital database of the field data acquired by geologists.
* Database management – Geologists spend a lot of time ensuring databases are up to date. This can be vital for the modelling processes described below.
* Modelling programs – this has become increasingly important for geologists, both those who do research and in commercial companies. This means many geologists are trained in specialist software or programming.
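As a rough, hypothetical sketch (the field names and example values are invented, not taken from any real GIS schema), a digital field-mapping record of the kind such databases hold might look like this:

# Hypothetical sketch of a digital field-mapping record; field names and
# example values are invented for illustration, not from any real GIS schema.
from dataclasses import dataclass

@dataclass
class FieldObservation:
    station_id: str
    latitude: float   # decimal degrees
    longitude: float  # decimal degrees
    rock_type: str    # lithology observed at the outcrop
    strike_deg: float # orientation of bedding or foliation, 0-360
    dip_deg: float    # inclination of bedding or foliation, 0-90

# A tiny "database" of observations, as might be exported to GIS or modelling software.
observations = [
    FieldObservation("ST-001", -33.865, 151.209, "sandstone", 120.0, 35.0),
    FieldObservation("ST-002", -33.870, 151.215, "basalt", 95.0, 10.0),
]

for obs in observations:
    print(f"{obs.station_id}: {obs.rock_type}, strike {obs.strike_deg}, dip {obs.dip_deg}")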
Details
A geologist is a scientist who studies the structure, composition, and history of Earth. Geologists incorporate techniques from physics, chemistry, biology, mathematics, and geography to perform research in the field and the laboratory. Geologists work in the energy and mining sectors to exploit natural resources. They monitor environmental hazards such as earthquakes, volcanoes, tsunamis and landslides. Geologists are also important contributors to climate change discussions.
History
James Hutton is often viewed as the first modern geologist. In 1785 he presented a paper entitled Theory of the Earth to the Royal Society of Edinburgh. In his paper, he explained his theory that the Earth must be much older than had previously been supposed to allow enough time for mountains to be eroded and for sediments to form new rocks at the bottom of the sea, which in turn were raised up to become dry land. Hutton published a two-volume version of his ideas in 1795 (Vol. 1, Vol. 2). Followers of Hutton were known as Plutonists because they believed that some rocks were formed by vulcanism, which is the deposition of lava from volcanoes, as opposed to the Neptunists, led by Abraham Werner, who believed that all rocks had settled out of a large ocean whose level gradually dropped over time.
The first geological map of the United States was produced in 1809 by William Maclure. In 1807, Maclure commenced the self-imposed task of making a geological survey of the United States. Almost every state in the Union was traversed and mapped by him; the Allegheny Mountains being crossed and recrossed some 50 times. The results of his unaided labors were submitted to the American Philosophical Society in a memoir entitled Observations on the Geology of the United States explanatory of a Geological Map, and published in the Society's Transactions, together with the nation's first geological map. This antedates William Smith's geological map of England by six years, although it was constructed using a different classification of rocks.
Sir Charles Lyell first published his famous book, Principles of Geology, in 1830. This book, which influenced the thought of Charles Darwin, successfully promoted the doctrine of uniformitarianism. This theory states that slow geological processes have occurred throughout the Earth's history and are still occurring today. In contrast, catastrophism is the theory that Earth's features formed in single, catastrophic events and remained unchanged thereafter. Though Hutton believed in uniformitarianism, the idea was not widely accepted at the time.
Education
For an aspiring geologist, training typically includes significant coursework in physics, mathematics, and chemistry, in addition to classes offered through the geology department; historical and physical geology, igneous and metamorphic petrology and petrography, hydrogeology, sedimentology, stratigraphy, mineralogy, palaeontology, physical geography and structural geology are among the many required areas of study. Most geologists also need skills in GIS and other mapping techniques. Geology students often spend portions of the year, especially the summer though sometimes during a January term, living and working under field conditions with faculty members (often referred to as "field camp"). Many non-geologists often take geology courses or have expertise in geology that they find valuable to their fields; this is common in the fields of geography, engineering, chemistry, urban planning, environmental studies, among others.
Specialization
Geologists can generally be identified as specialists in one or more of the various geoscience disciplines, for example as a geophysicist or geochemist. Geologists may concentrate their studies or research in one or more of the following disciplines:
* Economic geology: the study of ore genesis, the mechanisms of ore creation, and geostatistics.
* Engineering geology: application of the geologic sciences to engineering practice for the purpose of assuring that the geologic factors affecting the location, design, construction, operation and maintenance of engineering works are recognized and adequately provided for;
* Geophysics: the applied branch dealing with the use of physical methods, such as gravity, seismic, electrical, and magnetic measurements, to study the Earth.
* Geochemistry: the applied branch dealing with the chemical makeup and behaviour of rocks and of their constituent minerals.
* Geochronology: the study of isotope geology specifically toward determining the date within the past of rock formation, metamorphism, mineralization and geological events (notably, meteorite impacts).
* Geomorphology: the study of landforms and the processes that create them.
* Hydrogeology: the study of the origin, occurrence and movement of groundwater in a subsurface geological system.
* Igneous petrology: the study of igneous processes such as igneous differentiation, fractional crystallization, intrusive and volcanological phenomena.
* Isotope geology: the study of the isotopic composition of rocks to determine the processes of rock and planetary formation.
* Metamorphic petrology: the study of the effects of metamorphism on minerals and rocks.
* Marine geology: the study of the seafloor; involves geophysical, geochemical, sedimentological and paleontological investigations of the ocean floor and coastal margins. Marine geology has strong ties to physical oceanography and plate tectonics.
* Mineralogy: the study of the chemistry, crystal structure, and physical (including optical) properties of minerals and mineralized artifacts. Specific studies within mineralogy include the processes of mineral origin and formation, classification of minerals, their geographical distribution, as well as their utilization.
* Palaeoclimatology: the application of geological science to determine the climatic conditions present in the Earth's atmosphere within the Earth's history.
* Palaeontology: the classification and taxonomy of fossils within the geological record and the construction of a palaeontological history of the Earth.
* Pedology: the study of soil, soil formation, and regolith formation.
* Petroleum geology: the study of sedimentary basins applied to the search for hydrocarbons (oil exploration).
* Planetary geology: the study of the geosciences as they relate to other celestial bodies, namely planets and their moons. This includes the subdisciplines of lunar geology (selenology) and Martian geology (areology).
* Sedimentology: the study of sedimentary rocks, strata, formations, eustasy and the processes of modern-day sedimentary and erosive systems.
* Seismology: the study of earthquakes.
* Structural geology: the study of folds, faults, foliation and rock microstructure to determine the deformational history of rocks and regions.
* Volcanology: the study of volcanoes, their eruptions, lavas, magma processes and hazards.
Employment
Professional geologists may work in the mining industry or in the associated area of mineral exploration. They may also work in the oil and gas industry.
Some geologists also work for a wide range of government agencies, private firms, and non-profit and academic institutions. They are usually hired on a contract basis or hold permanent positions within private firms or official agencies (such as the Geological Survey and Mineral Exploration of Iran).
Local, state, and national governments hire geologists to work on geological projects that are of interest to the public community. The investigation of a country's natural resources is often a key role when working for government institutions; the work of the geologist in this field can be made publicly available to help the community make more informed decisions related to the exploitation of resources, management of the environment and the safety of critical infrastructure - all of which is expected to bring greater wellbeing to the country. This 'wellbeing' is often in the form of greater tax revenues from new or extended mining projects or through better infrastructure and/or natural disaster planning.
An engineering geologist is employed to investigate geologic hazards and geologic constraints for the planning, design and construction of public and private engineering projects, forensic and post-mortem studies, and environmental impact analysis. Exploration geologists use all aspects of geology and geophysics to locate and study natural resources. In many countries or U.S. states without specialized environmental remediation licensure programs, the environmental remediation field is often dominated by professional geologists, particularly hydrogeologists, with professional concentrations in this aspect of the field. Petroleum and mining companies use mudloggers, and large-scale land developers use the skills of geologists and engineering geologists to help them locate oil and minerals, adapt to local features such as karst topography or earthquake risk, and comply with environmental regulations.
Geologists in academia usually hold an advanced degree in a specialized area within their geological discipline and are employed by universities.
Additional Information
Geology is the fields of study concerned with the solid Earth. Included are sciences such as mineralogy, geodesy, and stratigraphy.
An introduction to the geochemical and geophysical sciences logically begins with mineralogy, because Earth’s rocks are composed of minerals—inorganic elements or compounds that have a fixed chemical composition and that are made up of regularly aligned rows of atoms. Today one of the principal concerns of mineralogy is the chemical analysis of the some 3,000 known minerals that are the chief constituents of the three different rock types: sedimentary (formed by diagenesis of sediments deposited by surface processes); igneous (crystallized from magmas either at depth or at the surface as lavas); and metamorphic (formed by a recrystallization process at temperatures and pressures in the Earth’s crust high enough to destabilize the parent sedimentary or igneous material). Geochemistry is the study of the composition of these different types of rocks.
During mountain building, rocks became highly deformed, and the primary objective of structural geology is to elucidate the mechanism of formation of the many types of structures (e.g., folds and faults) that arise from such deformation. The allied field of geophysics has several subdisciplines, which make use of different instrumental techniques. Seismology, for example, involves the exploration of the Earth’s deep structure through the detailed analysis of recordings of elastic waves generated by earthquakes and man-made explosions. Earthquake seismology has largely been responsible for defining the location of major plate boundaries and of the dip of subduction zones down to depths of about 700 kilometres at those boundaries. In other subdisciplines of geophysics, gravimetric techniques are used to determine the shape and size of underground structures; electrical methods help to locate a variety of mineral deposits that tend to be good conductors of electricity; and paleomagnetism has played the principal role in tracking the drift of continents.
Geomorphology is concerned with the surface processes that create the landscapes of the world—namely, weathering and erosion. Weathering is the alteration and breakdown of rocks at the Earth’s surface caused by local atmospheric conditions, while erosion is the process by which the weathering products are removed by water, ice, and wind. The combination of weathering and erosion leads to the wearing down or denudation of mountains and continents, with the erosion products being deposited in rivers, internal drainage basins, and the oceans. Erosion is thus the complement of deposition. The unconsolidated accumulated sediments are transformed by the process of diagenesis and lithification into sedimentary rocks, thereby completing a full cycle of the transfer of matter from an old continent to a young ocean and ultimately to the formation of new sedimentary rocks. Knowledge of the processes of interaction of the atmosphere and the hydrosphere with the surface rocks and soils of the Earth’s crust is important for an understanding not only of the development of landscapes but also (and perhaps more importantly) of the ways in which sediments are created. This in turn helps in interpreting the mode of formation and the depositional environment of sedimentary rocks. Thus the discipline of geomorphology is fundamental to the uniformitarian approach to the Earth sciences according to which the present is the key to the past.
Geologic history provides a conceptual framework and overview of the evolution of the Earth. An early development of the subject was stratigraphy, the study of order and sequence in bedded sedimentary rocks. Stratigraphers still use the two main principles established by the late 18th-century English engineer and surveyor William Smith, regarded as the father of stratigraphy: (1) that younger beds rest upon older ones and (2) different sedimentary beds contain different and distinctive fossils, enabling beds with similar fossils to be correlated over large distances. Today biostratigraphy uses fossils to characterize successive intervals of geologic time, but as relatively precise time markers only to the beginning of the Cambrian Period, about 540,000,000 years ago. The geologic time scale, back to the oldest rocks, some 4,280,000,000 years ago, can be quantified by isotopic dating techniques. This is the science of geochronology, which in recent years has revolutionized scientific perception of Earth history and which relies heavily on the measured parent-to-daughter ratio of radiogenic isotopes.
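To make the parent-to-daughter idea concrete, here is a minimal sketch of the standard radioactive-decay age equation, t = (1/lambda) ln(1 + D/P); the equation and the uranium-238 decay constant are standard values rather than anything stated in the text, and the measured ratio used in the example is invented:

# Minimal sketch of isotopic (radiometric) dating from a parent-to-daughter ratio.
# The age equation t = (1/lam) * ln(1 + D/P) and the 238U decay constant are
# standard; the measured ratio in the example is invented.
import math

LAMBDA_U238 = 1.55125e-10  # decay constant of uranium-238, per year

def radiometric_age(daughter_to_parent: float, lam: float = LAMBDA_U238) -> float:
    # Age in years implied by a measured radiogenic daughter/parent ratio.
    return math.log(1.0 + daughter_to_parent) / lam

# Example: an invented 206Pb/238U ratio of 0.5 corresponds to roughly 2.6 billion years.
print(f"{radiometric_age(0.5) / 1e9:.2f} billion years")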
Paleontology is the study of fossils and is concerned not only with their description and classification but also with an analysis of the evolution of the organisms involved. Simple fossil forms can be found in early Precambrian rocks as old as 3,500,000,000 years, and it is widely considered that life on Earth must have begun before the appearance of the oldest rocks. Paleontological research of the fossil record since the Cambrian Period has contributed much to the theory of evolution of life on Earth.
Several disciplines of the geologic sciences have practical benefits for society. The geologist is responsible for the discovery of minerals (such as lead, chromium, nickel, and tin), oil, gas, and coal, which are the main economic resources of the Earth; for the application of knowledge of subsurface structures and geologic conditions to the building industry; and for the prevention of natural hazards or at least providing early warning of their occurrence.
Astrogeology is important in that it contributes to understanding the development of the Earth within the solar system. The U.S. Apollo program of manned missions to the Moon, for example, provided scientists with firsthand information on lunar geology, including observations on such features as meteorite craters that are relatively rare on Earth. Unmanned space probes have yielded significant data on the surface features of many of the planets and their satellites. Since the 1970s even such distant planetary systems as those of Jupiter, Saturn, and Uranus have been explored by probes.
Study of the composition of the Earth:
Mineralogy
As a discipline, mineralogy has had close historical ties with geology. Minerals as basic constituents of rocks and ore deposits are obviously an integral aspect of geology. The problems and techniques of mineralogy, however, are distinct in many respects from those of the rest of geology, with the result that mineralogy has grown to be a large, complex discipline in itself.
About 3,000 distinct mineral species are recognized, but relatively few are important in the kinds of rocks that are abundant in the outer part of the Earth. Thus a few minerals such as the feldspars, quartz, and mica are the essential ingredients in granite and its near relatives. Limestones, which are widely distributed on all continents, consist largely of only two minerals, calcite and dolomite. Many rocks have a more complex mineralogy, and in some the mineral particles are so minute that they can be identified only through specialized techniques.
It is possible to identify an individual mineral in a specimen by examining and testing its physical properties. Determining the hardness of a mineral is the most practical way of identifying it. This can be done by using the Mohs scale of hardness, which lists 10 common minerals in their relative order of hardness: talc (softest with the scale number 1), gypsum (2), calcite (3), fluorite (4), apatite (5), orthoclase (6), quartz (7), topaz (8), corundum (9), and diamond (10). Harder minerals scratch softer ones, so that an unknown mineral can be readily positioned between minerals on the scale. Certain common objects that have been assigned hardness values roughly corresponding to those of the Mohs scale (e.g., fingernail [2.5], pocketknife blade [5.5], steel file [6.5]) are usually used in conjunction with the minerals on the scale for additional reference.
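The bracketing procedure described above is easy to express as a small sketch; the scratch-test observations fed in below are invented example data, not measurements from the text:

# Sketch of bracketing an unknown mineral's hardness on the Mohs scale.
# The scratch-test observations passed in are invented example data.
MOHS = [("talc", 1), ("gypsum", 2), ("calcite", 3), ("fluorite", 4), ("apatite", 5),
        ("orthoclase", 6), ("quartz", 7), ("topaz", 8), ("corundum", 9), ("diamond", 10)]

def bracket_hardness(scratches):
    # scratches maps a reference mineral to True if the unknown scratches it,
    # or False if the reference scratches the unknown.
    lower, upper = 0, 10
    for name, hardness in MOHS:
        if scratches.get(name) is True:    # unknown is harder than this reference
            lower = max(lower, hardness)
        elif scratches.get(name) is False: # unknown is softer than this reference
            upper = min(upper, hardness)
    return lower, upper

# Example: scratches apatite (5) but is scratched by orthoclase (6) -> hardness between 5 and 6.
print(bracket_hardness({"apatite": True, "orthoclase": False}))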
Other physical properties of minerals that aid in identification are crystal form, cleavage type, fracture, streak, lustre, colour, specific gravity, and density. In addition, the refractive index of a mineral can be determined with precisely calibrated immersion oils. Some minerals have distinctive properties that help to identify them. For example, carbonate minerals effervesce with dilute acids; halite is soluble in water and has a salty taste; fluorite (and about 100 other minerals) fluoresces in ultraviolet light; and uranium-bearing minerals are radioactive.
The science of crystallography is concerned with the geometric properties and internal structure of crystals. Because minerals are generally crystalline, crystallography is an essential aspect of mineralogy. Investigators in the field may use a reflecting goniometer that measures angles between crystal faces to help determine the crystal system to which a mineral belongs. Another instrument that they frequently employ is the X-ray diffractometer, which makes use of the fact that X-rays, when passing through a mineral specimen, are diffracted at regular angles. The paths of the diffracted rays are recorded on photographic film, and the positions and intensities of the resulting diffraction lines on the film provide a particular pattern. Every mineral has its own unique diffraction pattern, so crystallographers are able to determine not only the crystal structure of a mineral but the type of mineral as well.
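The "regular angles" mentioned above follow Bragg's law, n lambda = 2 d sin(theta), the standard relation used to turn a measured diffraction angle into an interplanar spacing. In the sketch below the Cu K-alpha wavelength is the commonly used value, and the example angle is chosen for illustration (it is close to the main quartz reflection):

# Minimal sketch of Bragg's law, n*lambda = 2*d*sin(theta), relating a measured
# X-ray diffraction angle to the spacing d of atomic planes in a crystal.
# The Cu K-alpha wavelength is the commonly used value; the example angle is illustrative.
import math

CU_K_ALPHA = 1.5406  # X-ray wavelength in angstroms

def d_spacing(two_theta_deg: float, wavelength: float = CU_K_ALPHA, n: int = 1) -> float:
    # Interplanar spacing d (in angstroms) from a measured 2-theta angle.
    theta = math.radians(two_theta_deg / 2.0)
    return n * wavelength / (2.0 * math.sin(theta))

# A reflection at 2-theta = 26.6 degrees gives d of about 3.35 angstroms.
print(f"{d_spacing(26.6):.2f} angstroms")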
When a complex substance such as a magma crystallizes to form igneous rock, the grains of different constituent minerals grow together and mutually interfere, with the result that they do not retain their externally recognizable crystal form. To study the minerals in such a rock, the mineralogist uses a petrographic microscope constructed for viewing thin sections of the rock, which are ground uniformly to a thickness of about 0.03 millimetre, in light polarized by two polarizing prisms in the microscope. If the rock is crystalline, its essential minerals can be determined by their peculiar optical properties as revealed in transmitted light under magnification, provided that the individual crystal grains can be distinguished. Opaque minerals, such as those with a high content of metallic elements, require a technique employing reflected light from polished surfaces. This kind of microscopic analysis has particular application to metallic ore minerals. The polarizing microscope, however, has a lower limit to the size of grains that can be distinguished with the eye; even the best microscopes cannot resolve grains less than about 0.5 micrometre (0.0005 millimetre) in diameter. For higher magnifications the mineralogist uses an electron microscope, which produces images with diameters enlarged tens of thousands of times.
The methods described above are based on a study of the physical properties of minerals. Another important area of mineralogy is concerned with the chemical composition of minerals. The primary instrument used is the electron microprobe. Here a beam of electrons is focused on a thin section of rock that has been highly polished and coated with carbon. The electron beam can be narrowed to a diameter of about one micrometre and thus can be focused on a single grain of a mineral, which can be observed with an ordinary optical microscope system. The electrons cause the atoms in the mineral under examination to emit diagnostic X-rays, the intensity and concentration of which are measured by a computer. Besides spot analysis, this method allows a mineral to be traversed for possible chemical zoning. Moreover, the concentration and relative distribution of elements such as magnesium and iron across the boundary of two coexisting minerals like garnet and pyroxene can be used with thermodynamic data to calculate the temperature and pressure at which minerals of this type crystallize.
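As a hedged sketch of the garnet-pyroxene example (the cation proportions below are invented, and converting the resulting distribution coefficient into an actual temperature requires experimentally calibrated constants that are not reproduced here):

# Sketch of the Fe-Mg exchange distribution coefficient between coexisting
# garnet and clinopyroxene, the quantity on which such geothermometers are built.
# The cation proportions are invented example numbers; obtaining a temperature
# from K_D needs published calibration constants not given here.

def exchange_kd(fe_grt: float, mg_grt: float, fe_cpx: float, mg_cpx: float) -> float:
    # K_D = (Fe/Mg in garnet) / (Fe/Mg in clinopyroxene)
    return (fe_grt / mg_grt) / (fe_cpx / mg_cpx)

# Example with invented microprobe-derived cation proportions:
print(f"K_D = {exchange_kd(fe_grt=1.2, mg_grt=1.8, fe_cpx=0.15, mg_cpx=0.75):.2f}")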
Although the major concern of mineralogy is to describe and classify the geometrical, chemical, and physical properties of minerals, it is also concerned with their origin. Physical chemistry and thermodynamics are basic tools for understanding mineral origin. Some of the observational data of mineralogy are concerned with the behaviour of solutions in precipitating crystalline materials under controlled conditions in the laboratory. Certain minerals can be created synthetically under conditions in which temperature and concentration of solutions are carefully monitored. Other experimental methods include study of the transformation of solids at high temperatures and pressures to yield specific minerals or assemblages of minerals. Experimental data obtained in the laboratory, coupled with chemical and physical theory, enable the conditions of origin of many naturally occurring minerals to be inferred.
Petrology
Petrology is the study of rocks, and, because most rocks are composed of minerals, petrology is strongly dependent on mineralogy. In many respects mineralogy and petrology share the same problems; for example, the physical conditions that prevail (pressure, temperature, time, and presence or absence of water) when particular minerals or mineral assemblages are formed. Although petrology is in principle concerned with rocks throughout the crust, as well as with those of the inner depths of the Earth, in practice the discipline deals mainly with those that are accessible in the outer part of the Earth’s crust. Rock specimens obtained from the surface of the Moon and from other planets are also proper considerations of petrology. Fields of specialization in petrology correspond to the aforementioned three major rock types—igneous, sedimentary, and metamorphic.
Igneous petrology
Igneous petrology is concerned with the identification, classification, origin, evolution, and processes of formation and crystallization of the igneous rocks. Most of the rocks available for study come from the Earth’s crust, but a few, such as eclogites, derive from the mantle. The scope of igneous petrology is very large because igneous rocks make up the bulk of the continental and oceanic crusts and of the mountain belts of the world, which range in age from early Archean to Neogene, and they also include the high-level volcanic extrusive rocks and the plutonic rocks that formed deep within the crust. Of utmost importance to igneous petrologic research is geochemistry, which is concerned with the major- and trace-element composition of igneous rocks as well as of the magmas from which they arose. Some of the major problems within the scope of igneous petrology are: (1) the form and structure of igneous bodies, whether they be lava flows or granitic intrusions, and their relations to surrounding rocks (these are problems studied in the field); (2) the crystallization history of the minerals that make up igneous rocks (this is determined with the petrographic polarizing microscope); (3) the classification of rocks based on textural features, grain size, and the abundance and composition of constituent minerals; (4) the fractionation of parent magmas by the process of magmatic differentiation, which may give rise to an evolutionary sequence of genetically related igneous products; (5) the mechanism of generation of magmas by partial melting of the lower continental crust, suboceanic and subcontinental mantle, and subducting slabs of oceanic lithosphere; (6) the history of formation and the composition of the present oceanic crust determined on the basis of data from the Integrated Ocean Drilling Program (IODP); (7) the evolution of igneous rocks through geologic time; (8) the composition of the mantle from studies of the rocks and mineral chemistry of eclogites brought to the surface in kimberlite pipes; (9) the conditions of pressure and temperature at which different magmas form and at which their igneous products crystallize (determined from high-pressure experimental petrology).
The basic instrument of igneous petrology is the petrographic polarizing microscope, but the majority of instruments used today have to do with determining rock and mineral chemistry. These include the X-ray fluorescence spectrometer, equipment for neutron activation analysis, induction-coupled plasma spectrometer, electron microprobe, ionprobe, and mass spectrometer. These instruments are highly computerized and automatic and produce analyses rapidly (see below Geochemistry). Complex high-pressure experimental laboratories also provide vital data.
With a vast array of sophisticated instruments available, the igneous petrologist is able to answer many fundamental questions. Study of the ocean floor has been combined with investigation of ophiolite complexes, which are interpreted as slabs of ocean floor that have been thrust onto adjacent continental margins. An ophiolite provides a much deeper section through the ocean floor than is available from shallow drill cores and dredge samples from the extant ocean floor. These studies have shown that the topmost volcanic layer consists of tholeiitic basalt or mid-ocean ridge basalt that crystallized at an accreting rift or ridge in the middle of an ocean. A combination of mineral chemistry of the basalt minerals and experimental petrology of such phases allows investigators to calculate the depth and temperature of the magma chambers along the mid-ocean ridge. The depths are close to six kilometres, and the temperatures range from 1,150 °C to 1,279 °C. Comprehensive petrologic investigation of all the layers in an ophiolite makes it possible to determine the structure and evolution of the associated magma chamber.
In 1974 B.W. Chappell and A.J.R. White discovered two major and distinct types of granitic rock—namely, I- and S-type granitoids. The I-type has strontium-87/strontium-86 ratios lower than 0.706 and contains magnetite, titanite, and allanite but no muscovite. These rocks formed above subduction zones in island arcs and active (subducting) continental margins and were ultimately derived by partial melting of mantle and subducted oceanic lithosphere. In contrast, S-type granitoids have strontium-87/strontium-86 ratios higher than 0.706 and contain muscovite, ilmenite, and monazite. These rocks were formed by partial melting of lower continental crust. Those found in the Himalayas were formed during the Miocene Epoch some 20,000,000 years ago as a result of the penetration of India into Asia, which thickened the continental crust and then caused its partial melting.
In the island arcs and active continental margins that rim the Pacific Ocean, there are many different volcanic and plutonic rocks belonging to the calc-alkaline series. These include basalt; andesite; dacite; rhyolite; ignimbrite; diorite; granite; peridotite; gabbro; and tonalite, trondhjemite, and granodiorite (TTG). They occur typically in vast batholiths, which may reach several thousand kilometres in length and contain more than 1,000 separate granitic bodies. These TTG calc-alkaline rocks represent the principal means of growth of the continental crust throughout the whole of geologic time. Much research is devoted to them in an effort to determine the source regions of their parent magmas and the chemical evolution of the magmas. It is generally agreed that these magmas were largely derived by the melting of a subducted oceanic slab and the overlying hydrated mantle wedge. One of the major influences on the evolution of these rocks is the presence of water, which was derived originally from the dehydration of the subducted slab.
Sedimentary petrology
The field of sedimentary petrology is concerned with the description and classification of sedimentary rocks, interpretation of the processes of transportation and deposition of the sedimentary materials forming the rocks, the environment that prevailed at the time the sediments were deposited, and the alteration (compaction, cementation, and chemical and mineralogical modification) of the sediments after deposition.
There are two main branches of sedimentary petrology. One branch deals with carbonate rocks, namely limestones and dolomites, composed principally of calcium carbonate (calcite) and calcium magnesium carbonate (dolomite). Much of the complexity in classifying carbonate rocks stems partly from the fact that many limestones and dolomites have been formed, directly or indirectly, through the influence of organisms, including bacteria, lime-secreting algae, various shelled organisms (e.g., mollusks and brachiopods), and by corals. In limestones and dolomites that were deposited under marine conditions, commonly in shallow warm seas, much of the material initially forming the rock consists of skeletons of lime-secreting organisms. In many examples, this skeletal material is preserved as fossils. Some of the major problems of carbonate petrology concern the physical and biological conditions of the environments in which carbonate material has been deposited, including water depth, temperature, degree of illumination by sunlight, motion by waves and currents, and the salinity and other chemical aspects of the water in which deposition occurred.
The other principal branch of sedimentary petrology is concerned with the sediments and sedimentary rocks that are essentially noncalcareous. These include sands and sandstones, clays and claystones, siltstones, conglomerates, glacial till, and varieties of sandstones, siltstones, and conglomerates (e.g., the graywacke-type sandstones and siltstones). These rocks are broadly known as clastic rocks because they consist of distinct particles or clasts. Clastic petrology is concerned with classification, particularly with respect to the mineral composition of fragments or particles, as well as the shapes of particles (angular versus rounded), and the degree of homogeneity of particle sizes. Other main concerns of clastic petrology are the mode of transportation of sedimentary materials, including the transportation of clay, silt, and fine sand by wind; and the transportation of these and coarser materials through suspension in water, through traction by waves and currents in rivers, lakes, and seas, and sediment transport by ice.
Sedimentary petrology also is concerned with the small-scale structural features of sediments and sedimentary rocks. Features that can be conveniently seen in a specimen held in the hand are within the domain of sedimentary petrology. These features include the geometrical attitude of mineral grains with respect to each other, small-scale cross stratification, the shapes and interconnections of pore spaces, and the presence of fractures and veinlets.
Instruments and methods used by sedimentary petrologists include the petrographic microscope for description and classification, X-ray mineralogy for defining fabrics and small-scale structures, physical model flume experiments for studying the effects of flow as an agent of transport and the development of sedimentary structures, and mass spectrometry for calculating stable isotopes and the temperatures of deposition, cementation, and diagenesis. Wet-suit diving permits direct observation of current processes on coral reefs, and manned submersibles enable observation at depth on the ocean floor and in mid-oceanic ridges.
The plate-tectonic theory has given rise to much interest in the relationships between sedimentation and tectonics, particularly in modern plate-tectonic environments—e.g., spreading-related settings (intracontinental rifts, early stages of intercontinental rifting such as the Red Sea, and late stages of intercontinental rifting such as the margins of the present Atlantic Ocean), mid-oceanic settings (ridges and transform faults), subduction-related settings (volcanic arcs, fore-arcs, back-arcs, and trenches), and continental collision-related settings (the Alpine-Himalayan belt and late orogenic basins with molasse [i.e., thick association of clastic sedimentary rocks consisting chiefly of sandstones and shales]). Today many subdisciplines of sedimentary petrology are concerned with the detailed investigation of the various sedimentary processes that occur within these plate-tectonic environments.
Metamorphic petrology
Metamorphism means change in form. In geology the term is used to refer to a solid-state recrystallization of earlier igneous, sedimentary, or metamorphic rocks. There are two main types of metamorphism: (1) contact metamorphism, in which changes induced largely by increase in temperature are localized at the contacts of igneous intrusions; and (2) regional metamorphism, in which increased pressure and temperature have caused recrystallization over extensive regions in mountain belts. Other types of metamorphism include local effects caused by deformation in fault zones, burning oil shales, and thrusted ophiolite complexes; extensive recrystallization caused by high heat flow in mid-ocean ridges; and shock metamorphism induced by high-pressure impacts of meteorites in craters on the Earth and Moon.
Metamorphic petrology is concerned with field relations and local tectonic environments; the description and classification of metamorphic rocks in terms of their texture and chemistry, which provides information on the nature of the premetamorphic material; the study of minerals and their chemistry (the mineral assemblages and their possible reactions), which yields data on the temperatures and pressures at which the rocks recrystallized; and the study of fabrics and the relations of mineral growth to deformation stages and major structures, which provides information about the tectonic conditions under which regional metamorphic rocks formed.
A supplement to metamorphism is metasomatism: the introduction and expulsion of fluids and elements through rocks during recrystallization. When new crust is formed and metamorphosed at a mid-oceanic ridge, seawater penetrates into the crust for a few kilometres and carries much sodium with it. During formation of a contact metamorphic aureole around a granitic intrusion, hydrothermal fluids carrying elements such as iron, boron, and fluorine pass from the granite into the wall rocks. When the continental crust is thickened, its lower part may suffer dehydration and form granulites. The expelled fluids, carrying such heat-producing elements as rubidium, uranium, and thorium migrate upward into the upper crust. Much petrologic research is concerned with determining the amount and composition of fluids that have passed through rocks during these metamorphic processes.
The basic instrument used by the metamorphic petrologist is the petrographic microscope, which allows detailed study and definition of mineral types, assemblages, and reactions. If a heating/freezing stage is attached to the microscope, the temperature of formation and composition of fluid inclusions within minerals can be calculated. These inclusions are remnants of the fluids that passed through the rocks during the final stages of their recrystallization. The electron microprobe is widely used for analyzing the composition of the component minerals. The petrologist can combine the mineral chemistry with data from experimental studies and thermodynamics to calculate the pressures and temperatures at which the rocks recrystallized. By obtaining information on the isotopic age of successive metamorphic events with a mass spectrometer, pressure–temperature–time curves can be worked out. These curves chart the movement of the rocks over time as they were brought to the surface from deep within the continental crust; this technique is important for understanding metamorphic processes. Some continental metamorphic rocks that contain diamonds and coesites (ultrahigh pressure minerals) have been carried down subduction zones to a depth of at least 100 kilometres (60 miles), brought up, and often exposed at the present surface within resistant eclogites of collisional orogenic belts—such as the Swiss Alps, the Himalayas, the Kokchetav metamorphic terrane in Kazakhstan, and the Variscan belt in Germany. These examples demonstrate that metamorphic petrology plays a key role in unraveling tectonic processes in mountain belts that have passed through the plate-tectonic cycle of events.
2306) Grassland
Gist:
Grasslands are generally open and continuous, fairly flat areas of grass. They are often located between temperate forests at high latitudes and deserts at subtropical latitudes.
Grasslands go by many names. In the United States Midwest, they're often called prairies. In South America, they're known as pampas. Central Eurasian grasslands are referred to as steppes, while African grasslands are savannas.
Summary
Grassland is an area in which the vegetation is dominated by a nearly continuous cover of grasses. Grasslands occur in environments conducive to the growth of this plant cover but not to that of taller plants, particularly trees and shrubs. The factors preventing establishment of such taller, woody vegetation are varied.
Grasslands are one of the most widespread of all the major vegetation types of the world. This is so, however, only because human manipulation of the land has significantly altered the natural vegetation, creating artificial grasslands of cereal crops, pastures, and other areas that require some form of repetitious, unnatural disturbance such as cultivation, heavy grazing, burning, or mowing to persist. This discussion, however, concentrates on natural and nearly natural grasslands.
Origin
The most extensive natural grasslands can be thought of as intermediates in an environmental gradient, with forests at one end and deserts at the other. Forests occupy the most favourable environments, where moisture is adequate for growth and survival of a tall, dense vegetation dominated by trees. Deserts are found where moisture is so lacking that a continuous, permanent vegetation cover cannot be maintained. Grasslands lie between these two extremes.
Like the savannas, deserts, and scrublands into which they commonly blend, grasslands arose during the period of cooling and drying of the global climate, which occurred during the Cenozoic Era (65.5 million years ago to the present). Indeed, the grass family itself (Poaceae or Gramineae) evolved only early in this era. The date of earliest appearance of grasslands varies from region to region. In several regions a succession of vegetation types can be recognized in the Cenozoic fossil record, as climate dried out progressively. For example, in central Australia during the past 50 million years tropical rainforest gave way successively to savanna, grassland, and, finally, desert. In some places expansion of grasslands to something approaching their modern extent occurred only during the extremely cold, dry intervals—called ice ages in north temperate regions—of the past two million years.
A dynamic balance commonly exists between grasslands and related vegetation types. Droughts, fires, or episodes of heavy grazing favour grassland at some times, and wet seasons and an absence of significant disturbances favour woody vegetation at others. Changes in the severity or frequency of these factors can cause a change from one vegetation type to another.
Other grassland types occur in places too cold for trees to grow—i.e., beyond the forest limits of high mountains or at high latitudes. A characteristic type of grassland in cool, moist parts of the Southern Hemisphere is tussock grassland, dominated by tussock or bunch grasses that develop pedestals of matted stems, giving the vegetation a lumpy appearance. Tussock grasslands occur at various latitudes. In the tropics they are found above the forest limit on some high mountains—e.g., in New Guinea and East Africa. At the higher latitudes of the Southern Ocean they form the main vegetation of subantarctic islands. They are also typical of the drier, colder parts of New Zealand and the southernmost regions of South America.
Not all natural grasslands, however, arise from climate-related circumstances. Woody plants may be prevented from growing in certain areas for other reasons, allowing grasses to dominate. One cause is seasonal flooding or waterlogging, which is responsible for the creation and maintenance of large grasslands in parts of the highly seasonal subtropics and in smaller areas of other regions. One of the best examples of a seasonally flooded subtropical grassland is the Pantanal in the Mato Grosso region of Brazil. Across an area of 140,000 square kilometres (54,000 square miles), dry grasslands prevail for half of each year and shallow wetlands for the other, with small forest patches restricted to low rises that do not flood during the wet season. In many other areas where climate is suitable for forest growth, very shallow or infertile soils may prevent tree growth and result in development of grassland.
The largest areas of natural grassland—those resulting from climatic dryness—can be classified into two broad categories: tropical grasslands, which generally lie between the belts of tropical forest and desert; and temperate grasslands, which generally lie between deserts and temperate forests. Tropical grasslands occur in the same regions as savannas, and the distinction between these two vegetation types is rather arbitrary, depending on whether there are few or many trees. Likewise, temperate grasslands may have a scattering of shrubs or trees that blurs their boundaries when they occur adjacent to scrublands or temperate forests.
Tropical grasslands are found mainly in the Sahel south of the Sahara, in East Africa, and in Australia. Temperate grasslands principally occur in North America, Argentina, and across a broad band from Ukraine to China, but in most of these regions they have been substantially altered by agricultural activities.
Many grasslands formerly supposed to be natural are now recognized as having once been forests that grew in a marginally dry climate. Early human disturbance is responsible for their transformation. For example, almost the entire extensive lowland grasslands of the eastern part of the South Island, New Zealand, are believed to have been created by forest-burning carried out by the Polynesians—the country’s first colonists—during the eight centuries before European settlement in the 18th century.
Seminatural grasslands may occur where woody vegetation was once cleared for agricultural purposes that have since been abandoned; a return to the original vegetation is prevented by repeated burning or grazing. In wet tropical regions these types of grasslands may be very dense, such as those in East Africa that are dominated by elephant grass (Pennisetum purpureum) or in New Guinea by pit-pit grass (Miscanthus floridulus), both of which grow 3 metres (9.8 feet) tall.
All areas of grassland may owe something of their area and character to a long history of interaction with humans, particularly through the medium of fire.
Environment
Grassland climates are varied, but all large regions of natural grassland are generally hot, at least in summer, and dry, though not to the extent that deserts are. In general, tropical grasslands receive 500 to 1,500 millimetres (20 to 60 inches) of rain in an average year and in every season experience temperatures of about 15 to 35 °C (59 to 95 °F). The dry season may last as long as eight months. An excess of rainfall over evaporation, leading to ephemeral river flow, occurs only during the wet season. The tropical grassland climate overlaps very broadly with that of savanna. As previously stated, these vegetation types differ little from each other, a savanna being merely a grassland with scattered trees. Small changes in management and usage can convert one to the other.
Temperate grasslands are somewhat drier than tropical grasslands and also colder, at least for part of the year. Seasonal temperature variation may be slight in tropical grasslands but may vary by as much as 40 °C (72 °F) in temperate grassland areas. Mean annual rainfall in the North American grassland areas is 300 to 600 millimetres. Mean temperatures in January range from −18 °C (0 °F) in the north to 10 °C (50 °F) in the south, with corresponding values in July being 18 °C (64 °F) and 28 °C (82 °F). Mean annual temperature in the most northerly areas of the North American grassland zone is below 0 °C (32 °F).
Occurring as they do across a wide range of climatic and geologic conditions, grasslands are associated with many different types of soil. The grassland ecosystem itself influences soil formation, and this causes grassland soils to differ from other soils. The nature of grass litter and its pattern of decomposition commonly result in the development of a dark, organically rich upper soil layer that can reach 300 millimetres below the surface. This layer is absent from desert soils and is different from the surface layer of rotting leaf litter typical of forest soils. It is friable in structure and rich in plant nutrients. Lower soil layers are typically pale and yellowish, especially at depths close to two metres.
Details
A grassland is an area where the vegetation is dominated by grasses (Poaceae). However, sedge (Cyperaceae) and rush (Juncaceae) can also be found along with variable proportions of legumes, such as clover, and other herbs. Grasslands occur naturally on all continents except Antarctica and are found in most ecoregions of the Earth. Furthermore, grasslands are one of the largest biomes on Earth and dominate the landscape worldwide. There are different types of grasslands: natural grasslands, semi-natural grasslands, and agricultural grasslands. They cover 31–69% of the Earth's land area.
Definitions
Included among the variety of definitions for grasslands are:
* "...any plant community, including harvested forages, in which grasses and/or legumes make up the dominant vegetation."
* "...terrestrial ecosystems dominated by herbaceous and shrub vegetation, and maintained by fire, grazing, drought and/or freezing temperatures." (Pilot Assessment of Global Ecosystems, 2000)
* "A region with sufficient average annual precipitation (25-75 cm) to support grass..." (Stiling, 1999)
Semi-natural grasslands are a very common subcategory of the grasslands biome. These can be defined as:
* Grassland existing as a result of human activity (mowing or livestock grazing), where environmental conditions and the species pool are maintained by natural processes.
They can also be described as the following:
* "Semi-natural grasslands are one of the world's most biodiverse habitats on a small spatial scales."
* "Semi-natural grasslands belong to the most species rich ecosystems in the world."
* "...have been formed over the course of centuries through extensive grazing and mowing."
* "...without the use of pesticides or fertilisers in modern times."
There are many different types of semi-natural grasslands, e.g. hay meadows.
Evolutionary history
The graminoids are among the most versatile life forms. They became widespread toward the end of the Cretaceous period, and fossilized dinosaur feces (coprolites) have been found containing phytoliths of a variety of grasses, including grasses related to modern rice and bamboo.
The appearance of mountains in the western United States during the Miocene and Pliocene epochs, a period of some 25 million years, created a continental climate favourable to the evolution of grasslands.
Around 5 million years ago, during the Late Miocene in the New World and the Pliocene in the Old World, the first true grasslands occurred. Existing forest biomes declined, and grasslands became much more widespread. It is known that grasslands have existed in Europe throughout the Pleistocene (the last 1.8 million years). Following the Pleistocene ice ages (with their glacials and interglacials), grasslands expanded in the hotter, drier climates and began to become the dominant land feature worldwide. Because grasslands have existed for over 1.8 million years, they show high variability: for example, steppe-tundra dominated in Northern and Central Europe, whereas xerothermic grasslands were more extensive in the Mediterranean area. Within temperate Europe, the range of grassland types is quite wide and has also become distinctive through the exchange of species and genetic material between different biomes.
Semi-natural grasslands first appeared when humans started farming: forests in Europe were cleared for agriculture, and the cleared areas suitable for cultivation became ancient meadows and pastures, from which the semi-natural grasslands were formed.[9] However, there is also evidence for the local persistence of natural grasslands in Europe, originally maintained by wild herbivores, throughout the pre-Neolithic Holocene. The removal of plants by grazing animals, and later by mowing farmers, allowed other plant species to coexist; plant biodiversity subsequently increased, and the species that already lived there adapted to the new conditions.
Most of these grassland areas have since been turned into arable fields and have disappeared again; because of the steady decrease in organic matter, such grasslands became permanent arable cropping fields. Nowadays, semi-natural grasslands are mostly located in areas that are unsuitable for agricultural farming.
Ecology:
Biodiversity
Grasslands dominated by unsown wild-plant communities ("unimproved grasslands") can be called either natural or "semi-natural" habitat. Although their plant communities are natural, their maintenance depends upon anthropogenic activities such as grazing and cutting regimes. The semi-natural grasslands contain many species of wild plants, including grasses, sedges, rushes, and herbs; 25 plant species per 100 square centimeters can be found. A European record, from a meadow in Estonia, is 76 species of plants in one square meter. Chalk downlands in England can support over 40 species per square meter.
In many parts of the world, few examples have escaped agricultural improvement (fertilizing, weed killing, plowing, or re-seeding). For example, original North American prairie grasslands or lowland wildflower meadows in the UK are now rare and their associated wild flora equally threatened. Associated with the wild-plant diversity of the "unimproved" grasslands is usually a rich invertebrate fauna; there are also many species of birds that are grassland "specialists", such as the snipe and the little bustard. Because semi-natural grasslands are regarded as among the most species-rich ecosystems in the world and as essential habitat for many specialists, including pollinators, they have recently become the focus of many conservation efforts.
Agriculturally improved grasslands, which dominate modern intensive agricultural landscapes, are usually poor in wild plant species due to the original diversity of plants having been destroyed by cultivation and by the use of fertilizers.
Almost 90% of European semi-natural grasslands no longer exist, having been lost for political and economic reasons. This loss took place during the 20th century. Those in Western and Central Europe have almost completely disappeared; a few remain in Northern Europe.
Unfortunately, a large number of red-listed species are specialists of semi-natural grasslands and have been affected by the agricultural landscape changes of the last century.
The original wild-plant communities have been replaced by sown monocultures of cultivated varieties of grasses and clovers, such as perennial ryegrass and white clover. In many parts of the world, "unimproved" grasslands are one of the most threatened types of habitat, and a target for acquisition by wildlife conservation groups or for special grants to landowners who are encouraged to manage them appropriately.
Vegetation
Grassland vegetation can vary considerably depending on the grassland type and on how strongly it is affected by human impact. Dominant trees for the semi-natural grassland are Quercus robur, Betula pendula, Corylus avellana, Crataegus and many kinds of herbs.
In chalk grassland, the plants can vary from very tall to very short. Quite tall grasses can be found in North American tallgrass prairie, South American grasslands, and African savanna. Woody plants, shrubs or trees may occur on some grasslands—forming savannas, scrubby grassland or semi-wooded grassland, such as the African savannas or the Iberian deheza.
As flowering plants and trees, grasses grow in great concentrations in climates where annual rainfall ranges between 500 and 900 mm (20 and 35 in). The root systems of perennial grasses and forbs form complex mats that hold the soil in place.
Fauna
Grasslands support the greatest aggregations of large animals on Earth, including jaguars, African wild dogs, pronghorn, black-footed ferret, plains bison, mountain plover, African elephant, Sunda tiger, black rhino, white rhino, savanna elephant, greater one-horned rhino, Indian elephant and swift fox. Grazing and herd animals, together with predators such as lions and cheetahs, live in the grasslands of the African savanna. Mites, insect larvae, nematodes, and earthworms inhabit deep soil, which can reach 6 metres (20 feet) underground in undisturbed grasslands on the richest soils of the world. These invertebrates, along with symbiotic fungi, extend the root systems, break apart hard soil, enrich it with urea and other natural fertilizers, trap minerals and water, and promote growth. Some types of fungi make the plants more resistant to insect and microbial attacks.
Grassland in all its form supports a vast variety of mammals, reptiles, birds, and insects. Typical large mammals include the blue wildebeest, American bison, giant anteater, and Przewalski's horse.
The plants and animals that live in grasslands are connected through a complex web of interactions. The removal of key species, such as buffalo and prairie dogs in the American West, and the introduction of invasive species, like cane toads in northern Australia, have disrupted the balance in these ecosystems and damaged a number of other species. Grasslands are home to some of the most magnificent animals on the planet, including elephants, bison, and lions, and hunters have found them to be enticing prey. When hunting is not controlled or is conducted illegally, species can become extinct.
Additional Information
Grasslands go by many names. In the United States Midwest, they're often called prairies. In South America, they're known as pampas. Central Eurasian grasslands are referred to as steppes, while African grasslands are savannas. What they all have in common are grasses, their naturally dominant vegetation. Grasslands are found where there is not enough regular rainfall to support the growth of a forest, but not so little that a desert forms. In fact, grasslands often lie between forests and deserts. Depending on how they’re defined, grasslands account for between 20 and 40 percent of the world's land area. They are generally open and fairly flat, and they exist on every continent except Antarctica, which makes them vulnerable to pressure from human populations. Threats to natural grasslands, as well as the wildlife that live on them, include farming, overgrazing, invasive species, illegal hunting, and climate change.
At the same time, grasslands could help mitigate climate change: One study found California's grasslands and rangelands could store more carbon than forests because they are less susceptible to wildfires and drought. Still, only a small percentage—less than 10 percent—of the world's grassland is protected.
Types of Grasslands
There are two main kinds of grasslands: tropical and temperate. Examples of temperate grasslands include Eurasian steppes, North American prairies, and Argentine pampas. Tropical grasslands include the hot savannas of sub-Saharan Africa and northern Australia.
Rainfall can vary across grasslands from season to season and year to year, ranging from 25.4 to 101.6 centimeters (10 to 40 inches) annually. Temperatures can range from below freezing in temperate grasslands to above 32.2 degrees Celsius (90 degrees Fahrenheit).
The height of vegetation on grasslands varies with the amount of rainfall. Some grasses might be under 0.3 meters (one foot) tall, while others can grow as high as 2.1 meters (seven feet). Their roots can extend 0.9 to 1.8 meters (three to six feet) deep into the soil. The combination of underground biomass with moderate rainfall—heavy rain can wash away nutrients—tends to make grassland soils very fertile and appealing for agricultural use. Much of the North American prairielands have been converted into land for crops, posing threats to species that depend on those habitats, as well as drinking water sources for people who live nearby.
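The metric and imperial figures quoted in the two paragraphs above are simple unit conversions; the short Python sketch below reproduces them. The helper names and rounding choices are illustrative assumptions only, not taken from any source.
# Unit-conversion helpers for the rainfall, temperature, and height figures quoted above.
# Function names and rounding choices are illustrative assumptions only.
def inches_to_cm(inches: float) -> float:
    return inches * 2.54                 # 1 inch = 2.54 cm exactly

def feet_to_m(feet: float) -> float:
    return feet * 0.3048                 # 1 foot = 0.3048 m exactly

def fahrenheit_to_celsius(f: float) -> float:
    return (f - 32) * 5.0 / 9.0

print(inches_to_cm(10), inches_to_cm(40))              # 25.4 and 101.6 cm of annual rainfall
print(round(feet_to_m(1), 1), round(feet_to_m(7), 1))  # about 0.3 m and 2.1 m of vegetation height
print(round(fahrenheit_to_celsius(90), 1))             # about 32.2 degrees Celsius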
Grassland Plants and Animals
Grasslands support a variety of species. Vegetation on the African savannas, for example, feeds animals including zebras, wildebeest, gazelles, and giraffes. On temperate grasslands, you might find prairie dogs, badgers, coyotes, swift foxes, and a variety of birds. There can be up to 25 species of large plant-eaters in a given grassland habitat, comprising a sort of buffet where different grasses appeal to different species.
Some grass species in these habitats include red oat grass (Themeda triandra) and Rhodes grass (Chloris gayana) in tropical savannas, and purple needlegrass (Nassella pulchra) and galleta in temperate areas. When the rainy season arrives, many grasslands become coated with wildflowers such as yarrow (Achillea millefolium), hyssop, and milkweed. The plants of grasslands have adapted to the drought, fires, and grazing common to that habitat.
Fires, both natural and human-caused, are important factors shaping grasslands. In the U.S. Midwest, for example, Native Americans set fires to help maintain grasslands for game species, such as bison. Fire can also help prevent fire-intolerant trees and shrubs from taking over while increasing the diversity of wildflowers that support pollinators.
2307) Hospital Management
Gist
What is meant by hospital management?
Hospital management is the discipline of managing the functioning of a hospital or other health care unit. It integrates the various departments of the unit, such as the clinical, non-clinical and supporting departments. The health care services it oversees must be comprehensive: preventive, curative and rehabilitative.
What is the duty of hospital management?
* Overseeing hospital staff and ensuring adherence to hospital policies and procedures.
* Managing human resources functions such as recruitment, training, and performance evaluation.
* Coordinating with various departments to ensure smooth operations and effective communication.
Summary
General hospitals may be academic health facilities or community-based entities. They are general in the sense that they admit all types of medical and surgical cases, and they concentrate on patients with acute illnesses needing relatively short-term care. Community general hospitals vary in their bed numbers. Each general hospital, however, has an organized medical staff, a professional staff of other health providers (such as nurses, technicians, dietitians, and physiotherapists), and basic diagnostic equipment. In addition to the essential services relating to patient care, and depending on size and location, a community general hospital may also have a pharmacy, a laboratory, sophisticated diagnostic services (such as radiology and angiography), physical therapy departments, an obstetrical unit (a nursery and a delivery room), operating rooms, recovery rooms, an outpatient department, and an emergency department. Smaller hospitals may diagnose and stabilize patients prior to transfer to facilities with specialty services.
In larger hospitals there may be additional facilities: dental services, a nursery for premature infants, an organ bank for use in transplantation, a department of renal dialysis (removal of wastes from the blood by passing it through semipermeable membranes, as in the artificial kidney), equipment for inhalation therapy, an intensive care unit, a volunteer-services department, and, possibly, a home-care program or access to home-care placement services.
The complexity of the general hospital is in large part a reflection of advances in diagnostic and treatment technologies. Such advances range from the 20th-century introduction of antibiotics and laboratory procedures to the continued emergence of new surgical techniques, new materials and equipment for complex therapies (e.g., nuclear medicine and radiation therapy), and new approaches to and equipment for physical therapy and rehabilitation.
The legally constituted governing body of the hospital, with full responsibility for the conduct and efficient management of the hospital, is usually a hospital board. The board establishes policy and, on the advice of a medical advisory board, appoints a medical staff and an administrator. It exercises control over expenditures and has the responsibility for maintaining professional standards.
The administrator is the chief executive officer of the hospital and is responsible to the board. In a large hospital there are many separate departments, each of which is controlled by a department head. The largest department in any hospital is nursing, followed by the dietary department and housekeeping. Examples of other departments that are important to the functioning of the hospital include laundry, engineering, stores, purchasing, accounting, pharmacy, physical and occupational therapy, social service, pathology, X-ray, and medical records.
The medical staff is also organized into departments, such as surgery, medicine, obstetrics, and pediatrics. The degree of departmentalization of the medical staff depends on the specialization of its members and not primarily on the size of the hospital, although there is usually some correlation between the two. The chiefs of the medical-staff departments, along with the chiefs of radiology and pathology, make up the medical advisory board, which usually holds regular meetings on medical-administrative matters. The professional work of the individual staff members is reviewed by medical-staff committees. In a large hospital the committees may report to the medical advisory board; in a smaller hospital, to the medical staff directly, at regular staff meetings.
General hospitals often also have a formal or an informal role as teaching institutions. When formally designated as such, teaching hospitals are affiliated with undergraduate and postgraduate education of health professionals at a university, and they provide up-to-date and often specialized therapeutic measures and facilities unavailable elsewhere in the region. As teaching hospitals have become more specialized, general hospitals have become more involved in providing general clinical training to students in a variety of health professions.
Specialized health and medical care facilities
Hospitals that specialize in one type of illness or one type of patient can generally be found in the developed world. In large university centres where postgraduate teaching is carried out on a large scale, such specialized health services often are a department of the general hospital or a satellite operation of the hospital. Changing conditions or modes of treatment have lessened the need or reduced the number of some types of specialized institutions; this may be seen in the cases of tuberculosis, leprosy, and mental hospitals. On the other hand, specialized surgical centres and cancer centres have increased in number.
Details
Health administration, healthcare administration, healthcare management or hospital management is the field relating to leadership, management, and administration of public health systems, health care systems, hospitals, and hospital networks in all the primary, secondary, and tertiary sectors.
Terminology
Health systems management or health care systems management describes the leadership and general management of hospitals, hospital networks, and/or health care systems. In international use, the term refers to management at all levels. In the United States, management of a single institution (e.g. a hospital) is also referred to as "medical and health services management", "healthcare management", or "health administration".
Health systems management ensures that specific outcomes are attained, that departments within a health facility are running smoothly, that the right people are in the right jobs, that people know what is expected of them, that resources are used efficiently, and that all departments are working towards a common goal for mutual development and growth.
Hospital administrators
Hospital administrators are individuals or groups of people who act as the central point of control within hospitals. These individuals may be previous or current clinicians, or individuals with other healthcare backgrounds. There are two types of administrators, generalists and specialists. Generalists are individuals who are responsible for managing or helping to manage an entire facility. Specialists are individuals who are responsible for the efficient and effective operations of a specific department such as policy analysis, finance, accounting, budgeting, human resources, or marketing.
It was reported in September 2014 that the United States spends roughly $218 billion per year on hospital administration costs, equivalent to 1.43 percent of the total U.S. economy. Hospital administration's share of the U.S. economy grew from 0.9 percent in 2000 to 1.43 percent in 2012, according to Health Affairs. In 11 countries studied, hospitals allocate approximately 12 percent of their budgets to administrative costs; in the United States, hospitals spend about 25 percent.
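As a rough arithmetic check, the dollar figure and the quoted share of the economy can be related directly. The sketch below is illustrative only; the GDP value is an approximate outside assumption, not a figure from this text.
# Rough consistency check of the administrative-cost share quoted above.
# The GDP value is an assumed approximation (about $16 trillion for 2012),
# used only for illustration; it does not come from the source text.
hospital_admin_cost = 218e9      # reported annual U.S. hospital administration spending, in dollars
approx_us_gdp_2012 = 16.0e12     # assumed approximate 2012 U.S. GDP, in dollars
share_of_economy = hospital_admin_cost / approx_us_gdp_2012 * 100
print(f"Implied share of the economy: {share_of_economy:.2f}%")  # about 1.4%, in line with the reported 1.43%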
Competencies
NCHL competencies are those required to engage with credibility, creativity, and motivation in complex and dynamic health care environments. They include:
* Accountability
* Achievement orientation
* Change leadership
* Collaboration
* Communication skills
* Financial Skills
* Impact and influence
* Innovative thinking
* Organizational awareness
* Professionalism
* Self-confidence
* Strategic orientation
* Talent development
* Team leadership
* Training and organizations
Associated qualifications
Health care management is usually studied through healthcare administration or healthcare management programs in a business school or, in some institutions, in a school of public health.
North America
Although many colleges and universities are offering a bachelor's degree in healthcare administration or human resources, a master's degree is considered the "standard credential" for most health administrators in the United States. Research and academic-based doctorate level degrees, such as the Doctor of Philosophy (PhD) in Health Administration and the Doctor of Health Administration (DHA) degree, prepare health care professionals to turn their clinical or administrative experiences into opportunities to develop new knowledge and practice, teach, shape public policy and/or lead complex organizations. There are multiple recognized degree types that are considered equivalent from the perspective of professional preparation.
The Commission on the Accreditation of Healthcare Management Education (CAHME) is the accrediting body overseeing master's-level programs in the United States and Canada on behalf of the United States Department of Education. It accredits several degree program types, including the Master of Hospital Administration (MHA), Master of Health Services Administration (MHSA), Master of Business Administration in Hospital Management (MBA-HM), Master of Health Administration (MHA), Master of Public Health (MPH, MSPH, MSHPM), Master of Science (MS-HSM, MS-HA), Master of Public Administration (MPA), and Master of Hospital Management (MHM).
Professional organizations
There are a variety of professional associations related to health systems management, which can be subcategorized as either personal or institutional membership groups. Personal membership groups are joined by individuals and typically focus on individual skills and career development; larger examples include the Healthcare Financial Management Association and the Healthcare Information and Management Systems Society. Institutional membership groups are joined by organizations; they typically focus on organizational effectiveness and may also include data-sharing agreements and other medical or administrative practice-sharing vehicles for member organizations. Prominent examples include the American Hospital Association and the University Healthsystems Consortium.
System processes
A career in healthcare administration consists of organizing, developing, and managing medical and health services. These responsibilities are carried out in hospitals, clinics, managed care companies, public health agencies, and comparable establishments. The work involves substantial paperwork and minimal clinical engagement. Healthcare administrators promote excellence in patient care, patient satisfaction, and relationships with physicians; to do this they must ensure that employees follow protocols and maintain a positive attitude with patients, since the entire organization runs better when everything is organized and protocols are in place. Physicians occupy a dual role as both consumers of healthcare resources and controllers of organizational revenue, through their ability to direct patients and prescribe care, which makes leaders' relationships with physicians atypical compared with key stakeholder relationships in other industries. Administrators can become overworked, and physicians can feel stressed by the various protocols, yet stakeholders and patients together form the backbone of a well-run healthcare administration. Administrators ensure that doctors, insurance companies, patients, and other healthcare providers have access to the files they need to provide appropriate treatment. The multiple hierarchies of professionals on both the clinical and administrative sides of the organization create special challenges for directing and coordinating a healthcare organization. A healthcare administrator has a long-term effect on improving the hospital's operational processes and plays a vital role in the sustainability of the institution.
Funding of hospitals
Healthcare administrators are in charge of hospital finances and advocate strategies to improve their facilities and resources. Hospitals must fund items such as marketing, charity events, equipment, medicine, and payroll. At the same time, an institution cannot be all things to all people; it has its own limitations, so administrators manage these funds carefully within spending limits and control expenditures so that the hospital can remain profitable. Hospitals are sometimes limited in what they can do for patients, and the administrators who run them strive to achieve their goals within those financial constraints. Research on healthcare employment growth and workforce composition in the U.S. has examined the labor market's impact on healthcare spending and health outcomes; when healthcare spending falls, employment growth slows as well. Healthcare administration is critical to the lives of people in hospitals: it contributes to cost-saving practices, ensures that necessities are brought into the institution, and makes sure that protocols and funds are properly organized for each department. Administrators are responsible for keeping the healthcare industry afloat, and many hospitals host and donate to charity events as well.
Overall goal
The fundamental goal of a hospital administrator is to create a positive work environment in which patients are treated as efficiently and cost-effectively as possible. The United States leads the world in high-quality, advanced healthcare. A mission statement establishes the organization's purpose and gives employees a sense of belonging and identity; because everyone works towards the common goal it defines, the organization's efficiency and productivity improve, and management and stakeholders are encouraged to put in more effort to achieve success. The ultimate purpose of health care is to help individuals regain their overall health and wellbeing.
Research
Health policy and systems research (HPSR) is a field of inquiry that studies "how societies organize themselves in achieving collective health goals, and how different actors interact in the policy and implementation processes to contribute to policy outcomes". HPSR is interdisciplinary and brings together expertise in a variety of biomedical and social sciences such as economics, sociology, anthropology, political science, public health and epidemiology.
The Commission on Health Research for Development and the Ad Hoc Committee on Health Research both highlighted the urgent need to focus research methods, funding and practice on addressing health inequities and embracing interdisciplinary and intersectoral thinking. These reports, and other academic and activist voices linked to them, argued for greater voice and participation of developing countries in defining research priorities. Since then, the creation of the Alliance for Health Policy and Systems Research in 2000 and of Health Systems Global in 2012 has consolidated the HPSR practice community.
History
Early hospital administrators were called patient directors or superintendents. At the time, many were nurses who had taken on administrative responsibilities; over half of the members of the American Hospital Association were graduate nurses in 1916. Other superintendents were medical doctors, laymen and members of the clergy. The first degree-granting program in the United States was established at Marquette University in Milwaukee, Wisconsin, and by 1927 the first two students had received their degrees. The original idea is credited to Father Moulinier, associated with the Catholic Hospital Association. The first modern health systems management program was established in 1934 at the University of Chicago. At the time, programs were completed in two years: one year of formal graduate study and one year of practicing internship. In 1958, the Sloan program at Cornell University began offering a special program requiring two years of formal study, which remains the dominant structure in the United States and Canada today (see also "Academic Preparation").
Health systems management has been described as a "hidden" health profession because of the relatively low-profile role managers take in health systems, in comparison to direct-care professions such as nursing and medicine. However, the visibility of the management profession within healthcare has been rising in recent years, due largely to the widespread problems developed countries are having in balancing cost, access, and quality in their hospitals and health systems.
2308) Psychopathy
Gist:
Psychopathy, or psychopathic personality, is a personality construct characterized by impaired empathy and remorse, in combination with traits of boldness, disinhibition, and egocentrism. These traits are often masked by superficial charm and immunity to stress, which create an outward appearance of apparent normalcy.
Accumulating research suggests that psychopathy follows a developmental trajectory with strong genetic influences, and which precipitates deleterious effects on widespread functional networks, particularly within paralimbic regions of the brain.
Summary
Psychopathy is not a formal clinical diagnosis. The term is often used to refer to symptoms of antisocial personality disorder (ASPD), like low empathy, manipulation tendencies, and lack of remorse.
The word “psychopathy” was first used in the late 1800s to refer to people with some mental health conditions. The term comes from two Greek words that combined mean “suffering soul.”
These days, psychopathy is not a widely accepted clinical term or diagnosis. But people may still commonly use it to refer to antisocial personality disorder (ASPD).
And, while some clinicians still use the term "psychopathy" to refer to a severe subtype of ASPD, the general consensus is that this subtype just falls under the umbrella of ASPD.
What is antisocial personality disorder (ASPD)?
ASPD is a pattern of unconcern or disinterest in the rights and needs of others, often paired with a tendency for impulsivity and lack of remorse. It has also been called psychopathy and sociopathy.
ASPD is 1 of 10 personality disorders listed in the Diagnostic and Statistical Manual of Mental Disorders, 5th edition, text revision (DSM-5-TR). This is a reference handbook used by most U.S. mental health professionals when establishing a mental health diagnosis.
A personality disorder is a lifelong mental health condition that affects how you behave and feel about others and yourself, often causing a high degree of distress or impairment. It may affect aspects of your life, like the way you think, how you feel, how you interact with others, and your ability to control your impulses.
Unlike other personality disorders, only people over the age of 18 years can receive a diagnosis of ASPD, although symptoms often develop earlier than that.
Psychopathy in children and teens
Because the diagnosis of antisocial personality disorder (ASPD) is typically delayed until the age of 18, younger people who display similar symptoms are often evaluated for conduct disorder (CD) or oppositional defiant disorder (ODD).
Of the two behavioral disorders, CD tends to manifest with more severe symptoms than ODD.
When determining whether a child has ODD, mental health professionals may look at how they act around people they know. Typically, someone with ODD is more likely to act oppositional or defiant around family members, teachers, or a healthcare professional.
If an adolescent or teen shows an ongoing pattern of aggression toward others and regularly makes choices in opposition to the rules and social norms at home, school, or with peers, a clinician may decide to evaluate for CD.
To receive a diagnosis of ASPD, there must be enough evidence of problematic conduct or a previous diagnosis of CD by age 15.
Symptoms of psychopathy
According to the DSM-5-TR, these are the criteria to diagnose ASPD:
* behavior that often conflicts with social norms, particularly not following laws and regulations
* disregarding or violating the rights and feelings of others
* difficulty feeling remorse or empathy, or justifying actions that have hurt others
* tendency to lie, manipulate, and deceive others, often for personal enjoyment or gain
* general disregard for the safety of self and others
* avoidance of responsibility, including work and financial commitments
* aggressive and violent behavior and constant irritability
* difficulty planning ahead or managing impulses
These symptoms must be evident across different situations and over time, and they are often evident since adolescence, although a diagnosis can only be confirmed after age 18.
Lack of awareness is also often present in antisocial personality disorder, which means most people with the condition don’t realize they have it or see anything problematic with their behavior. As a result, few people seek treatment.
ASPD is more commonly diagnosed in men, and symptoms may slightly improve with age for some people.
Details
Psychopathy, or psychopathic personality, is a personality construct characterized by impaired empathy and remorse, in combination with traits of boldness, disinhibition, and egocentrism. These traits are often masked by superficial charm and immunity to stress, which create an outward appearance of apparent normalcy.
Hervey M. Cleckley, an American psychiatrist, influenced the initial diagnostic criteria for antisocial personality reaction/disturbance in the Diagnostic and Statistical Manual of Mental Disorders (DSM), as did American psychologist George E. Partridge. The DSM and International Classification of Diseases (ICD) subsequently introduced the diagnoses of antisocial personality disorder (ASPD) and dissocial personality disorder (DPD) respectively, stating that these diagnoses have been referred to (or include what is referred to) as psychopathy or sociopathy. The creation of ASPD and DPD was driven by the fact that many of the classic traits of psychopathy were impossible to measure objectively. Canadian psychologist Robert D. Hare later re-popularized the construct of psychopathy in criminology with his Psychopathy Checklist.
Although no psychiatric or psychological organization has sanctioned a diagnosis titled "psychopathy", assessments of psychopathic characteristics are widely used in criminal justice settings in some nations and may have important consequences for individuals. The study of psychopathy is an active field of research. The term is also used by the general public, popular press, and in fictional portrayals. While the abbreviated term "psycho" is often employed in common usage in general media along with "crazy", "insane", and "mentally ill", there is a categorical difference between psychosis and psychopathy.
Signs and symptoms
Socially, psychopathy typically involves extensive callous and manipulative self-serving behaviors with no regard for others and often is associated with repeated delinquency, crime, and violence. Mentally, impairments in processes related to affect and cognition, particularly socially related mental processes, have also been found. Developmentally, symptoms of psychopathy have been identified in young children with conduct disorder and suggest at least a partial constitutional factor that influences its development.
Primary features
Disagreement exists over which features should be considered as part of psychopathy, with researchers identifying around 40 traits supposedly indicative of the construct, though the following characteristics are almost universally considered central.
Core traits
Cooke and Michie (2001) proposed a three-factor model of the Psychopathy Checklist-Revised which has seen widespread application in other measures (e.g., Youth Psychopathic Traits Inventory, Antisocial Process Screening Device).
* Arrogant and deceitful interpersonal style: impression management or superficial charm, inflated and grandiose sense of self-worth, pathological lying/deceit, and manipulation for personal gain.
* Deficient affective experience: lack of remorse or guilt, shallow affect (coldness and unemotionality), callousness and lack of empathy, and failure to accept responsibility for own actions.
* Impulsive and irresponsible lifestyle: impulsivity, sensation-seeking and risk-taking, irresponsible and unreliable behavior, financially parasitic lifestyle, and a lack of realistic, long-term goals.
Low anxiety and fearlessness
Cleckley's (1941) original description of psychopathy included the absence of nervousness and neurotic disorders, and later theorists referred to psychopaths as fearless or thick-skinned. While it is often claimed that the PCL-R does not include low anxiety or fearlessness, such features do contribute to the scoring of the Facet 1 (interpersonal) items, mainly through self-assurance, unrealistic optimism, brazenness, and imperturbability. Indeed, while self-report studies using the two-factor model of the PCL-R have been inconsistent, studies which separate Factor 1 into interpersonal and affective facets more regularly show modest associations between Facet 1 and low anxiety, boldness and fearless dominance (especially for items assessing glibness/charm and grandiosity). When both psychopathy and low anxiety/boldness are measured using interviews, the interpersonal and affective facets are both associated with fearlessness and a lack of internalizing disorders.
The importance of low anxiety/fearlessness to psychopathy has historically been underscored by behavioral and physiological studies showing diminished responses to threatening stimuli (with the interpersonal and affective facets both contributing). However, it is not known whether this reflects a reduced experience of state fear or whether it reflects impaired detection of and response to threat-related stimuli. Moreover, such deficits in threat responding are known to be reduced or even abolished when attention is focused on the threatening stimuli.
Additional Information
Psychopathy is a personality disorder characterized by a set of dysfunctional interpersonal, emotional, lifestyle, and antisocial tendencies. Persons suffering from psychopathy—sometimes called psychopaths—commonly exhibit a lack of empathy or remorse and manifest impulsiveness, manipulativeness, and deceitfulness, among other negative traits and behaviours. In addition, psychopathy leads some persons to commit criminal offenses. In psychology and personality theory, psychopathy refers to one element of the so-called “dark triad” of related negative personality traits—the other two being Machiavellianism and narcissism.
Studies of psychopathy
There is a long history of interest in the combination of personality traits and behaviours thought to indicate psychopathy. Psychologists frequently refer to The Mask of Sanity: An Attempt to Reinterpret the So-Called Psychopathic Personality (1941) by the American psychiatrist Hervey M. Cleckley as an early and careful description of the disorder. Early editions of the Diagnostic and Statistical Manual of Mental Disorders (DSM), a publication of the American Psychiatric Association, used the terms “sociopathic personality disturbance,” “antisocial personality,” and “antisocial personality disorder” (ASPD) to classify traits and behaviours associated with psychopathy. The fifth edition of the manual, the DSM-5 (2013), identified psychopathy as a specific form of ASPD but did not list psychopathy itself as a clinical diagnosis. That approach reflects a lack of consensus on the definition of psychopathy, including its boundaries and features, as further manifested by the proliferation of competing methodologies for assessing, diagnosing, and treating the disorder. Indeed, psychologists have developed a variety of checklists that clearly define psychopathy for use in clinical, research, and criminal justice settings.
Experts in many fields are motivated to pursue research on psychopathy because it is a major public health problem. One study published in 2011 estimated the social costs of criminal psychopathic behaviour in the United States (including the costs of property damage and related expenditures for police, courts, prisons, and so on) at $460 billion per year in 2009 dollars. Such costs do not include those associated with health care for victims of psychopathic criminals.
Characteristics
Psychopathy is a disorder consisting of a suite of dysfunctional personality traits and behaviours that may include low levels of empathy, remorse, or inhibition and high levels of manipulativeness, superficial charm, and impulsiveness. Some people with psychopathy have a grandiose sense of self-worth and engage in pathological lying and promiscuity. Psychopathic behaviour can include criminal and violent acts, but this is not always the case. Psychologists note that psychopathic behaviours vary in intensity and frequency from one individual to another and over time. Moreover, there is a subclinical population that exhibits psychopathic traits and behaviours at a much lower level. Although the prevalence of psychopathy varies depending upon where and how it is measured, the disorder is estimated to affect 1 percent of the general population. Psychopathy is more common in men than in women.
Many psychologists support the model of psychopathy used in the Psychopathy Checklist–Revised (PCL-R), which groups psychopathic features into two broad factors—one centring on personality, the other on behaviour. The personality factor has an interpersonal facet (including superficial charm, deceitfulness, and manipulativeness) and an emotional facet (including remorselessness, emotional shallowness, and lack of empathy). The behavioral factor has a lifestyle facet (including thrill seeking, impulsiveness, and irresponsibility) and an antisocial facet (including poor self-control, early behavioral issues, and criminality).
However, a variety of other psychological models have focused on different key features of psychopathy. One important model is a three-tiered organizational framework associated with the Triarchic Psychopathy Measure (TriPM), which has been used to guide an understanding of psychopathy in the DSM-5. In this model, the core characteristics of psychopathy are clustered within categories of meanness, boldness, and disinhibition.
Psychopathy has a dramatic impact on those who suffer from it. Children at risk of developing the disorder may have behavioral problems that affect their family life, friendships, and education. Adults who suffer from psychopathy may have trouble holding jobs and maintaining personal relationships and can experience legal problems stemming from their reckless or criminal conduct. In addition, people with the disorder frequently develop health problems related to substance abuse.
Psychopathy increases the likelihood of criminal behaviours such as homicide and sadistic violence (not all individuals with the disorder are violent, however). Accordingly, the proportion of people with psychopathy in prison populations is far greater than the proportion in the general population. (Estimates of psychopathy in prison populations range from 16 to 25 percent among men and from 7 to 17 percent among women.) However, psychologists warn that the relationship between psychopathy and violence is complicated, which makes violent behaviour by individuals with the disorder difficult to predict.
Origins and development
Both genetic and environmental factors contribute to the development of psychopathy. Twin and adoption studies have shown that psychopathic traits are heritable. Many studies explore how candidate genes related to neurotransmitters such as serotonin or hormones such as oxytocin influence psychopathic traits, but there is no clear discrete set of genes that cause psychopathy.
Several areas of the brain have been found to be involved in the development of psychopathy. For example, abnormalities in the amygdala and hippocampus often occur in psychopathy. But the relation between psychopathy and brain development is complex and remains a subject of investigation.
A child’s home and social environments can have a major influence on the development of psychopathic traits. Growing up in a traumatic or stressful setting and being subject to harsh discipline or abuse have been correlated with increased levels of psychopathic behaviour. In contrast, growing up in a consistently warm environment with positive reinforcement can effectively extinguish some antisocial tendencies predicted by heredity.
Treatment
Psychotherapy, such as cognitive behaviour therapy, is often used to treat adults suffering from psychopathy. However, many psychologists consider the disorder to be extremely resistant to treatment. The current consensus is that the best way to treat it is to identify psychopathic tendencies in childhood, at which point intervention may be able to mitigate those tendencies and thus prevent psychopathic behaviours later in life.
2309) Stent
Gist
A stent is a tiny tube placed into a hollow structure in your body. This structure can be an artery, a vein, or another structure, such as the tube that carries urine (ureter). The stent holds the structure open.
A stent is a small mesh tube typically used to hold open passages in the body, such as weak or narrowed blood vessels. Stents are often used to treat narrowing in the coronary arteries, which provide the heart with oxygen-rich blood.
Summary
A stent is a small mesh tube typically used to hold open passages in the body, such as weak or narrowed blood vessels. Stents are often used to treat narrowing in the coronary arteries, which provide the heart with oxygen-rich blood. Stents can also help to treat an aneurysm, which is a bulge in the wall of an artery, as well as narrowed airways in the lungs.
Stenting is a minimally invasive procedure, meaning it does not require a large, open incision in the body and is not considered major surgery. However, before you get a stent, you may need certain tests or some medicines to prepare for the procedure. Stents can be made of metal mesh, fabric, silicone, or combinations of materials. Stents for coronary arteries are usually made of metal mesh and sometimes covered with another material. Fabric stents, or stent grafts, are used in larger arteries such as the aorta. Stents used in the airways of the lungs are often made of silicone.
After you receive a stent, and depending on its location in the body, you may need to take certain medicines, such as aspirin and other antiplatelet medicines that prevent your blood from forming clots. Your healthcare provider may recommend taking this medicine for a year or longer after receiving an artery stent to prevent complications. The most common problems are a stent becoming blocked, a blood clot forming in an artery stent, or an airway stent moving out of place.
Types of stents
Stents have different purposes. The materials used in stents can vary depending on where they will go in the body.
Airway stents
Some stents are used in the airways in the lungs. Airway stents are usually temporary.
Metal stents are made of bare metal or covered with another material, such as silicone. They can be difficult to remove from the airways, so they are not common.
Silicone stents are made of a material that can be molded to a certain shape. They are more common, because they are easy to insert and remove. Some silicone stents are 3D printed and can be custom fit for each person.
Aortic aneurysm stents
Stent grafts are used to treat aortic aneurysms. The stent graft is typically a tube made of leakproof polyester with metal mesh underneath. Stent grafts are used in larger arteries, such as the aorta, and provide a stable channel for the blood to flow through.
Coronary or carotid artery stents
Some stents are made specifically for the coronary or carotid arteries.
* Bare metal stents are simple metal mesh tubes that can be used in both the coronary and carotid arteries.
* Drug-eluting stents are the most common type of stents used in the coronary arteries. Usually, these stents have a metal mesh structure and another layer that covers the metal. The outside layer is coated with medicine that is released into the artery over time to prevent the artery from narrowing again. Different types of drug-eluting stents are coated with different medicines. Sometimes the outer layer is biodegradable, meaning it breaks down and dissolves over time, leaving only the metal mesh part of the stent in the artery.
Details
In medicine, a stent is a tube usually constructed of a metallic alloy or a polymer. It is inserted into the lumen (hollow space) of an anatomic vessel or duct to keep the passageway open.
Stenting refers to the placement of a stent. The word "stent" is also used as a verb to describe the placement of such a device, particularly when a disease such as atherosclerosis has pathologically narrowed a structure such as an artery.
A stent is different from a shunt. A shunt is a tube that connects two previously unconnected parts of the body to allow fluid to flow between them. Stents and shunts can be made of similar materials, but perform two different tasks.
There are various types of stents used for different medical purposes. Coronary stents are commonly used in coronary angioplasty, with drug-eluting stents being the most common type. Vascular stents are used for peripheral and cerebrovascular disease, while ureteral stents ensure the patency of a ureter.
Prostatic stents can be temporary or permanent and are used to treat conditions like benign prostatic hyperplasia. Colon and esophageal stents are palliative treatments for advanced colon and esophageal cancer. Pancreatic and biliary stents provide drainage from the gallbladder, pancreas, and bile ducts to the duodenum in conditions such as obstructing gallstones. There are also different types of bare-metal, drug-eluting, and bioresorbable stents available based on their properties.
The term "stent" originates from Charles Stent, an English dentist who made advances in denture-making techniques in the 19th century. The use of coronary stents began in 1986 by Jacques Puel and Ulrich Sigwart to prevent vessel closure during coronary surgery.
Stent types:
By destination organ:
Coronary stents are placed during a coronary angioplasty. The most common use for coronary stents is in the coronary arteries, into which a bare-metal stent, a drug-eluting stent, a bioabsorbable stent, a dual-therapy stent (combination of both drug and bioengineered stent), or occasionally a covered stent is inserted.
The majority of coronary stents used today are drug-eluting stents, which release medication to prevent complications such as blood clot formation and restenosis (re-narrowing). Stenting is performed through a procedure called percutaneous coronary intervention (PCI), where the cardiologist uses angiography and intravascular ultrasound to assess the blockage in the artery and determine the appropriate size and type of stent. The procedure is typically done in a catheterization clinic, and patients may need to stay overnight for observation. While stenting has been shown to reduce chest pain (angina) and improve survival rates after a heart attack, its effectiveness in stable angina patients has been debated.
Studies have found that most heart attacks occur due to plaque rupture rather than an obstructed artery that would benefit from a stent. Statins, along with PCI/stenting and anticoagulant therapies, are considered part of a broader treatment strategy. Some cardiologists believe that coronary stents are overused, but there is evidence of under-use in certain patient groups like the elderly. Ongoing research continues to explore new types of stents with biocompatible coatings or absorbable materials.
Vascular stent
Vascular stents are a common treatment for advanced peripheral and cerebrovascular disease. Common sites treated with vascular stents include the carotid, iliac, and femoral arteries. Because of the external compression and mechanical forces subjected to these locations, flexible stent materials such as nitinol are used in many peripheral stents.
Vascular stents made of metals can lead to thrombosis at the site of treatment or to inflammation scarring. Drug-eluting stents with pharmacologic agents or as drug delivery vehicles have been developed as an alternative to decrease the chances of restenosis.
Because vascular stents are designed to expand inside a blocked artery to keep it open, allowing blood to flow freely, the mechanical properties of vascular stents are crucial for their function: they need to be highly elastic to allow for the expansion and contraction of the stent within the blood vessel, they also need to have high strength and fatigue resistance to withstand the constant physiological load of the arteries, they should have good biocompatibility to reduce the risk of thrombosis and vascular restenosis, and to minimize the body's rejection of the implant.
Vascular stents are commonly used in angioplasty, a surgical procedure that opens blocked arteries and places a stent to keep the artery open. This is a common treatment for heart attacks and is also used in the prevention and treatment of strokes. Over 2 million people receive a stent each year for coronary artery disease alone. Vascular stents can also be used to prevent the rupture of aneurysms in the brain, aorta, or other blood vessels.
Ureteric stent
Ureteral stents are used to ensure the patency of a ureter, which may be compromised, for example, by a kidney stone. This method is sometimes used as a temporary measure to prevent damage to a kidney caused by a kidney stone until a procedure to remove the stone can be performed.
A ureteral stent is typically inserted using a cystoscope, and one or both ends of the stent may be coiled to prevent movement. Ureteral stents are used for various purposes, such as temporary measures to prevent damage to a blocked kidney until a stone removal procedure can be performed, providing drainage for compressed ureters caused by tumors, and preventing spasms and collapse of the ureter after trauma during procedures like stone removal. The thread attached to some stents may cause irritation but allows for easy removal by pulling gently.
Stents without threads require cystoscopy for removal. Recent developments have introduced magnetic retrieval systems that eliminate the need for invasive procedures like cystoscopy when removing the stent. The use of magnets enables simple extraction without anesthesia and can be done by primary care physicians or nurses rather than urologists. This method has shown high success rates across different patient groups including adults, children, and kidney transplant patients while reducing costs associated with operating room procedures.
Prostatic stent
Prostatic stents are placed from the bladder through the prostatic and penile urethra to allow drainage of the bladder through the urethral meatus. This is sometimes required in benign prostatic hyperplasia.
A prostatic stent is used to keep the male urethra open and allow for the passage of urine in cases of prostatic obstruction and lower urinary tract symptoms (LUTS). There are two types of prostatic stents: temporary and permanent. Permanent stents, typically made of metal coils, are inserted into the urethra to apply constant gentle pressure and hold open sections that obstruct urine flow. They can be placed under anesthesia as an outpatient procedure but have disadvantages such as increased urination, limited incontinence, potential displacement or infection, and limitations on subsequent endoscopic surgical options. On the other hand, temporary stents can be easily inserted with topical anesthesia similar to a Foley catheter, and allow patients to retain volitional voiding. However, they may cause discomfort or increased urinary frequency.
In the US, there is one temporary prostatic stent that has received FDA approval called The Spanner. It maintains urine flow while allowing natural voluntary urination. Research on permanent stents often focuses on metal coil designs that expand radially to hold open obstructed areas of the urethra.
These permanent stents are used for conditions like benign prostatic hyperplasia (BPH), recurrent bulbar urethral stricture (RBUS), or detrusor external sphincter dyssynergia (DESD). The Urolume is currently the only FDA-approved permanent prostatic stent.
Colon and Esophageal stents
Colon and esophageal stents are a palliative treatment for advanced colon and esophageal cancer.
A colon stent is typically made of flexible metal mesh that can expand and hold open the blocked area, allowing for the passage of stool. Colon stents are used primarily as a palliative treatment for patients with advanced colorectal cancer who are not candidates for surgery. They help relieve symptoms such as abdominal pain, constipation, and bowel obstruction caused by tumors or strictures in the colon.
The placement of a colon stent involves endoscopic techniques similar to esophageal stenting. A thin tube called an endoscope is inserted into the rectum and guided through the colon to locate the blockage. Using fluoroscopy or endoscopic guidance, a guidewire is passed through the narrowed area and then removed after positioning it properly. The stent is then delivered over the guidewire and expanded to keep open the obstructed section of the colon. Complications associated with colon stents include perforation of the intestinal wall, migration or dislodgment of the stent, bleeding, infection at insertion site, or tissue overgrowth around it.
Colon stenting provides several benefits including prompt relief from bowel obstruction symptoms without invasive surgery in many cases. It allows for faster recovery time compared to surgical interventions while providing palliative care for patients with advanced colorectal cancer by improving quality of life and enabling better nutritional intake. However, there are potential risks associated with complications such as migration or obstruction that may require additional procedures or interventions to address these issues effectively.
Pancreatic and biliary stents
Pancreatic and biliary stents provide pancreatic and bile drainage from the gallbladder, pancreas, and bile ducts to the duodenum in conditions such as ascending cholangitis due to obstructing gallstones.
Pancreatic and biliary stents can also be used to treat biliary/pancreatic leaks or to prevent post-ERCP pancreatitis.
In the case of gallstone pancreatitis, a gallstone travels from the gallbladder and blocks the opening to the first part of the small intestine (duodenum). This causes a backup of fluid that can travel up both the bile duct and the pancreatic duct. Gallbladder stones can lead to obstruction of the biliary tree via which gallbladder and pancreas enzymes are secreted into the duodenum, causing emergency events such as acute cholecystitis or acute pancreatitis.
In conditions such as ascending cholangitis due to obstructing gallstones, these stents play a crucial role. They help in maintaining the flow of bile and pancreatic juices from the gallbladder, pancreas, and bile ducts to the duodenum. Biliary stents are often used during endoscopic retrograde cholangiopancreatography (ERCP) to treat blockages that narrow the bile or pancreatic ducts. In cases of malignant biliary obstruction, endoscopic stent placement is one of the treatment options to relieve the obstruction. Biliary drainage is considered effective, particularly in bile duct conditions that are diagnosed and treated early.
Glaucoma drainage stent
Glaucoma drainage stents are recent developments and have been recently approved in some countries. They are used to reduce intraocular pressure by providing a drainage channel.
By properties or function:
Bare-metal stent
A bare-metal stent is a simple, uncoated metal mesh tube with no covering or drug coating.
Covered stent (stent graft)
A stent graft or covered stent is a type of vascular stent with a fabric coating that creates a contained tube but is expandable like a bare-metal stent. Covered stents are used in endovascular surgical procedures such as endovascular aneurysm repair. Stent grafts are also used to treat stenoses in vascular grafts and fistulas used for hemodialysis.
Bioresorbable stent
A bioresorbable stent is a tube-like device made from a material that can release a drug to prevent scar tissue growth. It is used to open and widen clogged heart arteries and then dissolves or is absorbed by the body. Unlike traditional metal stents, bioresorbable stents can restore normal vessel function, avoid long-term complications, and enable natural reconstruction of the arterial wall.
Metal-based bioresorbable scaffolds include iron, magnesium, zinc, and their alloys. Magnesium-based scaffolds have been approved for use in several countries around the world and show promising clinical results in delivering against the drawbacks of permanent metal stents. However, attention has been given to reducing the rate of magnesium corrosion through alloying and coating techniques.
Clinical research shows that resorbable scaffolds offer comparable efficacy and safety profiles to traditional drug-eluting stents (DES). The Magmaris resorbable magnesium scaffold has reported favorable safety outcomes similar to thin-strutted DES in patient populations. The Absorb naturally dissolving stent has also shown low rates of major adverse cardiac events when compared to DES. Imaging studies demonstrate that these naturally dissolving stents begin to dissolve between six months to two years after placement in the artery.
Drug-eluting stent
Drug-eluting stents (DES) are specialized medical devices used to treat coronary artery disease and peripheral artery disease. They release a drug that inhibits cellular growth into the blocked or narrowed arteries, reducing the risk of blockages. DES are commonly placed using percutaneous coronary intervention (PCI), a minimally invasive procedure performed via catheter. These stents have shown clear advantages over older bare-metal stents, improving patient outcomes and quality of life for cardiac patients. With over 90% of stents used in PCI procedures being drug-eluting as of 2023, DES have become the standard choice for interventional cardiologists.
DES gradually release drugs that prevent restenosis and thrombosis within the treated arteries, addressing common complications associated with previous treatments. While risks such as clot formation and bleeding exist, studies have demonstrated superior efficacy compared to bare-metal stents in reducing major adverse cardiac events like heart attacks and repeat revascularization procedures. Long-term outcomes are still being studied due to their relatively recent introduction; however, DES have revolutionized the treatment of coronary artery disease by significantly improving patient outcomes and enhancing their quality of life.
Additional Information
A stent is a metal or plastic tube inserted into a blocked passageway to keep it open. Since their introduction in the late 1980s, stents have revolutionized the treatment of coronary artery disease and other diseases in which vital vessels or passageways are obstructed.
The practice of stenting has become fairly common and has allowed for the minimally invasive treatment of conditions that once required surgery. Even so, stenting carries a risk of complications and may not be the best option for everyone.
This article looks at the different types of stents used in medicine today. It also describes the general procedure and the possible risks and side effects of stenting.
Stents should not be confused with shunts. Shunts are similar in design but are used to connect two previously unconnected passageways.
Types
The first stent was implanted into a patient's heart in Toulouse, France, in 1986. Since then, stents have been used in other organs, including the kidneys, colon, and esophagus. Recent innovations have even allowed for the use of stents in treating certain types of glaucoma.
There are different types of stents used to treat different medical conditions. These include:
* Coronary stents: Used for treating coronary artery disease, these stents are used as part of a procedure known as angioplasty. Today, the vast majority of angioplasties involve a coronary stent.
* Endovascular stents: These stents are commonly used to treat advanced peripheral artery disease (involving arteries other than the heart), cerebrovascular disease (involving the brain), and renal artery stenosis (involving the kidneys).
* Ureteral stents: Used to treat or prevent the obstruction of urine flow from the kidneys, these stents are placed inside a ureter (the vessel that connects a kidney to the bladder) and can be up to 11 inches long.
* Prostatic stents: Used to enable urination in males with an enlarged prostate, these stents overcome obstructions caused when the prostate gland compresses the urethra (the passage through which urine exits the body).
* Colonic stents: Used to treat bowel obstructions, these stents are often used in people with advanced colon cancer or other causes of bowel blockage.
* Esophageal stents: Often used in people with advanced esophageal cancer, these stents keep the esophagus (feeding tube) open so that the person can swallow soft foods and liquids.
* Pancreatic and biliary stents: Used to drain bile from the gallbladder and pancreas to the small intestine, these stents are often used when a gallstone blocks a bile duct and triggers a potentially life-threatening condition known as cholangitis.
* Micro-bypass stents: A recent innovation used in people with mild to moderate open-angle glaucoma, these stents are implanted by a microsurgeon to reduce intraocular pressure (pressure within the eye) and slow disease progression.
There are different stents designed for different parts of the body. The goal of all stents is to keep a passageway open in order to restore normal flow and function.
Procedures
The procedures used to implant a stent differ depending on the type of stent. Whether made with coated metals or next-generation polymers, the stents are meant to expand once inserted and provide a stable opening that prevents future collapse.
There are several techniques commonly used for the placement of a stent:
* Coronary or endovascular stents: Performed under regional anesthesia or mild sedation, the procedure involves the insertion of a thin tube called a balloon catheter into a blood vessel in the groin, arm, or neck. The catheter is tipped with the stent and fed to the site of the obstruction. After the balloon is inflated to widen the vessel and expand the stent, it is deflated and withdrawn, leaving the stent behind.
* Ureteral or prostatic stents: The placement of these stents involves a cystoscope (a thin tube equipped with a camera) that is fed through the urethra to the site of the obstruction. A tiny wire connected to the scope's tip helps guide the stent into the correct position. Local, regional, or general anesthesia may be used.
* Colonic or esophageal stents: The placement of these stents is similar to that of a ureteral or prostatic stent but involves either a colonoscope (inserted through the rectum to visualize the colon) or an endoscope (inserted into the mouth to visualize the esophagus). A balloon catheter is commonly used to widen narrowed passages.
* Pancreatic or biliary stents: The placement of these stents is performed either with an endoscope or a percutaneous transhepatic cholangiography (PTC), in which a needle is inserted into the liver through the abdomen to place the stent. Monitored sedation or general anesthesia may be used.
* Micro-bypass stents: The placement of these stents involves a tiny incision in the eye's cornea by an ophthalmologic microsurgeon. The tiny stent (roughly one millimeter in length and 0.3 millimeters in height) is positioned in a structure known as Schlemm's canal, which helps regulate the fluid balance of the eye.
2310) Balance Sheet
Gist
What is a balance sheet? The term balance sheet refers to a financial statement that reports a company's assets, liabilities, and shareholder equity at a specific point in time. Balance sheets provide the basis for computing rates of return for investors and evaluating a company's capital structure.
Summary:
The term balance sheet refers to a financial statement that reports a company's assets, liabilities, and shareholder equity at a specific point in time. Balance sheets provide the basis for computing rates of return for investors and evaluating a company's capital structure.
In short, the balance sheet is a financial statement that provides a snapshot of what a company owns and owes, as well as the amount invested by shareholders. Balance sheets can be used with other important financial statements to conduct fundamental analysis or calculate financial ratios.
Key Takeaways
* A balance sheet is a financial statement that reports a company's assets, liabilities, and shareholder equity.
* The balance sheet is one of the three core financial statements that are used to evaluate a business.
* It provides a snapshot of a company's finances (what it owns and owes) as of the date of publication.
* The balance sheet adheres to an equation that equates assets with the sum of liabilities and shareholder equity.
* Fundamental analysts use balance sheets to calculate financial ratios.
How Balance Sheets Work
The balance sheet provides an overview of the state of a company's finances at a moment in time. On its own, it cannot give a sense of the trends playing out over a longer period. For this reason, the balance sheet should be compared with those of previous periods.
Investors can get a sense of a company's financial well-being by using a number of ratios that can be derived from a balance sheet, including the debt-to-equity ratio and the acid-test ratio, along with many others. The income statement and statement of cash flows also provide valuable context for assessing a company's finances, as do any notes or addenda in an earnings report that might refer back to the balance sheet.
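To make the ratio idea concrete, here is a minimal Python sketch using made-up balance sheet figures; the formulas assumed here are debt-to-equity = total liabilities / shareholder equity and acid-test (quick) ratio = (current assets - inventory) / current liabilities:
# Hypothetical balance sheet figures (illustration only)
balance = {
    "current_assets": 50_000,
    "inventory": 12_000,
    "current_liabilities": 30_000,
    "total_liabilities": 80_000,
    "shareholder_equity": 100_000,
}
# Assumed formulas for two ratios commonly derived from the balance sheet
debt_to_equity = balance["total_liabilities"] / balance["shareholder_equity"]
acid_test = (balance["current_assets"] - balance["inventory"]) / balance["current_liabilities"]
print(f"Debt-to-equity: {debt_to_equity:.2f}")  # 0.80
print(f"Acid-test ratio: {acid_test:.2f}")      # 1.27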
The balance sheet adheres to the following accounting equation, with assets on one side and liabilities plus shareholder equity on the other, balancing out:
Assets = Liabilities + Shareholders’ Equity
This formula is intuitive. That's because a company has to pay for all the things it owns (assets) by either borrowing money (taking on liabilities) or taking it from investors (issuing shareholder equity).
If a company takes out a five-year, $4,000 loan from a bank, its assets (specifically, the cash account) will increase by $4,000. Its liabilities (specifically, the long-term debt account) will also increase by $4,000, balancing the two sides of the equation. If the company takes $8,000 from investors, its assets will increase by that amount, as will its shareholder equity. All revenues the company generates in excess of its expenses will go into the shareholder equity account. These revenues will be balanced on the assets side, appearing as cash, investments, inventory, or other assets.
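As a rough Python sketch of the bookkeeping described above (using the illustrative figures from the example, not real data), both transactions increase assets and the other side of the equation by the same amount, so the equation stays balanced:
# Accounting equation: Assets = Liabilities + Shareholders' Equity
assets = liabilities = equity = 0
# Five-year $4,000 bank loan: cash (an asset) and long-term debt (a liability) both rise
assets += 4_000
liabilities += 4_000
# $8,000 taken from investors: cash and shareholder equity both rise
assets += 8_000
equity += 8_000
assert assets == liabilities + equity  # 12,000 == 4,000 + 8,000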
Important
Balance sheets should also be compared with those of other businesses in the same industry since different industries have unique approaches to financing.
Special Considerations
As noted above, you can find information about assets, liabilities, and shareholder equity on a company's balance sheet. The assets should always equal the liabilities and shareholder equity. This means that the balance sheet should always balance, hence the name. If they don't balance, there may be some problems, including incorrect or misplaced data, inventory or exchange rate errors, or miscalculations.
Each category consists of several smaller accounts that break down the specifics of a company's finances. These accounts vary widely by industry, and the same terms can have different implications depending on the nature of the business. Companies might choose to use a form of balance sheet known as the common size, which shows percentages along with the numerical values. This type of report allows for a quick comparison of items.
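As a small illustration of the common-size idea (assuming, as is conventional, that each item is expressed as a percentage of total assets; the figures below are invented):
items = {
    "Cash": 20_000,
    "Accounts receivable": 30_000,
    "Inventory": 50_000,
    "Fixed assets": 100_000,
}
total_assets = sum(items.values())  # 200,000
# Express each line item as a percentage of total assets
for name, value in items.items():
    print(f"{name}: {value} ({value / total_assets:.0%} of total assets)")
# Cash: 10%, Accounts receivable: 15%, Inventory: 25%, Fixed assets: 50%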
Components of a Balance Sheet:
Assets
Accounts within this segment are listed from top to bottom in order of their liquidity. This is the ease with which they can be converted into cash. They are divided into current assets, which can be converted to cash in one year or less; and non-current or long-term assets, which cannot.
Here is the general order of accounts within current assets:
* Cash and cash equivalents are the most liquid assets and can include Treasury bills and short-term certificates of deposit, as well as hard currency.
* Marketable securities are equity and debt securities for which there is a liquid market.
* Accounts receivable (AR) refer to money that customers owe the company. This may include an allowance for doubtful accounts as some customers may not pay what they owe.
* Inventory refers to any goods available for sale, valued at the lower of the cost or market price.
* Prepaid expenses represent the value that has already been paid for, such as insurance, advertising contracts, or rent.
Long-term assets include the following:
* Long-term investments are securities that will not or cannot be liquidated in the next year.
* Fixed assets include land, machinery, equipment, buildings, and other durable, generally capital-intensive assets.
* Intangible assets include non-physical (but still valuable) assets such as intellectual property and goodwill. These assets are generally only listed on the balance sheet if they are acquired, rather than developed in-house. Their value may thus be wildly understated (by not including a globally recognized logo, for example) or just as wildly overstated.
Liabilities
A liability is any money that a company owes to outside parties, from bills it has to pay to suppliers, to interest on bonds issued to creditors, to rent, utilities, and salaries. Current liabilities are due within one year and are listed in order of their due date. Long-term liabilities, on the other hand, are due at any point after one year.
Current liabilities accounts might include:
* Current portion of long-term debt is the portion of a long-term debt due within the next 12 months. For example, if a company has 10 years left on a loan to pay for its warehouse, the payments due in the next year are a current liability and the remaining 9 years' worth are a long-term liability.
* Interest payable is accumulated interest owed, often due as part of a past-due obligation such as late remittance on property taxes.
* Wages payable is salaries, wages, and benefits to employees, often for the most recent pay period.
* Customer prepayments is money received from a customer before the service has been provided or product delivered. The company has an obligation to (a) provide that good or service or (b) return the customer's money.
* Dividends payable is dividends that have been authorized for payment but have not yet been issued.
* Earned and unearned premiums is similar to prepayments in that a company has received money upfront, has not yet executed its portion of an agreement, and must return unearned cash if it fails to execute.
* Accounts payable is often the most common current liability. Accounts payable is debt obligations on invoices processed as part of the operation of a business that are often due within 30 days of receipt.
Long-term liabilities can include:
* Long-term debt includes any interest and principal on bonds issued
* Pension fund liability refers to the money a company is required to pay into its employees' retirement accounts
* Deferred tax liability is the amount of taxes that accrued but will not be paid for another year. Besides timing, this figure reconciles differences between requirements for financial reporting and the way tax is assessed, such as depreciation calculations.
Some liabilities are considered off the balance sheet, meaning they do not appear on the balance sheet.
Shareholder Equity
Shareholder equity is the money attributable to the owners of a business or its shareholders. It is also known as net assets since it is equivalent to the total assets of a company minus its liabilities or the debt it owes to non-shareholders.
Retained earnings are the net earnings a company either reinvests in the business or uses to pay off debt. The remaining amount is distributed to shareholders in the form of dividends.
Treasury stock is the stock a company has repurchased. It can be sold at a later date to raise cash or reserved to repel a hostile takeover.
Some companies issue preferred stock, which will be listed separately from common stock under this section. Preferred stock is assigned an arbitrary par value (as is common stock, in some cases) that has no bearing on the market value of the shares. The common stock and preferred stock accounts are calculated by multiplying the par value by the number of shares issued.
Additional paid-in capital or capital surplus represents the amount shareholders have invested in excess of the common or preferred stock accounts, which are based on par value rather than market price. Shareholder equity is not directly related to a company's market capitalization. The latter is based on the current price of a stock, while paid-in capital is the sum of the equity that has been purchased at any price.
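A short sketch, with hypothetical issuance figures, of how the common stock account (par value times shares issued) and additional paid-in capital fit together; amounts are kept in whole cents to avoid rounding issues:
shares_issued = 1_000_000
par_value_cents = 1       # $0.01 arbitrary par value per share
issue_price_cents = 500   # $5.00 actually paid per share by investors
common_stock = shares_issued * par_value_cents                                       # $10,000
additional_paid_in_capital = shares_issued * (issue_price_cents - par_value_cents)   # $4,990,000
total_raised = shares_issued * issue_price_cents                                     # $5,000,000
assert common_stock + additional_paid_in_capital == total_raised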
Details
In financial accounting, a balance sheet (also known as statement of financial position or statement of financial condition) is a summary of the financial balances of an individual or organization, whether it be a sole proprietorship, a business partnership, a corporation, private limited company or other organization such as government or not-for-profit entity. Assets, liabilities and ownership equity are listed as of a specific date, such as the end of its financial year. A balance sheet is often described as a "snapshot of a company's financial condition". Along with the income statement and the statement of cash flows, it is one of the fundamental financial statements of an organization.
Of the four basic financial statements, the balance sheet is the only statement which applies to a single point in time of a business's calendar year.
A standard company balance sheet has two sides: assets on the left, and financing on the right, which itself has two parts, liabilities and ownership equity. The main categories of assets are usually listed first, and typically in order of liquidity. Assets are followed by the liabilities. The difference between the assets and the liabilities is known as equity or the net assets or the net worth or capital of the company, and according to the accounting equation, net worth must equal assets minus liabilities.
Another way to look at the balance sheet equation is that total assets equals liabilities plus owner's equity. Looking at the equation in this way shows how assets were financed: either by borrowing money (liability) or by using the owner's money (owner's or shareholders' equity). Balance sheets are usually presented with assets in one section and liabilities and net worth in the other section with the two sections "balancing".
A business operating entirely in cash can measure its profits by withdrawing the entire bank balance at the end of the period, plus any cash in hand. However, many businesses are not paid immediately; they build up inventories of goods and acquire buildings and equipment. In other words: businesses have assets and so they cannot, even if they want to, immediately turn these into cash at the end of each period. Often, these businesses owe money to suppliers and to tax authorities, and the proprietors do not withdraw all their original capital and profits at the end of each period. In other words, businesses also have liabilities.
Types
A balance sheet summarizes an organization's or individual's assets, equity and liabilities at a specific point in time. Two forms of balance sheet exist. They are the report form and account form. Individuals and small businesses tend to have simple balance sheets. Larger businesses tend to have more complex balance sheets, and these are presented in the organization's annual report. Large businesses also may prepare balance sheets for segments of their businesses. A balance sheet is often presented alongside one for a different point in time (typically the previous year) for comparison.
Personal
A personal balance sheet lists current assets, such as cash in checking accounts and savings accounts; long-term assets, such as common stock and real estate; current liabilities, such as loan debt and mortgage payments due or overdue; and long-term liabilities, such as mortgage and other loan debt. Securities and real estate values are listed at market value rather than at historical cost or cost basis. Personal net worth is the difference between an individual's total assets and total liabilities.
US small business
A small business balance sheet lists current assets such as cash, accounts receivable, and inventory, fixed assets such as land, buildings, and equipment, intangible assets such as patents, and liabilities such as accounts payable, accrued expenses, and long-term debt. Contingent liabilities such as warranties are noted in the footnotes to the balance sheet. The small business's equity is the difference between total assets and total liabilities.
Charities
In England and Wales, smaller charities which are not also companies are permitted to file a statement of assets and liabilities instead of a balance sheet. This statement lists the charity's main assets and liabilities as at the end of its financial year.
Public business entities structure
Guidelines for balance sheets of public business entities are given by the International Accounting Standards Board and numerous country-specific organizations/companies. The standard used by companies in the US adheres to U.S. Generally Accepted Accounting Principles (GAAP). The Federal Accounting Standards Advisory Board (FASAB) is a United States federal advisory committee whose mission is to develop generally accepted accounting principles (GAAP) for federal financial reporting entities.
Balance sheet account names and usage depend on the organization's country and the type of organization. Government organizations do not generally follow standards established for individuals or businesses.
If applicable to the business, summary values for the following items should be included in the balance sheet. Assets are all the things the business owns; these include property, tools, vehicles, furniture, machinery, and so on.
Assets:
Current assets
* Accounts receivable
* Cash and cash equivalents
* Inventories
* Cash at bank, petty cash, cash on hand
* Prepaid expenses for future services that will be used within a year
* Accrued revenue (revenue earned in arrears) for services performed but payment not yet received within the year
* Loans made to others, due within one financial period
Non-current assets (Fixed assets)
* Property, plant and equipment
* Investment property, such as real estate held for investment purposes
* Intangible assets, such as patents, copyrights and goodwill
* Financial assets (excluding investments accounted for using the equity method, accounts receivables, and cash and cash equivalents), such as notes receivables
* Investments accounted for using the equity method
* Biological assets, which are living plants or animals. Bearer biological assets are plants or animals which bear agricultural produce for harvest, such as apple trees grown to produce apples and sheep raised to produce wool.
* Loans made to others, due after more than one financial period
Liabilities
* Accounts payable
* Provisions for warranties or court decisions (contingent liabilities that are both probable and measurable)
* Financial liabilities (excluding provisions and accounts payables), such as promissory notes and corporate bonds
* Liabilities and assets for current tax
* Deferred tax liabilities and deferred tax assets
* Unearned revenue for services paid for by customers but not yet provided
* Interest on loan stock
* Creditors' equity
Net current assets
Net current assets means current assets minus current liabilities.
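For example (illustrative figures only, a one-line calculation in Python):
current_assets = 50_000
current_liabilities = 30_000
net_current_assets = current_assets - current_liabilities  # 20,000, also referred to as working capital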
Equity / capital
The net assets shown by the balance sheet equal the third part of the balance sheet, which is known as the shareholders' equity. It comprises:
* Issued capital and reserves attributable to equity holders of the parent company (controlling interest)
* Non-controlling interest in equity
Formally, shareholders' equity is part of the company's liabilities: they are funds "owing" to shareholders (after payment of all other liabilities); usually, however, "liabilities" are used in the more restrictive sense of liabilities excluding shareholders' equity. The balance of assets and liabilities (including shareholders' equity) is not a coincidence. Records of the values of each account in the balance sheet are maintained using a system of accounting known as double-entry bookkeeping. In this sense, shareholders' equity by construction must equal assets minus liabilities, and thus the shareholders' equity is considered to be a residual.
Regarding the items in the equity section, the following disclosures are required:
* Numbers of shares authorized, issued and fully-paid, and issued but not fully paid
* Par value of shares
* Reconciliation of shares outstanding at the beginning and the end of the period
* Description of rights, preferences, and restrictions of shares
* Treasury shares, including shares held by subsidiaries and associates
* Shares reserved for issuance under options and contracts
* A description of the nature and purpose of each reserve within owners' equity.
Substantiation
Balance sheet substantiation is the accounting process conducted by businesses on a regular basis to confirm that the balances held in the primary accounting system of record (e.g. SAP, Oracle, or another ERP system's general ledger) are reconciled (in balance) with the balance and transaction records held in the same or supporting sub-systems.
Balance sheet substantiation includes multiple processes including reconciliation (at a transactional or at a balance level) of the account, a process of review of the reconciliation and any pertinent supporting documentation and a formal certification (sign-off) of the account in a predetermined form driven by corporate policy.
Balance sheet substantiation is an important process that is typically carried out on a monthly, quarterly and year-end basis. The results help to drive the regulatory balance sheet reporting obligations of the organization.
Historically, balance sheet substantiation has been a wholly manual process, driven by spreadsheets, email and manual monitoring and reporting. In recent years software solutions have been developed to bring a level of process automation, standardization and enhanced control to the balance sheet substantiation or account certification process. These solutions are suitable for organizations with a high volume of accounts and/or personnel involved in the Balance Sheet Substantiation process and can be used to drive efficiencies, improve transparency and help to reduce risk.
Balance sheet substantiation is a key control process in the SOX 404 top-down risk assessment.
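The core of this process can be pictured as a simple check that each general-ledger balance matches its supporting sub-ledger balance. The following Python sketch uses hypothetical account data and only illustrates the reconciliation idea, not any particular substantiation product:
# Compare each general-ledger balance with its supporting sub-ledger balance
general_ledger = {"1000 Cash": 152_300, "1200 Accounts receivable": 87_450, "2000 Accounts payable": -43_100}
sub_ledger = {"1000 Cash": 152_300, "1200 Accounts receivable": 86_950, "2000 Accounts payable": -43_100}
TOLERANCE = 0  # require an exact match before an account can be certified
for account, gl_balance in general_ledger.items():
    difference = gl_balance - sub_ledger.get(account, 0)
    if abs(difference) <= TOLERANCE:
        print(f"{account}: reconciled")
    else:
        print(f"{account}: UNRECONCILED (difference {difference})")
# Cash and Accounts payable reconcile; Accounts receivable shows a 500 difference to investigate
Real substantiation tools layer review workflow, supporting documentation, and formal sign-off on top of a check like this.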
2311) Journalist
Gist
A journalist is a person who gathers information in the form of text, audio or pictures, processes it into a newsworthy form and disseminates it to the public. This is called journalism.
A journalist is a person whose job is to collect news and write about it for newspapers, magazines, television, or radio. Synonyms: reporter, writer, correspondent, newsman or newswoman.
Journalism or news writing is a prose style used for reporting in newspapers, radio, and television. When writing journalistically, one has to take into account not only one's audience, but also the tone in which the piece is delivered, as well as the ABCs of news writing: Accuracy, Brevity, and Clarity.
Summary
A journalist investigates, gathers, and reports news and information to the public through various media outlets, including newspapers, magazines, television, radio, and online platforms. Journalists inform the public about current events, issues, and developments, serving as watchdogs of government and institutions and facilitating democratic discourse.
Journalism encompasses a wide range of roles and specialties, including reporters, correspondents, editors, photojournalists, and investigative journalists, each with their own responsibilities and areas of expertise. Journalists may cover local, national, or international news, politics, business, sports, entertainment, culture, science, health, or other topics, depending on their beat or assignment. With the rise of digital media and social networking, journalists also engage in multimedia storytelling, data journalism, and audience engagement strategies to reach and connect with diverse audiences in an ever-evolving media landscape.
Duties and Responsibilities
The duties and responsibilities of a journalist encompass a wide range of tasks aimed at gathering, analyzing, and disseminating news and information to the public. Here are some key responsibilities:
* Research and Investigation: Journalists are responsible for researching and investigating news stories, events, and issues to uncover facts, gather evidence, and verify information. This may involve conducting interviews with sources, reviewing documents, observing events firsthand, and consulting experts or witnesses.
* Reporting and Writing: Journalists write news articles, reports, features, or opinion pieces based on their research and investigation. They use clear, concise, and compelling language to communicate information accurately and effectively to their audience. Journalists must adhere to ethical standards and journalistic principles such as accuracy, fairness, objectivity, and balance in their reporting.
* Interviewing: Journalists conduct interviews with sources, including officials, experts, witnesses, and individuals involved in news stories, to gather quotes, insights, and perspectives. They must ask probing questions, listen attentively, and record or transcribe interviews accurately to ensure the integrity and credibility of their reporting.
* Fact-Checking and Verification: Journalists are responsible for fact-checking and verifying the accuracy of information before publishing or broadcasting news stories. This involves corroborating information from multiple sources, cross-referencing data and statistics, and confirming details with reliable sources to avoid spreading misinformation or falsehoods.
* Ethical Considerations: Journalists must adhere to ethical guidelines and professional standards in their reporting, including principles of fairness, impartiality, transparency, and accountability. They must avoid conflicts of interest, disclose sources, and respect the privacy and dignity of individuals involved in news stories.
* Adaptation to New Technologies: With the evolving landscape of digital media and technology, journalists must adapt to new tools, platforms, and storytelling formats to engage and inform their audience effectively. This may involve multimedia storytelling, data journalism, social media reporting, or audience interaction strategies.
Details
Journalism is the production and distribution of reports on the interaction of events, facts, ideas, and people that are the "news of the day" and that inform society with at least some degree of accuracy. The word, a noun, applies to the occupation (professional or not), the methods of gathering information, and the organizing literary styles.
The appropriate role for journalism varies from country to country, as do perceptions of the profession, and the resulting status. In some nations, the news media are controlled by government and are not independent. In others, news media are independent of the government and operate as private industry. In addition, countries may have differing implementations of laws handling the freedom of speech, freedom of the press as well as slander and libel cases.
The proliferation of the Internet and smartphones has brought significant changes to the media landscape since the turn of the 21st century. This has created a shift in the consumption of print media channels, as people increasingly consume news through e-readers, smartphones, and other personal electronic devices, as opposed to the more traditional formats of newspapers, magazines, or television news channels. News organizations are challenged to fully monetize their digital wing, as well as improvise on the context in which they publish in print. Newspapers have seen print revenues sink at a faster pace than the rate of growth for digital revenues.
Production
Journalistic conventions vary by country. In the United States, journalism is produced by media organizations or by individuals. Bloggers are often regarded as journalists. The Federal Trade Commission requires that bloggers who write about products received as promotional gifts, disclose that they received the products for free. This is intended to eliminate conflicts of interest and protect consumers.
In the US, many credible news organizations are incorporated entities, have an editorial board, and exhibit separate editorial and advertising departments. Many credible news organizations, or their employees, often belong to and abide by the ethics of professional organizations such as the American Society of News Editors, the Society of Professional Journalists, Investigative Reporters & Editors, Inc., or the Online News Association. Many news organizations also have their own codes of ethics that guide journalists' professional publications. For instance, The New York Times code of standards and ethics is considered particularly rigorous.
When crafting news stories, regardless of the medium, fairness and bias are issues of concern to journalists. Some stories are intended to represent the author's own opinion; others are more neutral or feature balanced points of view. In a traditional print newspaper and its online version, information is organized into sections. This makes clear the distinction between content based on fact and on opinion. In other media, many of these distinctions break down. Readers should pay careful attention to headings and other design elements to ensure that they understand the journalist's intent. Opinion pieces are generally written by regular columnists or appear in a section titled "Op-ed"; these reflect a journalist's own opinions and ideology, while feature stories, breaking news, and hard news stories typically make efforts to remove opinion from the copy.
According to Robert McChesney, healthy journalism in a democratic country must provide an opinion of people in power and of those who wish to be in power, must include a range of opinions, and must regard the informational needs of all people.
Many debates center on whether journalism ethics require them to be objective and neutral. Arguments include the fact that journalists produce news out of and as part of a particular social context, and that they are guided by professional codes of ethics and do their best to represent all legitimate points of view. Additionally, the ability to render a subject's complex and fluid narrative with sufficient accuracy is sometimes challenged by the time available to spend with subjects, the affordances or constraints of the medium used to tell the story, and the evolving nature of people's identities.
Forms
There are several forms of journalism with diverse audiences. Journalism is said to serve the role of a "fourth estate", acting as a watchdog on the workings of the government. A single publication (such as a newspaper) contains many forms of journalism, each of which may be presented in different formats. Each section of a newspaper, magazine, or website may cater to a different audience.
Some forms include:
Access journalism – journalists who self-censor and voluntarily cease speaking about issues that might embarrass their hosts, guests, or powerful politicians or businesspersons.
Advocacy journalism – writing to advocate particular viewpoints or influence the opinions of the audience.
Broadcast journalism – written or spoken journalism for radio or television
Business journalism – tracks, records, analyzes and interprets the business, economic and financial activities and changes that take place in societies.
Citizen journalism – participatory journalism.
Data journalism – the practice of finding stories in numbers, and using numbers to tell stories. Data journalists may use data to support their reporting. They may also report about uses and misuses of data. The US news organization ProPublica is known as a pioneer of data journalism.
Drone journalism – use of drones to capture journalistic footage.
Gonzo journalism – first championed by Hunter S. Thompson, gonzo journalism is a "highly personal style of reporting".
Interactive journalism – a type of online journalism that is presented on the web
Investigative journalism – in-depth reporting that uncovers social problems.
Photojournalism – the practice of telling true stories through images.
Political journalism – coverage of all aspects of politics and political science.
Science journalism - conveys reporting about science to the public.
Sensor journalism – the use of sensors to support journalistic inquiry.
Sports journalism – writing that reports on matters pertaining to sporting topics and competitions.
Student journalism – the practice of journalism by students at an educational institution, often covering topics particularly relevant to the student body.
Tabloid journalism – writing that is light-hearted and entertaining. Considered less legitimate than mainstream journalism.
Yellow journalism (or sensationalism) – writing which emphasizes exaggerated claims or rumors.
Global journalism – journalism that encompasses a global outlook focusing on intercontinental issues.
War journalism – the covering of wars and armed conflicts
Social media
The rise of social media has drastically changed the nature of journalistic reporting, giving rise to so-called citizen journalists. In a 2014 study of journalists in the United States, 40% of participants claimed they rely on social media as a source, with over 20% depending on microblogs to collect facts. From this, the conclusion can be drawn that breaking news nowadays often stems from user-generated content, including videos and pictures posted online on social media. However, though 69.2% of the surveyed journalists agreed that social media allowed them to connect to their audience, only 30% thought it had a positive influence on news credibility. In addition to this, a 2021 study done by Pew Research Center shows that 86% of Americans are getting their news from digital devices.
Consequently, this has resulted in arguments to reconsider journalism as a process distributed among many authors, including the socially mediating public, rather than as individual products and articles written by dedicated journalists.
Because of these changes, the credibility ratings of news outlets have reached an all-time low. A 2014 study revealed that only 22% of Americans reported a "great deal" or "quite a lot of confidence" in either television news or newspapers.
Additional Information
Journalists inform the public about what is going on in the world. They cover a wide range of events, from local celebrations to international tragedies. Journalists help their audience understand the latest news stories by interviewing people and researching the topics they cover. Photojournalists are journalists who communicate the news with photographs.
Journalists work for newspapers, magazines, Web sites, radio stations, or television stations. Some journalists focus their reporting on specific subjects, such as politics, business, technology, or art. However, all journalists are expected to have a wide range of knowledge. Good communication is important in journalism. Journalists must be able to research and write an informed, easy-to-read article or to announce, in a strong, clear voice, a story on the radio, television, or Internet that is simple to understand.
Experience is the most important thing for a journalist to have. The most effective way for a journalist to learn the craft and to gain experience is to be in the field. This is how journalists used to learn their craft. Even today, many news organizations value experience and knowledge over a degree. However, there are degrees centered on the field of journalism, and many journalists working today attended college. While earning a degree, journalism students gain necessary experience by working at the school newspaper, radio or television stations, or by interning at large news organizations.
2312) Neurology
Gist:
Neurology is the branch of medicine dealing with the diagnosis and treatment of all categories of conditions and disease involving the nervous system, which comprises the brain, the spinal cord and the peripheral nerves.
Summary
Neurology is a medical specialty concerned with the nervous system and its functional or organic disorders. Neurologists diagnose and treat diseases and disorders of the brain, spinal cord, and nerves.
The first scientific studies of nerve function in animals were performed in the early 18th century by English physiologist Stephen Hales and Scottish physiologist Robert Whytt. Knowledge was gained in the late 19th century about the causes of aphasia, epilepsy, and motor problems arising from brain damage. French neurologist Jean-Martin Charcot and English neurologist William Gowers described and classified many diseases of the nervous system. The mapping of the functional areas of the brain through selective electrical stimulation also began in the 19th century. Despite these contributions, however, most knowledge of the brain and nervous functions came from studies in animals and from the microscopic analysis of nerve cells.
The electroencephalograph (EEG), which records electrical brain activity, was invented in the 1920s by Hans Berger. Development of the EEG, analysis of cerebrospinal fluid obtained by lumbar puncture (spinal tap), and development of cerebral angiography allowed neurologists to increase the precision of their diagnoses and develop specific therapies and rehabilitative measures. Further aiding the diagnosis and treatment of brain disorders were the development of computerized axial tomography (CT) scanning in the early 1970s and magnetic resonance imaging (MRI) in the 1980s, both of which yielded detailed, noninvasive views of the inside of the brain. The identification of chemical agents in the central nervous system and the elucidation of their roles in transmitting and blocking nerve impulses have led to the introduction of a wide array of medications that can correct or alleviate various neurological disorders including Parkinson disease, multiple sclerosis, and epilepsy. Neurosurgery, a medical specialty related to neurology, has also benefited from CT scanning and other increasingly precise methods of locating lesions and other abnormalities in nervous tissues.
Details
Neurology is the branch of medicine dealing with the diagnosis and treatment of all categories of conditions and disease involving the nervous system, which comprises the brain, the spinal cord and the peripheral nerves. Neurological practice relies heavily on the field of neuroscience, the scientific study of the nervous system.
A neurologist is a physician specializing in neurology and trained to investigate, diagnose and treat neurological disorders. Neurologists diagnose and treat myriad neurologic conditions, including stroke, epilepsy, movement disorders such as Parkinson's disease, brain infections, autoimmune neurologic disorders such as multiple sclerosis, sleep disorders, brain injury, headache disorders like migraine, tumors of the brain and dementias such as Alzheimer's disease. Neurologists may also have roles in clinical research, clinical trials, and basic or translational research. Neurology is a nonsurgical specialty, its corresponding surgical specialty is neurosurgery.
History
The academic discipline began between the 15th and 16th centuries with the work and research of many neurologists such as Thomas Willis, Robert Whytt, Matthew Baillie, Charles Bell, Moritz Heinrich Romberg, Duchenne de Boulogne, William A. Hammond, Jean-Martin Charcot, C. Miller Fisher and John Hughlings Jackson. Neo-Latin neurologia appeared in various texts from 1610 denoting an anatomical focus on the nerves (variably understood as vessels), and was most notably used by Willis, who preferred Greek νευρολογία.
Polish neurologist Edward Flatau greatly influenced the developing field of neurology. He published a human brain atlas in 1894 and wrote a fundamental book on migraines in 1912.
Jean-Martin Charcot is considered one of the fathers of neurology.
In the United States and Canada, neurologists are physicians who have completed a postgraduate training period known as residency specializing in neurology after graduation from medical school. This additional training period typically lasts four years, with the first year devoted to training in internal medicine. On average, neurologists complete a total of eight to ten years of training. This includes four years of medical school, four years of residency and an optional one to two years of fellowship.
While neurologists may treat general neurologic conditions, some neurologists go on to receive additional training focusing on a particular subspecialty in the field of neurology. These training programs are called fellowships, and are one to three years in duration. Subspecialties in the United States include brain injury medicine, clinical neurophysiology, epilepsy, neurodevelopmental disabilities, neuromuscular medicine, pain medicine, sleep medicine, neurocritical care, vascular neurology (stroke), behavioral neurology, headache, neuroimmunology and infectious disease, movement disorders, neuroimaging, neurooncology, and neurorehabilitation.
In Germany, a compulsory year of psychiatry must be completed as part of a neurology residency.
In the United Kingdom and Ireland, neurology is a subspecialty of general (internal) medicine. After five years of medical school and two years as a Foundation Trainee, an aspiring neurologist must pass the examination for Membership of the Royal College of Physicians (or the Irish equivalent) and complete two years of core medical training before entering specialist training in neurology. Up to the 1960s, some intending to become neurologists would also spend two years working in psychiatric units before obtaining a diploma in psychological medicine. However, that was uncommon and, now that the MRCPsych takes three years to obtain, would no longer be practical. A period of research is essential, and obtaining a higher degree aids career progression. Many found it was eased after an attachment to the Institute of Neurology at Queen Square, London. Some neurologists enter the field of rehabilitation medicine (known as physiatry in the US) to specialise in neurological rehabilitation, which may include stroke medicine, as well as traumatic brain injuries.
Physical examination
During a neurological examination, the neurologist reviews the patient's health history with special attention to the patient's neurologic complaints. The patient then takes a neurological exam. Typically, the exam tests mental status, function of the cranial nerves (including vision), strength, coordination, reflexes, sensation and gait. This information helps the neurologist determine whether the problem exists in the nervous system and the clinical localization. Localization of the pathology is the key process by which neurologists develop their differential diagnosis. Further tests may be needed to confirm a diagnosis and ultimately guide therapy and appropriate management. Useful adjunct imaging studies in neurology include CT scanning and MRI. Other tests used to assess muscle and nerve function include nerve conduction studies and electromyography.
Clinical tasks
Neurologists examine patients who are referred to them by other physicians in both the inpatient and outpatient settings. Neurologists begin their interactions with patients by taking a comprehensive medical history, and then performing a physical examination focusing on evaluating the nervous system. Components of the neurological examination include assessment of the patient's cognitive function, cranial nerves, motor strength, sensation, reflexes, coordination, and gait.
In some instances, neurologists may order additional diagnostic tests as part of the evaluation. Commonly employed tests in neurology include imaging studies such as computed axial tomography (CAT) scans, magnetic resonance imaging (MRI), and ultrasound of major blood vessels of the head and neck. Neurophysiologic studies, including electroencephalography (EEG), needle electromyography (EMG), nerve conduction studies (NCSs) and evoked potentials are also commonly ordered. Neurologists frequently perform lumbar punctures to assess characteristics of a patient's cerebrospinal fluid. Advances in genetic testing have made genetic testing an important tool in the classification of inherited neuromuscular disease and diagnosis of many other neurogenetic diseases. The role of genetic influences on the development of acquired neurologic diseases is an active area of research.
Some of the commonly encountered conditions treated by neurologists include headaches, radiculopathy, neuropathy, stroke, dementia, seizures and epilepsy, Alzheimer's disease, attention deficit/hyperactivity disorder, Parkinson's disease, Tourette's syndrome, multiple sclerosis, head trauma, sleep disorders, neuromuscular diseases, and various infections and tumors of the nervous system. Neurologists are also asked to evaluate unresponsive patients on life support to confirm brain death.
Treatment options vary depending on the neurological problem. They can include referring the patient to a physiotherapist, prescribing medications, or recommending a surgical procedure.
Some neurologists specialize in certain parts of the nervous system or in specific procedures. For example, clinical neurophysiologists specialize in the use of EEG and intraoperative monitoring to diagnose certain neurological disorders. Other neurologists specialize in the use of electrodiagnostic medicine studies – needle EMG and NCSs. In the US, physicians do not typically specialize in all the aspects of clinical neurophysiology – i.e. sleep, EEG, EMG, and NCSs. The American Board of Clinical Neurophysiology certifies US physicians in general clinical neurophysiology, epilepsy, and intraoperative monitoring. The American Board of Electrodiagnostic Medicine certifies US physicians in electrodiagnostic medicine and certifies technologists in nerve-conduction studies. Sleep medicine is a subspecialty field in the US under several medical specialties including anesthesiology, internal medicine, family medicine, and neurology. Neurosurgery is a distinct specialty that involves a different training path and emphasizes the surgical treatment of neurological disorders.
Also, many nonmedical doctors, those with doctoral degrees (usually PhDs) in subjects such as biology and chemistry, study and research the nervous system. Working in laboratories in universities, hospitals, and private companies, these neuroscientists perform clinical and laboratory experiments and tests to learn more about the nervous system and find cures or new treatments for diseases and disorders.
A great deal of overlap occurs between neuroscience and neurology. Many neurologists work in academic training hospitals, where they conduct research as neuroscientists in addition to treating patients and teaching neurology to medical students.
General caseload
Neurologists are responsible for the diagnosis, treatment, and management of all the conditions mentioned above. When surgical or endovascular intervention is required, the neurologist may refer the patient to a neurosurgeon or an interventional neuroradiologist. In some countries, additional legal responsibilities of a neurologist may include making a finding of brain death when it is suspected that a patient has died. Neurologists frequently care for people with hereditary (genetic) diseases when the major manifestations are neurological, as is frequently the case. Lumbar punctures are frequently performed by neurologists. Some neurologists may develop an interest in particular subfields, such as stroke, dementia, movement disorders, neurointensive care, headaches, epilepsy, sleep disorders, chronic pain management, multiple sclerosis, or neuromuscular diseases.
Overlapping areas
Some overlap also occurs with other specialties, varying from country to country and even within a local geographic area. Acute head trauma is most often treated by neurosurgeons, whereas sequelae of head trauma may be treated by neurologists or specialists in rehabilitation medicine. Although stroke cases have been traditionally managed by internal medicine or hospitalists, the emergence of vascular neurology and interventional neuroradiology has created a demand for stroke specialists. The establishment of Joint Commission-certified stroke centers has increased the role of neurologists in stroke care in many primary, as well as tertiary, hospitals. Some cases of nervous system infectious diseases are treated by infectious disease specialists. Most cases of headache are diagnosed and treated primarily by general practitioners, at least the less severe cases. Likewise, most cases of sciatica are treated by general practitioners, though they may be referred to neurologists or surgeons (neurosurgeons or orthopedic surgeons). Sleep disorders are also treated by pulmonologists and psychiatrists. Cerebral palsy is initially treated by pediatricians, but care may be transferred to an adult neurologist after the patient reaches a certain age. Physical medicine and rehabilitation physicians may treat patients with neuromuscular diseases with electrodiagnostic studies (needle EMG and nerve-conduction studies) and other diagnostic tools. In the United Kingdom and other countries, many of the conditions encountered by older patients such as movement disorders, including Parkinson's disease, stroke, dementia, or gait disorders, are managed predominantly by specialists in geriatric medicine.
Clinical neuropsychologists are often called upon to evaluate brain-behavior relationships for the purpose of assisting with differential diagnosis, planning rehabilitation strategies, documenting cognitive strengths and weaknesses, and measuring change over time (e.g., for identifying abnormal aging or tracking the progression of a dementia).
Relationship to clinical neurophysiology
In some countries such as the United States and Germany, neurologists may subspecialize in clinical neurophysiology, the field responsible for EEG and intraoperative monitoring, or in electrodiagnostic medicine nerve conduction studies, EMG, and evoked potentials. In other countries, this is an autonomous specialty (e.g., United Kingdom, Sweden, Spain).
Overlap with psychiatry
In the past, prior to the advent of more advanced diagnostic techniques such as MRI, some neurologists considered psychiatry and neurology to overlap. Although mental illnesses are believed by many to be neurological disorders affecting the central nervous system, traditionally they are classified separately, and treated by psychiatrists. In a 2002 review article in the American Journal of Psychiatry, Professor Joseph B. Martin, Dean of Harvard Medical School and a neurologist by training, wrote, "the separation of the two categories is arbitrary, often influenced by beliefs rather than proven scientific observations. And the fact that the brain and mind are one makes the separation artificial anyway".
Neurological disorders often have psychiatric manifestations, such as post-stroke depression, depression and dementia associated with Parkinson's disease, mood and cognitive dysfunctions in Alzheimer's disease, and Huntington disease, to name a few. Hence, the sharp distinction between neurology and psychiatry is not always on a biological basis. The dominance of psychoanalytic theory in the first three-quarters of the 20th century has since then been largely replaced by a focus on pharmacology. Despite the shift to a medical model, brain science has not advanced to a point where scientists or clinicians can point to readily discernible pathological lesions or genetic abnormalities that in and of themselves serve as reliable or predictive biomarkers of a given mental disorder.
Neurological enhancement
The emerging field of neurological enhancement highlights the potential of therapies to improve such things as workplace efficacy, attention in school, and overall happiness in personal lives. However, this field has also given rise to questions about neuroethics.
Neurological UX
Neurological UX is a specialised branch of web accessibility focusing on designing for individuals with neurological dispositions, such as ADHD, dyslexia, and autism. Coined by Gareth Slinn in his book NeurologicalUX (neurologicalux.com), this field aims to create inclusive digital experiences that reduce anxiety, enhance readability, and improve usability for diverse cognitive needs. It emphasises thoughtful design choices like accessible colour themes, simplified navigation, and adaptable interfaces to accommodate varying neurological profiles.
Additional Information
A neurologist is a medical doctor who specializes in diagnosing and treating diseases of the brain, spinal cord and nerves. Neurological diseases and conditions can affect nearly every part of your body and affect both adults and children.
Overview:
What is a neurologist?
A neurologist is a medical doctor who diagnoses, treats and manages disorders of the brain and nervous system (brain, spinal cord and nerves). A neurologist knows the anatomy, function and conditions that affect your nerves and nervous system. Your nervous system is your body’s command center. It controls everything you think, feel and do — from moving your arm to the beating of your heart.
What is a pediatric neurologist?
A pediatric neurologist is a medical doctor who diagnoses, treats and manages disorders of the brain and nervous system in children — from newborn to adolescent. Many of the conditions they treat are the same as those seen in adults, in addition to inherited and developmental conditions.
What is a neurosurgeon?
A neurosurgeon is a medical doctor who performs surgery on the brain, spinal cord and nerves.
What diseases and conditions does a neurologist treat?
Some of the most common neurologic disorders a neurologist may treat include:
* Alzheimer’s disease and other dementias.
* Amyotrophic lateral sclerosis (also called ALS or Lou Gehrig’s disease).
* Brain injury, spinal cord injury or vascular malformations.
* Cerebral aneurysms and arteriovenous malformations.
* Cerebral palsy and spasticity.
* Concussion.
* Encephalitis.
* Epilepsy.
* Facial pain syndromes.
* Headache/migraine.
* Hydrocephalus.
* Meningitis.
* Mental and behavioral health disorders.
* Multiple sclerosis.
* Myasthenia gravis and myopathies.
* Pain in your neck, back and spine.
* Parkinson’s disease.
* Peripheral neuropathy.
* Sleep disorders.
* Stroke.
* Tremor, dystonia.
* Tumors of the brain, spine and nerves.
How do neurologists diagnose conditions?
Your neurologist will ask about your medical history, family history, medication history and any current symptoms. They’ll also conduct a neurologic examination, including tests of your:
* Coordination, balance, reflexes and gait.
* Muscle strength.
* Mental health.
* Vision, hearing and speech.
* Sensation.
Your neurologist may also order blood, urine or other fluid tests in order to help understand condition severity or check on medication levels. Genetic testing may be ordered to identify inherited disorders. Imaging studies of your nervous system might also be ordered to aid in diagnosis.
Neurologists treat people with medications, physical therapy or other approaches.
What types of tests does a neurologist order?
Common neurologic tests include:
* Angiography. Angiography can show if blood vessels in your brain, head or neck are blocked, damaged or abnormal. It can detect such things as aneurysms and blood clots.
* Biopsy. A biopsy is the removal of a piece of tissue from your body. Biopsies may be taken of muscle, nerve or brain tissue.
* Cerebrospinal fluid analysis. This test involves the removal of a sample of the fluid that surrounds your brain and spinal cord. The test can detect evidence of a brain bleed, infection, multiple sclerosis and metabolic diseases.
* Computed tomography (CT), magnetic resonance imaging (MRI), X-rays and ultrasound.
* Electroencephalography (EEG). This test measures your brain’s electrical activity and is used to help diagnose seizures, infections (such as encephalitis), brain injury and tumors.
* Electromyography (EMG). This test records the electrical activity in muscles and is used to diagnose nerve and muscle disorders, spinal nerve root compression and motor neuron disorders such as amyotrophic lateral sclerosis.
* Electronystagmography (ENG). This group of tests is used to diagnose involuntary eye movement, dizziness and balance disorders.
* Evoked potentials. This test measures how quickly and completely electrical signals reach your brain from your eyes, ears or touch to your skin. The test can help diagnose multiple sclerosis, acoustic neuroma and spinal cord injury.
* Myelography. This test helps diagnose spinal and spinal cord tumors and herniated disks and fractures.
* Polysomnogram. This test measures brain and body activity during sleep and helps diagnose sleep disorders.
* Positron emission tomography (PET). This imaging test can show tumors or be used to evaluate epilepsy, brain tumors, dementia and Alzheimer’s disease.
* Single-photon emission computed tomography (SPECT). This imaging test can help diagnose tumors and infections and assess the location of seizures, degenerative spine disease and stress fractures.
* Thermography. This test measures temperature changes within your body or specific organs and is used to evaluate pain syndromes, peripheral nerve disorders and nerve root compression.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Offline
2313) Electromyography
Gist
Electromyography (EMG) measures muscle response or electrical activity in response to a nerve's stimulation of the muscle. The test is used to help detect neuromuscular abnormalities.
Electromyography (EMG) is a diagnostic procedure to assess the health of muscles and the nerve cells that control them (motor neurons). EMG results can reveal nerve dysfunction, muscle dysfunction or problems with nerve-to-muscle signal transmission.
Summary
Electromyography (EMG) is a test that checks the health of the muscles and the nerves that control the muscles.
How the Test is Performed
Your health care provider inserts a very thin needle electrode through your skin into one of your muscles. The electrode on the needle picks up the electrical activity given off by your muscles. This activity appears on a nearby monitor and may be heard through a speaker.
After placement of the electrodes, you may be asked to contract the muscle, for example by bending your arm. The electrical activity seen on the monitor provides information about your muscle's ability to respond when the nerves to your muscles are stimulated.
A nerve conduction velocity test is almost always performed during the same visit as an EMG. The velocity test is done to see how fast and strong electrical signals move through a nerve.
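To illustrate the "how fast" part, a motor nerve conduction velocity is typically calculated as the distance between two stimulation sites divided by the difference in response latencies. Below is a minimal sketch in Python; the distances and latencies are hypothetical values chosen only for illustration, not clinical data.

```python
# Illustrative calculation of motor nerve conduction velocity (NCV).
# All numbers are hypothetical; clinical studies use calibrated equipment.

def conduction_velocity_m_per_s(distance_mm: float,
                                proximal_latency_ms: float,
                                distal_latency_ms: float) -> float:
    """Velocity = distance between stimulation sites / latency difference."""
    latency_diff_ms = proximal_latency_ms - distal_latency_ms
    if latency_diff_ms <= 0:
        raise ValueError("Proximal latency must exceed distal latency.")
    # mm/ms is numerically equal to m/s.
    return distance_mm / latency_diff_ms

# Example: 240 mm between two stimulation sites, latencies 7.8 ms and 3.4 ms.
print(conduction_velocity_m_per_s(240, 7.8, 3.4))  # about 54.5 m/s
```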
How to Prepare for the Test
No special preparation is usually necessary. Avoid using any creams or lotions on the day of the test.
Body temperature can affect the results of this test. If it is extremely cold outside, you may be told to wait in a warm room for a while before the test is performed.
If you are taking blood thinners or anticoagulants, inform the provider performing the test before it is done.
How the Test will Feel
You may feel some pain or discomfort when the needles are inserted. But most people are able to complete the test without problems.
Afterward, the muscle may feel tender or bruised for a few days.
Why the Test is Performed
EMG is most often used when a person has symptoms of weakness, pain, or abnormal sensation. It can help tell the difference between muscle weakness caused by the injury of a nerve attached to a muscle, and weakness due to a muscle or other nervous system disease.
Normal Results
There is normally very little electrical activity in a muscle while at rest. Inserting the needles can cause some electrical activity, but once the muscles quiet down, there should be little electrical activity detected.
When you flex a muscle, activity begins to appear. As you contract your muscle more, the electrical activity increases and a pattern can be seen. This pattern helps your provider determine if the muscle is responding as it should.
What Abnormal Results Mean
An EMG can detect problems with your muscles during rest or activity. Disorders or conditions that cause abnormal results include the following:
* Alcoholic neuropathy (damage to nerves from drinking too much alcohol)
* Amyotrophic lateral sclerosis (ALS; disease of the nerve cells in the brain and spinal cord that control muscle movement)
* Axillary nerve dysfunction (damage of the nerve that controls shoulder movement and sensation)
* Becker muscular dystrophy (muscle weakness of the legs and pelvis)
* Brachial plexopathy (problem affecting the set of nerves that leave the neck and enter the arm)
* Carpal tunnel syndrome (problem affecting the median nerve in the wrist and hand)
* Cubital tunnel syndrome (problem affecting the ulnar nerve in the elbow)
* Cervical spondylosis (neck pain from wear on the disks and bones of the neck)
* Common peroneal nerve dysfunction (damage of the peroneal nerve leading to loss of movement or sensation in the foot and leg)
* Denervation (reduced nerve stimulation of a muscle)
* Dermatomyositis (muscle disease that involves inflammation and a skin rash)
* Distal median nerve dysfunction (problem affecting the median nerve in the arm)
* Duchenne muscular dystrophy (inherited disease that involves muscle weakness)
* Facioscapulohumeral muscular dystrophy (Landouzy-Dejerine; disease of muscle weakness and loss of muscle tissue)
* Familial periodic paralysis (disorder that causes muscle weakness and sometimes a lower than normal level of potassium in the blood)
* Femoral nerve dysfunction (loss of movement or sensation in parts of the legs due to damage to the femoral nerve)
* Friedreich ataxia (inherited disease that affects areas in the brain and spinal cord that control coordination, muscle movement, and other functions)
* Guillain-Barré syndrome (autoimmune disorder of the nerves that leads to muscle weakness or paralysis)
* Lambert-Eaton syndrome (autoimmune disorder of the nerves that causes muscle weakness)
* Multiple mononeuropathy (a nervous system disorder that involves damage to at least 2 separate nerve areas)
* Mononeuropathy (damage to a single nerve that results in loss of movement, sensation, or other function of that nerve)
* Myopathy (muscle degeneration caused by a number of disorders, including muscular dystrophy)
* Myasthenia gravis (autoimmune disorder of the nerves that causes weakness of the voluntary muscles)
* Peripheral neuropathy (damage of nerves away from the brain and spinal cord)
* Polymyositis (muscle weakness, swelling, tenderness, and tissue damage of the skeletal muscles)
* Radial nerve dysfunction (damage of the radial nerve causing loss of movement or sensation in the back of the arm or hand)
* Radiculopathy (injury to nerve roots as they exit the spine most often in the neck or lower back)
* Sciatic nerve dysfunction (injury to or pressure on the sciatic nerve that causes weakness, numbness, or tingling in the leg)
* Sensorimotor polyneuropathy (condition that causes a decreased ability to move or feel because of nerve damage)
* Shy-Drager syndrome (nervous system disease that causes body-wide symptoms)
* Thyrotoxic periodic paralysis (muscle weakness from high levels of thyroid hormone)
* Tibial nerve dysfunction (damage of the tibial nerve causing loss of movement or sensation in the foot)
Risks
Risks of this test include:
* Bleeding (minimal)
* Infection at the electrode sites (rare).
Details
Electromyography (EMG) is a technique for evaluating and recording the electrical activity produced by skeletal muscles. EMG is performed using an instrument called an electromyograph to produce a record called an electromyogram. An electromyograph detects the electric potential generated by muscle cells when these cells are electrically or neurologically activated. The signals can be analyzed to detect abnormalities, activation level, or recruitment order, or to analyze the biomechanics of human or animal movement. Needle EMG is an electrodiagnostic medicine technique commonly used by neurologists. Surface EMG is a non-medical procedure used to assess muscle activation by several professionals, including physiotherapists, kinesiologists and biomedical engineers. In computer science, EMG is also used as middleware in gesture recognition towards allowing the input of physical action to a computer as a form of human-computer interaction.
Clinical uses
EMG testing has a variety of clinical and biomedical applications. Needle EMG is used as a diagnostics tool for identifying neuromuscular diseases, or as a research tool for studying kinesiology, and disorders of motor control. EMG signals are sometimes used to guide botulinum toxin or phenol injections into muscles. Surface EMG is used for functional diagnosis and during instrumental motion analysis. EMG signals are also used as a control signal for prosthetic devices such as prosthetic hands, arms and lower limbs.
An acceleromyograph may be used for neuromuscular monitoring in general anesthesia with neuromuscular-blocking drugs, in order to avoid postoperative residual curarization (PORC).
Except in the case of some purely primary myopathic conditions, EMG is usually performed with another electrodiagnostic medicine test that measures the conducting function of nerves. This is called a nerve conduction study (NCS). Needle EMG and NCSs are typically indicated when there is pain in the limbs, weakness from spinal nerve compression, or concern about some other neurologic injury or disorder. Spinal nerve injury does not cause neck pain, mid back pain, or low back pain, and for this reason, evidence has not shown EMG or NCS to be helpful in diagnosing causes of axial lumbar pain, thoracic pain, or cervical spine pain. Needle EMG may aid with the diagnosis of nerve compression or injury (such as carpal tunnel syndrome), nerve root injury (such as sciatica), and with other problems of the muscles or nerves. Less common medical conditions include amyotrophic lateral sclerosis, myasthenia gravis, and muscular dystrophy.
Technique:
Skin preparation and risks
The first step before insertion of the needle electrode is skin preparation. This typically involves simply cleaning the skin with an alcohol pad.
The actual placement of the needle electrode can be difficult and depends on a number of factors, such as specific muscle selection and the size of that muscle. Proper needle EMG placement is very important for accurate representation of the muscle of interest, although EMG is more effective on superficial muscles, since the action potentials of overlying superficial muscles cannot be bypassed to detect deeper muscles. Also, the more body fat an individual has, the weaker the EMG signal. When placing the EMG sensor, the ideal location is at the belly of the muscle: the longitudinal midline. The belly of the muscle can also be thought of as being between the motor point (middle) of the muscle and the tendinous insertion point.
Cardiac pacemakers and implanted cardiac defibrillators (ICDs) are used increasingly in clinical practice, and no evidence exists indicating that performing routine electrodiagnostic studies on patients with these devices poses a safety hazard. However, there are theoretical concerns that electrical impulses of nerve conduction studies (NCS) could be erroneously sensed by devices and result in unintended inhibition or triggering of output or reprogramming of the device. In general, the closer the stimulation site is to the pacemaker and pacing leads, the greater the chance for inducing a voltage of sufficient amplitude to inhibit the pacemaker. Despite such concerns, no immediate or delayed adverse effects have been reported with routine NCS.
No known contraindications exist for performing needle EMG or NCS on pregnant patients. Additionally, no complications from these procedures have been reported in the literature. Evoked potential testing, likewise, has not been reported to cause any problems when it is performed during pregnancy.
Patients with lymphedema or patients at risk for lymphedema are routinely cautioned to avoid percutaneous procedures in the affected extremity, namely venipuncture, to prevent development or worsening of lymphedema or cellulitis. Despite the potential risk, the evidence for such complications subsequent to venipuncture is limited. No published reports exist of cellulitis, infection, or other complications related to EMG performed in the setting of lymphedema or prior lymph node dissection. However, given the unknown risk of cellulitis in patients with lymphedema, reasonable caution should be exercised in performing needle examinations in lymphedematous regions to avoid complications. In patients with gross edema and taut skin, skin puncture by needle electrodes may result in chronic weeping of serous fluid. The potential bacterial media of such serous fluid and the violation of skin integrity may increase the risk of cellulitis. Before proceeding, the physician should weigh the potential risks of performing the study with the need to obtain the information gained.
Surface and intramuscular EMG recording electrodes
There are two kinds of EMG: surface EMG and intramuscular EMG. Surface EMG assesses muscle function by recording muscle activity from the surface above the muscle on the skin. Surface EMG can be recorded by a pair of electrodes or by a more complex array of multiple electrodes. More than one electrode is needed because EMG recordings display the potential difference (voltage difference) between two separate electrodes. Limitations of this approach are the fact that surface electrode recordings are restricted to superficial muscles, are influenced by the depth of the subcutaneous tissue at the site of the recording which can be highly variable depending on the weight of a patient, and cannot reliably discriminate between the discharges of adjacent muscles. Specific electrode placements and functional tests have been developed to minimize this risk, thus providing reliable examinations.
Intramuscular EMG can be performed using a variety of different types of recording electrodes. The simplest approach is a monopolar needle electrode. This can be a fine wire inserted into a muscle with a surface electrode as a reference; or two fine wires inserted into muscle referenced to each other. Most commonly fine wire recordings are for research or kinesiology studies. Diagnostic monopolar EMG electrodes are typically insulated and stiff enough to penetrate skin, with only the tip exposed using a surface electrode for reference. Needles for injecting therapeutic botulinum toxin or phenol are typically monopolar electrodes that use a surface reference, in this case, however, the metal shaft of a hypodermic needle, insulated so that only the tip is exposed, is used both to record signals and to inject. Slightly more complex in design is the concentric needle electrode. These needles have a fine wire, embedded in a layer of insulation that fills the barrel of a hypodermic needle, that has an exposed shaft, and the shaft serves as the reference electrode. The exposed tip of the fine wire serves as the active electrode. As a result of this configuration, signals tend to be smaller when recorded from a concentric electrode than when recorded from a monopolar electrode and they are more resistant to electrical artifacts from tissue and measurements tend to be somewhat more reliable. However, because the shaft is exposed throughout its length, superficial muscle activity can contaminate the recording of deeper muscles. Single fiber EMG needle electrodes are designed to have very tiny recording areas, and allow for the discharges of individual muscle fibers to be discriminated.
To perform intramuscular EMG, typically either a monopolar or concentric needle electrode is inserted through the skin into the muscle tissue. The needle is then moved to multiple spots within a relaxed muscle to evaluate both insertional activity and resting activity in the muscle. Normal muscles exhibit a brief burst of muscle fiber activation when stimulated by needle movement, but this rarely lasts more than 100ms. The two most common pathologic types of resting activity in muscle are fasciculation and fibrillation potentials. A fasciculation potential is an involuntary activation of a motor unit within the muscle, sometimes visible with the naked eye as a muscle twitch or by surface electrodes. Fibrillations, however, are detected only by needle EMG, and represent the isolated activation of individual muscle fibers, usually as the result of nerve or muscle disease. Often, fibrillations are triggered by needle movement (insertional activity) and persist for several seconds or more after the movement ceases.
After assessing resting and insertional activity, the electromyographer assesses the activity of the muscle during voluntary contraction. The shape, size, and frequency of the resulting electrical signals are judged. Then the electrode is retracted a few millimetres, and again the activity is analyzed. This is repeated, sometimes until data on 10–20 motor units have been collected in order to draw conclusions about motor unit function. Each electrode track gives only a very local picture of the activity of the whole muscle. Because skeletal muscles differ in their inner structure, the electrode has to be placed at various locations to obtain an accurate study. For the interpretation of an EMG study, it is important to evaluate the parameters of the tested muscle's motor units. This process may well be partially automated using appropriate software.
Single fiber electromyography assesses the delay between the contractions of individual muscle fibers within a motor unit and is a sensitive test for dysfunction of the neuromuscular junction caused by drugs, poisons, or diseases such as myasthenia gravis. The technique is complicated and typically performed only by individuals with special advanced training.
Surface EMG is used in a number of settings; for example, in the physiotherapy clinic, muscle activation is monitored using surface EMG and patients have an auditory or visual stimulus to help them know when they are activating the muscle (biofeedback). A review of the literature on surface EMG published in 2008, concluded that surface EMG may be useful to detect the presence of neuromuscular disease (level C rating, class III data), but there are insufficient data to support its utility for distinguishing between neuropathic and myopathic conditions or for the diagnosis of specific neuromuscular diseases. EMGs may be useful for additional study of fatigue associated with post-poliomyelitis syndrome and electromechanical function in myotonic dystrophy (level C rating, class III data). Recently, with the rise of technology in sports, sEMG has become an area of focus for coaches to reduce the incidence of soft tissue injury and improve player performance.
Certain US states limit the performance of needle EMG by nonphysicians. New Jersey declared that it cannot be delegated to a physician's assistant. Michigan has passed legislation saying needle EMG is the practice of medicine. Special training in diagnosing medical diseases with EMG is required only in residency and fellowship programs in neurology, clinical neurophysiology, neuromuscular medicine, and physical medicine and rehabilitation. There are certain subspecialists in otolaryngology who have had selective training in performing EMG of the laryngeal muscles, and subspecialists in urology, obstetrics and gynecology who have had selective training in performing EMG of muscles controlling bowel and bladder function.
Maximal voluntary contraction
One basic function of EMG is to see how well a muscle can be activated. The most common way that can be determined is by performing a maximal voluntary contraction (MVC) of the muscle that is being tested. Each muscle group type has different characteristics, and MVC positions are varied for different muscle group types. Therefore, the researcher should be very careful while choosing the MVC position type to elicit the greater muscle activity level from the subjects.
The types of MVC positions can vary among muscle types, contingent upon the specific muscle group being considered, including trunk muscles, lower limb muscles, and others.
Muscle force, which is measured mechanically, typically correlates highly with measures of EMG activation of muscle. Most commonly this is assessed with surface electrodes, but it should be recognized that these typically record only from muscle fibers in close proximity to the surface.
Several analytical methods for determining muscle activation are commonly used depending on the application. The use of mean EMG activation or the peak contraction value is a debated topic. Most studies commonly use the maximal voluntary contraction as a means of analyzing peak force and force generated by target muscles. According to the article "Peak and average rectified EMG measures: Which method of data reduction should be used for assessing core training exercises?", it was concluded that the "average rectified EMG data (ARV) is significantly less variable when measuring the muscle activity of the core musculature compared to the peak EMG variable." Therefore, these researchers would suggest that "ARV EMG data should be recorded alongside the peak EMG measure when assessing core exercises." Providing the reader with both sets of data would result in enhanced validity of the study and potentially eradicate the contradictions within the research.
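As a minimal sketch of the two amplitude measures discussed above, the snippet below computes the average rectified value and the peak of a recording and expresses each as a percentage of an MVC reference. It assumes the signals have already been full-wave rectified; the NumPy arrays stand in for real recordings and are purely illustrative.

```python
import numpy as np

def arv(rectified_emg: np.ndarray) -> float:
    """Average rectified value: mean of the full-wave rectified signal."""
    return float(np.mean(rectified_emg))

def peak(rectified_emg: np.ndarray) -> float:
    """Peak amplitude of the rectified signal."""
    return float(np.max(rectified_emg))

def percent_mvc(value: float, mvc_value: float) -> float:
    """Express an amplitude measure as a percentage of the MVC reference."""
    return 100.0 * value / mvc_value

# Hypothetical rectified recordings: an exercise trial and an MVC trial.
rng = np.random.default_rng(0)
emg = np.abs(rng.normal(0.0, 0.2, 2000))       # exercise trial (arbitrary units)
mvc_emg = np.abs(rng.normal(0.0, 0.5, 2000))   # MVC reference trial

print("ARV  %MVC:", percent_mvc(arv(emg), arv(mvc_emg)))
print("Peak %MVC:", percent_mvc(peak(emg), peak(mvc_emg)))
```

Reporting both values, as the study cited above suggests, lets readers judge how sensitive the conclusions are to the choice of amplitude measure.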
Other measurements
EMG can also be used for indicating the amount of fatigue in a muscle. The following changes in the EMG signal can signify muscle fatigue: an increase in the mean absolute value of the signal, an increase in the amplitude and duration of the muscle action potential, and an overall shift to lower frequencies. Monitoring these frequency changes is the most common way of using EMG to determine levels of fatigue. The lower conduction velocities enable the slower motor neurons to remain active.
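One common way to quantify that downward frequency shift is to track the median frequency of the signal's power spectrum over successive windows. The sketch below uses SciPy's Welch estimator on placeholder data with an assumed sampling rate; it illustrates the idea rather than a clinical analysis pipeline.

```python
import numpy as np
from scipy.signal import welch

def median_frequency(emg_window: np.ndarray, fs: float) -> float:
    """Frequency that splits the EMG power spectrum into two equal halves."""
    freqs, psd = welch(emg_window, fs=fs, nperseg=min(1024, len(emg_window)))
    cumulative = np.cumsum(psd)
    half_power = cumulative[-1] / 2.0
    return float(freqs[np.searchsorted(cumulative, half_power)])

# Hypothetical use: split a long recording into 1-second windows and watch the
# median frequency; a sustained drop over time is consistent with fatigue.
fs = 1000.0                                               # assumed sampling rate (Hz)
signal = np.random.default_rng(1).normal(size=int(30 * fs))  # placeholder data
windows = np.array_split(signal, 30)
trend = [median_frequency(w, fs) for w in windows]
print(trend[:5])
```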
A motor unit is defined as one motor neuron and all of the muscle fibers it innervates. When a motor unit fires, the impulse (called an action potential) is carried down the motor neuron to the muscle. The area where the nerve contacts the muscle is called the neuromuscular junction, or the motor end plate. After the action potential is transmitted across the neuromuscular junction, an action potential is elicited in all of the innervated muscle fibers of that particular motor unit. The sum of all this electrical activity is known as a motor unit action potential (MUAP). This electrophysiologic activity from multiple motor units is the signal typically evaluated during an EMG. The composition of the motor unit, the number of muscle fibres per motor unit, the metabolic type of muscle fibres and many other factors affect the shape of the motor unit potentials in the myogram.
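To make the summation idea concrete, the toy simulation below adds up impulse trains from three hypothetical motor units, each convolved with the same crude MUAP template. Every number in it (sampling rate, firing rates, amplitudes, template shape) is an assumption chosen for illustration; it is not a physiological model.

```python
import numpy as np

fs = 2000.0                       # assumed sampling rate in Hz
duration_s = 1.0
n = int(fs * duration_s)

# Crude biphasic MUAP template, roughly 10 ms long (purely illustrative).
tau = np.arange(int(0.010 * fs)) / fs
template = np.sin(2 * np.pi * 100 * tau) * np.exp(-tau / 0.004)

rng = np.random.default_rng(2)
emg = np.zeros(n)
for rate_hz, amplitude in [(8, 1.0), (12, 0.7), (15, 0.5)]:   # three motor units
    firing_times = np.arange(0.0, duration_s, 1.0 / rate_hz)
    firing_times = firing_times + rng.normal(0.0, 0.005, firing_times.size)  # jitter
    impulses = np.zeros(n)
    idx = (firing_times * fs).astype(int)
    idx = idx[(idx >= 0) & (idx < n)]
    impulses[idx] = amplitude
    emg += np.convolve(impulses, template)[:n]   # summed MUAP train for this unit

emg += rng.normal(0.0, 0.02, n)                  # add measurement noise
print(emg[:10])
```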
Nerve conduction testing is also often done at the same time as an EMG to diagnose neurological diseases.
Some patients can find the procedure somewhat painful, whereas others experience only a small amount of discomfort when the needle is inserted. The muscle or muscles being tested may be slightly sore for a day or two after the procedure.
EMG signal decomposition
EMG signals are essentially made up of superimposed motor unit action potentials (MUAPs) from several motor units. For a thorough analysis, the measured EMG signals can be decomposed into their constituent MUAPs. MUAPs from different motor units tend to have different characteristic shapes, while MUAPs recorded by the same electrode from the same motor unit are typically similar. Notably MUAP size and shape depend on where the electrode is located with respect to the fibers and so can appear to be different if the electrode moves position. EMG decomposition is non-trivial, although many methods have been proposed.
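A greatly simplified sketch of the decomposition idea follows: detect spikes, cut out the surrounding waveforms, and cluster them by shape. It assumes well-separated, non-overlapping MUAPs and uses a plain k-means step, whereas the decomposition methods referred to above are considerably more sophisticated; all names and thresholds here are illustrative choices.

```python
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.signal import find_peaks

def decompose(emg: np.ndarray, fs: float, n_units: int = 3):
    """Toy decomposition: detect spikes, cut out waveforms, cluster by shape."""
    half = int(0.005 * fs)                          # 5 ms on either side of a peak
    peaks, _ = find_peaks(np.abs(emg), height=3 * np.std(emg), distance=half)
    peaks = peaks[(peaks > half) & (peaks < len(emg) - half)]
    waveforms = np.array([emg[p - half:p + half] for p in peaks])
    if len(waveforms) < n_units:                    # too few spikes to cluster
        return peaks, np.zeros(len(peaks), dtype=int)
    _, labels = kmeans2(waveforms, n_units, minit="points")
    return peaks, labels                            # firing indices and unit labels

# Hypothetical use on a synthetic trace: background noise plus a few inserted bumps.
rng = np.random.default_rng(3)
fs = 2000.0
trace = rng.normal(0.0, 0.05, int(fs))
for p in (200, 700, 1200, 1700):
    trace[p:p + 10] += np.hanning(10)               # crude stand-ins for MUAPs
spike_indices, unit_labels = decompose(trace, fs)
print(spike_indices, unit_labels)
```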
EMG signal processing
Rectification is the translation of the raw EMG signal to a signal with a single polarity, usually positive. The purpose of rectifying the signal is to ensure the signal does not average to zero, since the raw EMG signal has positive and negative components. Two types of rectification are used: full-wave and half-wave rectification. Full-wave rectification adds the EMG signal below the baseline to the signal above the baseline to make a conditioned signal that is all positive. If the baseline is zero, this is equivalent to taking the absolute value of the signal. This is the preferred method of rectification because it conserves all of the signal energy for analysis. Half-wave rectification discards the portion of the EMG signal that is below the baseline. In doing so, the average of the data is no longer zero, and therefore it can be used in statistical analyses.
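A minimal illustration of the two rectification schemes, assuming a zero baseline and a handful of made-up raw samples:

```python
import numpy as np

raw = np.array([0.12, -0.40, 0.05, -0.07, 0.33, -0.21])  # hypothetical raw EMG samples

full_wave = np.abs(raw)            # folds negative samples up above the baseline
half_wave = np.clip(raw, 0, None)  # discards everything below the baseline

print(full_wave.mean(), half_wave.mean())  # neither average is zero any more
```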
Limitations
Needle EMG used in clinical settings has practical applications such as helping to discover disease. Needle EMG has limitations, however, in that it does involve voluntary activation of muscle, and as such is less informative in patients unwilling or unable to cooperate, in children and infants, and in individuals with paralysis. Surface EMG can have limited applications due to inherent problems associated with surface recording. Adipose tissue (fat) can affect EMG recordings: studies show that as adipose tissue increases, the amplitude of the surface EMG signal directly above the center of the active muscle decreases. EMG signal recordings are typically more accurate in individuals with lower body fat and more compliant skin, such as younger people compared with older people. Muscle cross talk occurs when the EMG signal from one muscle interferes with that of another, limiting the reliability of the signal from the muscle being tested. Surface EMG is also limited because it cannot reliably record from deep muscles; deep muscles require intramuscular wires, which are intrusive and painful, in order to obtain an EMG signal. Surface EMG can measure only superficial muscles, and even then it is hard to narrow down the signal to a single muscle.
Electrical characteristics
The electrical source is the muscle membrane potential of about –90 mV. Measured EMG potentials range from less than 50 μV up to 30 mV, depending on the muscle under observation.
Typical repetition rate of muscle motor unit firing is about 7–20 Hz, depending on the size of the muscle (eye muscles versus seat (gluteal) muscles), previous axonal damage and other factors. Damage to motor units can be expected at ranges between 450 and 780 mV.
Additional Information
Electromyography (EMG) is a scientific technique that measures the electrical activity of the nervous system during muscle contractions. It utilizes electrodes to detect the electrical signals produced by the depolarization of muscle fibers, allowing researchers to study the function and behavior of skeletal muscles.
Electromyography (EMG) measures the activity of the nervous system as it manifests during the contraction of skeletal muscles. Movements, voluntary or reflex, originating in the brain or spinal cord will, in the absence of impairment, result in the depolarization of a peripheral nerve which in turn will activate skeletal muscles. The biochemical process by which a muscle contracts results in an electrical field that can be measured either intramuscularly or from the surface of the skin.
The electrical signal measured during electromyography originates from the depolarization of the sarcolemma, a gated plasma membrane that responds to the release of the neurotransmitter acetylcholine across the synaptic cleft, the gap between the muscle fibre membrane and the nerve that innervates it.
The muscle or fascia is formed of many muscle fibres or fascicles, which are formed of many myofibrils, a series of contractile elements connected end-to-end. Muscle fibres are innervated by efferent motor nerves. These are distinct from the sensory or afferent nerves that are also present, which originate within the grey matter ventral horn of the spinal cord and are contiguous with the alpha motor neuron – an all-or-nothing trigger that summates the huge number of inhibitive and excitatory inputs from the spinal and supraspinal centres. The alpha motor neuron, the nerve, and the muscle fibre it innervates are termed the motor unit. This is an important insight into the complexity of the electromyographic signal because any fascia, or collection of fascicles, may have many motor units and many fibre innervations. Therefore, many moving electrical fields generated by, almost certainly, phasically dissimilar and geographically distributed sources form any single electromyography signal.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Offline
2314) Babysitter/Babysitting
Gist
A babysitter at home is someone who is paid, usually by the hour, to care for another person's children. A babysitter usually spends an evening or an afternoon at someone's house, playing with or caring for kids while their parents are away.
Summary
The term may have originated from the caretaker "sitting on" the baby in one room while the parents were entertaining or busy in another. It is also theorized that the term may come from hens "sitting" on their eggs, thus "caring for" their chicks.
Babysitting often involves quick thinking and requires solving problems in the moment. Whether managing a disgruntled child at bedtime, resolving a disagreement between siblings, or handling an accident, problem-solving abilities are essential babysitter skills.
Details
Babysitting is temporarily caring for a child. Babysitting can be a paid job for all ages; however, it is best known as a temporary activity for early teenagers who are not yet eligible for employment in the general economy. It provides autonomy from parental control and dispensable income, as well as an introduction to the techniques of childcare. It emerged as a social role for teenagers in the 1920s, and became especially important in suburban America in the 1950s and 1960s, when small children were abundant. It stimulated an outpouring of folk culture in the form of urban legends, pulp novels, and horror films.
Overall:
In developed countries, most babysitters are high-school or college students (age 16+). Some adults provide in-home childcare as well; they are not babysitters but professional childcare providers and early-childhood educators. The work of babysitters also varies from watching a sleeping child, changing diapers, playing games, and preparing meals, to teaching the child to read or even drive, depending on the agreement between parents and babysitter.
In some countries, various organizations produce courses for babysitters, many focusing on child safety and first aid appropriate for infants and children; these educational programs can be provided at local hospitals and schools. Different activities are needed for babies and toddlers. It is beneficial for babysitters to understand toddler developmental milestones to plan for necessary activities. As paid employees, babysitters often require a disclosure or assessment of one's criminal record to ward off possible hebephiles, pedophiles, and other unsuitable applicants.
Babysitting and gender:
History:
1920s
Despite women gaining the right to vote with the passage of the Nineteenth Amendment, traditional gender roles persisted, particularly concerning motherhood and domestic duties. Women's main duties included housekeeping, meal preparation, and caring for children. However, by 1920, women were about 20% of the overall workforce, raising concerns about women's independence.
Although modern household appliances were marketed as time-saving, rising cleanliness standards meant that mothers spent more time on household chores. While family size decreased, meaning women bore fewer children, they also dedicated more time to child-rearing, following advice from psychologists like John B. Watson and Arnold Gesell.
Leisure activities gained cultural importance and children enjoyed an abundance of toys and games, but mothers faced criticism for neglecting maternal duties if they also pursued leisure activities.
Historically, girls from various backgrounds had been responsible for childcare duties, but societal changes led to the disappearance of roles like "Little Mothers" and "baby tenders." These shifts reflected evolving notions of childhood and girlhood because adolescent girls were seen as ill-equipped to care for younger children.
In the 1920s, most middle-class girls did not rely on babysitting for extra income because they received allowances from parents. Only a small percentage of high school girls earned their own spending money independently. However, sociologist Ernest R. Groves warned against hiring high school girls as babysitters, because of fears about their immaturity and lack of responsibility.
1930s
The field of babysitting experienced significant growth during the Great Depression partly due to families' financial constraints, which limited teenagers' allowances and job opportunities. Many teenage girls became "mother's helpers" or "neighborhood helpers." The rise of youth culture, fostered by increasing high school attendance and consumerism, also played a role.
However, the growing visibility of teenage girls as babysitters also raised concerns among adults. Some adults disapproved of teenage girls spending their earnings, including purchasing makeup. Babysitters were also criticized for prioritizing socializing over their responsibilities, such as chatting on the phone while working.
During the Great Depression, concerns about teenage girls' behavior and the need for better childcare led to the employment of male "child tenders," a term used before "babysitter." Many adolescent boys were among the one million unemployed youth during this time and they took on various jobs to earn money, including household chores and tutoring. Some women preferred hiring boys because they believed that they were more responsible.
Babysitting emerged as a means of socially rehabilitating girlhood. To attract teenage girls to babysitting, it was presented as a pathway to independence and future career success. Advocates suggested that babysitting would equip girls with valuable skills for future careers. Publications like The American Girl magazine and the Camp Fire Girls' Everygirls magazine framed babysitting as a practical skill for present childcare needs and future homemaking responsibilities. But some believed that girls deserved better job opportunities than childcare. Parents' expectations were inconsistent and demanding, requiring babysitters to perform various household tasks alongside childcare duties.
Despite legislative efforts like the Fair Labor Standards Act of 1938, which restricted employment for those under seventeen, babysitters were still tasked with chores beyond childcare. Many Depression-era mothers tasked babysitters with additional household responsibilities. Fifteen- to eighteen-year-old girls were often treated unfairly by employers, who sometimes failed to provide adequate instructions and pay. The American Home magazine criticized parent-employers for their treatment of babysitters. Babysitters were frequently underpaid or not paid at all.
1940s
During World War II, the demand for babysitters increased significantly because of the rising birth rate and working mothers' need for childcare. Despite the low pay of twenty-five cents per hour, babysitting offered adolescent girls autonomy.
However, many girls left babysitting for better-paying positions in war production centers and other industries. By 1944, the number of working girls had increased significantly compared to pre-war levels. The scarcity of babysitters made many mothers rely on grandparents for childcare.
Adults during World War II saw babysitting as a solution to social problems, aiming to keep teenage girls off the streets, provide them with respectable roles, and prepare them for future domestic responsibilities. Similar to approaches taken during the Great Depression, wartime authorities promoted babysitting as a patriotic duty, encouraging girls to contribute to the war effort by caring for children. Organizations like the Girl Scouts and Wellesley College offered training in childcare, and magazines like Calling All Girls praised babysitting as a vital wartime service.
However, many teenage girls preferred jobs that offered better pay, status, and social opportunities, leading to a shortage of babysitters. Consequently, younger children, often as young as fourth or fifth graders, ended up assuming caregiving roles in households. Organizations like the Children's Aid Society began offering childcare courses to younger girls to address the shortage. These courses taught practical skills like diapering and preparing formula, aiming to assure mothers that young babysitters were reliable sources of childcare.
End of the 20th century
The introduction of "The Bad Baby-Sitters Handbook" in 1991 marked a shift in sentiment among teenage girls towards babysitting. While experts and fiction often depicted babysitting as empowering, many real-life babysitters disagreed. They faced last-minute calls, low pay, and uncomfortable situations in employers' homes, including inappropriate behavior. Despite guidance, babysitters struggled to assert themselves and negotiate fair wages.
Girls frequently found themselves underpaid, with boys often earning more for similar tasks. The feminist concept of comparable worth influenced their perception of the value of babysitting work, leading to frustration over gender-based wage disparities. However, many did not discuss payment with their employers or negotiate raises.
Additionally, babysitters often encountered challenges related to employers' tardiness, cancellations, and lack of important information. While some employers provided emergency contacts and instructions, other babysitters were unprepared.
Babysitters often had positive experiences with considerate parents of well-behaved children, who treated them as professionals rather than just employees. Many employers followed advice from magazines like Working Woman, emphasizing the importance of establishing a good working relationship with babysitters. Some babysitters did not mind last-minute cancellations, seeing them as unexpected breaks or opportunities for socializing with friends.
However, encounters with drunk employers or uncomfortable situations with male employers raised doubts among babysitters about the worth of their job. Instances where employers arrived home intoxicated or exhibited inappropriate behavior made babysitters feel uneasy.
Babysitters often faced challenges not only from potential dangers but also from the children they were responsible for. Handling multiple children simultaneously could be overwhelming: dealing with fights or disagreements between children, managing children's emotions, especially crying or bedtime resistance, and soothing upset children or enforcing bedtime routines even when children resisted or expressed fears about sleeping alone. Some children engaged in physical or verbal aggression. Boys, in particular, were perceived as more challenging to manage, with some exhibiting dangerous behavior like wielding knives or engaging in destructive activities. Babysitters used various strategies to handle difficult situations, such as sending children to their rooms or threatening to call parents. However, these methods were not always effective, leaving babysitters feeling frustrated or inadequate.
Despite their best efforts, babysitters sometimes faced criticism or blame from parents or social workers, who focused more on describing incidents as "abuse" rather than considering the babysitter's intentions or the challenging circumstances they faced. Babysitters often felt pressure to maintain control and appear responsible in the eyes of their employers, fearing they would be seen as inadequate or incapable. Many girls identified with the children they cared for and hesitated to report misbehavior to parents, fearing repercussions. Despite expert advice to communicate openly with parents about challenges faced while babysitting, sitters were reluctant to present a laundry list of wrongdoing. Some employers were understanding, but others automatically believed their children's reports, leading to unjust distrust of babysitters.
The portrayal of teenage babysitters in popular culture further reinforced negative stereotypes. In stories like "The Beast and the Babysitter" babysitters were depicted as incompetent or disinterested, reinforcing unfair cultural scripts about female adolescence and babysitting. The reluctance of babysitters to engage fully with their responsibilities perpetuated these stereotypes.
Babysitting and race:
History
Before the Civil War, enslaved Black women cared for the children of white women, even feeding babies using their own breast milk.[4] In 1863, after the Emancipation Proclamation, African American women began to dominate the domestic workforce due to limited employment opportunities and segregation. These women worked long hours for little pay, often receiving hand-me-downs instead. By 1870, over half of employed women were engaged in "domestic and personal service," reflecting the significant presence of African American women in this sector.
In 1901, a group of domestic workers formed the Working Women's Association in response to mistreatment. However, the association disbanded because of low membership. By the 1930s, domestic workers in Chicago faced issues such as employers offering work to the lowest bidder at designated locations known as "slave pens."
In 1934, Dora Lee Jones established the Domestic Workers Union, advocating for wage and hour laws and inclusion in the Social Security Act. However, in 1935, domestic workers were explicitly excluded from the National Labor Relations Act, which protects employees' rights to form unions. The Fair Labor Standards Act passed in 1938, introduced minimum wage and overtime pay, but domestic workers were excluded.
In 1964, the Civil Rights Act prohibited employment discrimination, but most domestic workers were not covered as it applied only to employers with 15 or more employees. Similarly, the Age Discrimination in Employment Act of 1967 protected older workers but excluded many domestic workers. Amendments to the Fair Labor Standards Act in 1974 provided protections like minimum wage and overtime pay, but those caring for the elderly or children were again excluded.
Currently, 20% of childcare workers are Black women.
During the post-Civil War era and the Jim Crow period, the mammy stereotype surfaced as one of the most pervasive and enduring images of Black domestic workers. Portrayed prominently in popular culture, such as in 1939's "Gone with the Wind", the mammy caricature depicted Black women in domestic servitude roles. They were typically portrayed as kind-hearted, overweight, and outspoken. This stereotype romanticized the Antebellum South and ignored the actual experiences of Black women and domestic workers.
State and federal laws (21st century)
In 2007, the Supreme Court case Long Island Care at Home Ltd. v. Coke highlighted the lack of overtime pay entitlement for domestic worker Evelyn Coke, despite her extensive hours of labor. This case underscored the challenges faced by domestic workers regarding fair compensation.
Also in 2007, the National Domestic Workers Alliance (NDWA) became a leading advocate for domestic workers' rights, aiming to establish a domestic workers' bill of rights. This began in New York State and resulted in the signing of the New York Domestic Workers Bill of Rights into law in 2010.
In 2011, the International Labor Organization established Fair Labor Laws to protect domestic workers globally, although the United States has not ratified this convention. Local initiatives emerged to address these issues such as in 2014 when Chicago implemented its first minimum wage ordinance, explicitly including domestic workers.
In 2016, Illinois passed the Domestic Worker Bill of Rights following a five-year campaign by the Illinois Domestic Workers Coalition. Additionally, Cook County passed a minimum wage law covering domestic workers.
By 2019, nine states had enacted legislation granting labor rights to domestic workers. On July 15, 2019, U.S. Senator Kamala D. Harris and U.S. Representative Pramila Jayapal introduced the Domestic Workers Bill of Rights at the federal level. This bill aims to ensure the rights and protections of domestic workers nationwide, but it has not yet passed into law.
Cost:
United States
According to the caregiver-finding platform UrbanSitter, the national average babysitting cost in 2022 was $22.68 an hour for one child, $25.37 an hour for two, and $27.70 an hour for three children. This rate has increased by 21 percent since 2019.
Etymology
The term "baby sitter" first appeared in 1937, while the verb form "baby-sit" was first recorded in 1947. The American Heritage College Dictionary notes, "One normally would expect the agent noun babysitter with its -er suffix to come from the verb baby-sit, as diver comes from dive, but in fact babysitter is first recorded in 1937, ten years earlier than the first appearance of baby-sit. Thus the verb was derived from the agent noun rather than the other way around and represented a good example of back-formation. The use of the word "sit" to refer to a person tending to a child is recorded from 1800. The term may have originated from the caretaker "sitting on" the baby in one room while the parents were entertaining or busy in another. It is also theorized that the term may come from hens "sitting" on their eggs, thus "caring for" their chicks.
International variations in definition
In British English, the term refers only to caring for a child for a few hours, on an informal basis, and usually in the evening when the child is asleep for most of the time.
In American English, the term can include caring for a child for all or most of the day and on a regular or more formal basis, which would be described as childminding in British English.
In India and Pakistan, a babysitter or nanny, known as an ayah or aya, is hired on a longer-term contract basis to look after a child regardless of the presence of the parents.
Additional Information
Babysitters can be lifesavers for parents – and children like them too! Whether it’s for a short break or a longer period of time, having a babysitter is a great resource, but one that requires some planning and preparation.
Establishing clear parameters with your babysitter in advance is critical. Before selecting a babysitter, talk with your child about what the babysitting will look like. If your child is old enough, include her in the research and selection of the babysitter. If it’s the first time your child is with a babysitter, you’ll want to take extra time to talk through the experience with your child. Talk about what to expect, when you’ll be back, and what kinds of activities they may do together.
When looking for a babysitter, ask friends and family for referrals or recommended, trusted people. Or turn to reputable online listing resources. You should always plan to meet with a potential babysitter ahead of time for an “interview” to determine if they’re a match for your child and your needs. Ask for and check references to confirm the babysitter’s experience and skills.
Not sure what to ask? Here are some general questions you can ask a potential babysitter:
* Have you babysat children who are my child’s age?
* Why do you want to babysit?
* How much do you charge per hour?
* Do you have any additional training or certificates (CPR training, early childhood classes, etc.)?
* What do you plan to do with my child while you are babysitting?
Once you have selected a babysitter, have your child meet the babysitter before you leave. If possible, invite the babysitter over when you are there so that you, your child, and the babysitter can play and interact together. It is important for your child to see you with the babysitter to know that the babysitter is a safe, trusted person to be with.
Before leaving your babysitter with your child, be sure to cover these five things together first:
* Food. Discuss what types of foods – and how much – your child eats. Offering favorite snacks and dishes can help your child feel more comfortable with the babysitter.
* Sleep. Talk about when and how long your child sleeps so that the same sleep schedule can be followed. Letting your babysitter know what to look for when your child is getting sleepy (rubbing eyes, asking for milk, etc.) is also helpful.
* Discipline. Talking about and showing your babysitter how you handle your child’s behavior is very important to help shape your child’s behavior and help your child develop positive skills with anyone they are with. Consistency is important.
* Routines. If you have a particular way you put your child to sleep (reading a book, singing a goodnight song, etc.), teach the babysitter your routine so she can do the same with your child. The familiarity of your routine will be both comforting and helpful.
* Activities. Plan ahead to make sure your child will engage in fun activities that you approve of while under the babysitter’s care. For example, bring along your child’s favorite books or toys and talk about your child’s favorite things to do. These simple steps will not only help your babysitter know what to do with your child, but will also build a positive association between the two of them.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Offline
2315) Weather
Gist
Weather is the state of the atmosphere, including temperature, atmospheric pressure, wind, humidity, precipitation, and cloud cover. It differs from climate, which is all weather conditions for a particular location averaged over about 30 years.
Summary
The weather concerns everyone and has some effect on nearly every human activity. It occurs within the atmosphere, the mixture of gases that completely envelops Earth. Weather is defined as the momentary, day-to-day state of the atmosphere over any place on Earth’s surface. Climate, on the other hand, refers to weather averaged over a long period. The basic atmospheric conditions that make up the weather include precipitation, humidity, temperature, pressure, cloudiness, and wind.
The air is constantly in movement. There also is a continuous exchange of heat and moisture between the atmosphere and Earth’s land and sea surfaces. These ever-changing conditions can be scientifically analyzed. The science of observing and predicting the weather is known as meteorology.
The Atmosphere and Its General Circulation
Air is compressed by its own weight, so that about half the bulk of the atmosphere is squeezed into the bottom 3.5 miles (5.6 kilometers). The bottom layer of the atmosphere, the troposphere, is the site of almost all the world’s weather. Above its turbulence and storminess is the calmer stratosphere, which has little moisture and few clouds. (See also Earth, “Atmosphere.”)
Underlying the great variety of atmospheric motions is a pattern of large-scale air movement over Earth. The basic cause of these planetary winds, or general circulation of the atmosphere, is that the Sun heats the air over the Equator more than it does the air over the poles. The heated air over the equatorial regions rises and flows generally poleward—in both the Northern and Southern hemispheres. In the polar regions the air cools and sinks. From time to time it flows back toward the Equator.
The upward movement of air results in a belt of low pressure in the tropical regions astride the Equator. On either side—at about 30° N latitude and 30° S latitude—is a belt of high pressure. It is formed as the upper-level flow of air from the Equator sinks to the surface. From each of these subtropical high-pressure belts, surface winds blow outward, toward both the Equator and the poles. The Coriolis effect—a result of Earth’s rotation—deflects the winds to the right of the winds’ direction of motion in the Northern Hemisphere and to the left of their direction in the Southern Hemisphere. This produces a belt of tropical easterly winds (winds blowing from east to west). It also produces two belts of midlatitude westerly winds (blowing from west to east), one in each hemisphere.
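For readers who want a number to attach to the deflection described above, the local strength of the Coriolis effect is commonly summarized by the Coriolis parameter f = 2Ω sin(latitude), where Ω is Earth's rotation rate (about 7.292 × 10^-5 rad/s). The short sketch below simply evaluates that formula at a few latitudes and is illustrative only.

```python
import math

OMEGA = 7.292e-5  # Earth's rotation rate in rad/s

def coriolis_parameter(latitude_deg: float) -> float:
    """f = 2 * Omega * sin(latitude); zero at the Equator, largest at the poles."""
    return 2.0 * OMEGA * math.sin(math.radians(latitude_deg))

for lat in (0, 30, 60, 90):
    print(f"{lat:2d} degrees latitude: f = {coriolis_parameter(lat):.2e} 1/s")
```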
Like the tropical easterlies, or trade winds, the surface winds from the poles are also deflected to the west. Where these polar easterlies meet the westerly winds in each hemisphere—at about 60° latitude—a belt of low pressure girdles Earth.
This arrangement of Earth’s wind and pressure belts varies somewhat with the time of the year. They shift northward during the Northern Hemisphere’s summer. They shift southward during the Southern Hemisphere’s summer. Both the continuity of the pressure belts and the prevailing directions of the winds are also modified greatly by the differing rates at which Earth’s land and water surfaces exchange heat and moisture with the atmosphere.
Very large-scale and long-lasting changes in wind and pressure patterns also sometimes occur. Most of the time, for example, the eastern Pacific Ocean near South America has relatively cool water temperatures and high pressure. The western Pacific near Australia and Indonesia usually has warmer water and lower pressure. This results in dry conditions in Peru and Chile and wetter weather in Indonesia and eastern Australia. In some years, however, the pattern reverses as part of a phenomenon called El Niño/Southern Oscillation (ENSO), which strongly affects weather in most parts of the world. A buildup of warm water in the eastern Pacific then brings heavy rains to Peru, while Australia experiences drought. The easterly trade winds in the Pacific weaken and may even reverse. The warm ocean water also strengthens winter storms that move onshore in the southwestern United States. As a result, there is heavy rain in southern California and much of the southern United States.
Weather is the state of the atmosphere at a particular place during a short period of time. It involves such atmospheric phenomena as temperature, humidity, precipitation (type and amount), air pressure, wind, and cloud cover. Weather differs from climate in that the latter includes the synthesis of weather conditions that have prevailed over a given area during a long time period—generally 30 years. For a full discussion of the elements and origins of weather, see climate. For a treatment of how conditions in space affect satellites and other technologies, see space weather.
Weather, as most commonly defined, occurs in the troposphere, the lowest region of the atmosphere that extends from the Earth’s surface to 6–8 km (4–5 miles) at the poles and to about 17 km (11 miles) at the Equator. Weather is largely confined to the troposphere since this is where almost all clouds occur and almost all precipitation develops. Phenomena occurring in higher regions of the troposphere and above, such as jet streams and upper-air waves, significantly affect sea-level atmospheric-pressure patterns—the so-called highs and lows—and thereby the weather conditions at the terrestrial surface. Geographic features, most notably mountains and large bodies of water (e.g., lakes and oceans), also affect weather. Recent research, for example, has revealed that ocean-surface temperature anomalies are a potential cause of atmospheric temperature anomalies in successive seasons and at distant locations. One manifestation of such weather-affecting interactions between the ocean and the atmosphere is what scientists call the El Niño/Southern Oscillation (ENSO). It is believed that ENSO is responsible not only for unusual weather events in the equatorial Pacific region (e.g., the exceedingly severe drought in Australia and the torrential rains in western South America in 1982–83) but also for those that periodically occur in the mid-latitudes (as, for example, the record-high summer temperatures in western Europe and unusually heavy spring rains in the central United States in 1982–83). The ENSO event of 1997–98 was associated with winter temperatures well above average in much of the United States. The ENSO phenomenon appears to influence mid-latitude weather conditions by modulating the position and intensity of the polar-front jet stream.
Weather has a tremendous influence on human settlement patterns, food production, and personal comfort. Extremes of temperature and humidity cause discomfort and may lead to the transmission of disease; heavy rain can cause flooding, displacing people and interrupting economic activities; thunderstorms, tornadoes, hail, and sleet storms may damage or destroy crops, buildings, and transportation routes and vehicles. Storms may even kill or injure people and livestock. At sea and along adjacent coastal areas, tropical cyclones (also called hurricanes or typhoons) can cause great damage through excessive rainfall and flooding, winds, and wave action to ships, buildings, trees, crops, roads, and railways, and they may interrupt air service and communications. Heavy snowfall and icy conditions can impede transportation and increase the frequency of accidents. The long absence of rainfall, by contrast, can cause droughts and severe dust storms when winds blow over parched farmland, as with the “dustbowl” conditions of the U.S. Plains states in the 1930s.
The variability of weather phenomena has resulted in a long-standing human concern with predictions of future weather conditions and weather forecasting. In early historical times, severe weather was ascribed to annoyed or malevolent divinities. Since the mid-19th century, scientific weather forecasting has evolved, using the precise measurement of air pressure, temperature, humidity, and wind direction and speed to predict changing weather. The development of weather satellites since the 1960s has enabled meteorologists to track the movement of cyclones, anticyclones, their associated fronts, and storms worldwide. In addition, the use of radar permits the monitoring of precipitation, clouds, and tropospheric winds. To predict the weather one week or more in advance, computers combine weather models, which are based on the principles of physics, with measured weather variables, such as current temperature and wind speed. These developments have improved the accuracy of local forecasts and have led to extended and long-range forecasts, although the high variability of weather in the mid-latitudes makes longer-range forecasts less accurate. In tropical regions, by contrast, daily weather variations are small, with regularly occurring phenomena and perceptible change associated more with seasonal cycles (dry weather and monsoons). For some tropical areas, tropical cyclones themselves are one of the more influential weather variables.
Details
Weather is the state of the atmosphere, describing for example the degree to which it is hot or cold, wet or dry, calm or stormy, clear or cloudy. On Earth, most weather phenomena occur in the lowest layer of the planet's atmosphere, the troposphere, just below the stratosphere. Weather refers to day-to-day temperature, precipitation, and other atmospheric conditions, whereas climate is the term for the averaging of atmospheric conditions over longer periods of time. When used without qualification, "weather" is generally understood to mean the weather of Earth.
Weather is driven by air pressure, temperature, and moisture differences between one place and another. These differences can occur due to the Sun's angle at any particular spot, which varies with latitude. The strong temperature contrast between polar and tropical air gives rise to the largest scale atmospheric circulations: the Hadley cell, the Ferrel cell, the polar cell, and the jet stream. Weather systems in the middle latitudes, such as extratropical cyclones, are caused by instabilities of the jet streamflow. Because Earth's axis is tilted relative to its orbital plane (called the ecliptic), sunlight is incident at different angles at different times of the year. On Earth's surface, temperatures usually range ±40 °C (−40 °F to 104 °F) annually. Over thousands of years, changes in Earth's orbit can affect the amount and distribution of solar energy received by Earth, thus influencing long-term climate and global climate change.
Surface temperature differences in turn cause pressure differences. Higher altitudes are cooler than lower altitudes, as most atmospheric heating is due to contact with the Earth's surface while radiative losses to space are mostly constant. Weather forecasting is the application of science and technology to predict the state of the atmosphere for a future time and a given location. Earth's weather system is a chaotic system; as a result, small changes to one part of the system can grow to have large effects on the system as a whole. Human attempts to control the weather have occurred throughout history, and there is evidence that human activities such as agriculture and industry have modified weather patterns.
Studying how the weather works on other planets has been helpful in understanding how weather works on Earth. A famous landmark in the Solar System, Jupiter's Great Red Spot, is an anticyclonic storm known to have existed for at least 300 years. However, the weather is not limited to planetary bodies. A star's corona is constantly being lost to space, creating what is essentially a very thin atmosphere throughout the Solar System. The movement of mass ejected from the Sun is known as the solar wind.
Causes
On Earth, common weather phenomena include wind, cloud, rain, snow, fog and dust storms. Less common events include natural disasters such as tornadoes, hurricanes, typhoons and ice storms. Almost all familiar weather phenomena occur in the troposphere (the lower part of the atmosphere). Weather does occur in the stratosphere and can affect weather lower down in the troposphere, but the exact mechanisms are poorly understood.
Weather occurs primarily due to air pressure, temperature and moisture differences from one place to another. These differences can occur due to the sun angle at any particular spot, which varies with latitude. In other words, the farther from the tropics one lies, the lower the sun angle is, which causes those locations to be cooler because the sunlight is spread over a greater surface area. The strong temperature contrast between polar and tropical air gives rise to the large scale atmospheric circulation cells and the jet stream. Weather systems in the mid-latitudes, such as extratropical cyclones, are caused by instabilities of the jet stream flow (see baroclinity). Weather systems in the tropics, such as monsoons or organized thunderstorm systems, are caused by different processes.
Because the Earth's axis is tilted relative to its orbital plane, sunlight is incident at different angles at different times of the year. In June the Northern Hemisphere is tilted towards the Sun, so at any given Northern Hemisphere latitude sunlight falls more directly on that spot than in December (see Effect of sun angle on climate). This effect causes seasons. Over thousands to hundreds of thousands of years, changes in Earth's orbital parameters affect the amount and distribution of solar energy received by the Earth and influence long-term climate.
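As a rough numerical illustration of why the tilt produces seasons (a sketch added here, not from the source: it assumes the standard geometric rule that the noon Sun stands about 90° minus the difference between the latitude and the solar declination above the horizon, with the declination near +23.44° at the June solstice and −23.44° at the December solstice, and that the energy falling on level ground scales with the sine of that elevation):

import math

def noon_sun_elevation(latitude_deg, declination_deg):
    # Approximate noon solar elevation (degrees) for a given latitude and solar declination.
    return 90.0 - abs(latitude_deg - declination_deg)

def relative_insolation(elevation_deg):
    # Sunlight on a horizontal surface scales roughly with the sine of the Sun's elevation.
    return max(0.0, math.sin(math.radians(elevation_deg)))

latitude = 45.0  # a hypothetical Northern Hemisphere location
for season, declination in [("June solstice", 23.44), ("December solstice", -23.44)]:
    elevation = noon_sun_elevation(latitude, declination)
    print(f"{season}: noon Sun {elevation:.1f} deg high, relative insolation {relative_insolation(elevation):.2f}")

At 45° N this gives a noon Sun about 68° high in June but only about 22° high in December, so level ground receives roughly two and a half times as much midday solar energy in June as in December.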
The uneven solar heating (the formation of zones of temperature and moisture gradients, or frontogenesis) can also be due to the weather itself in the form of cloudiness and precipitation. Higher altitudes are typically cooler than lower altitudes: the surface absorbs most of the incoming solar heating and warms the air from below, and rising air cools with height at the adiabatic lapse rate. In some situations, the temperature actually increases with height. This phenomenon is known as an inversion and can cause mountaintops to be warmer than the valleys below. Inversions can lead to the formation of fog and often act as a cap that suppresses thunderstorm development. On local scales, temperature differences can occur because different surfaces (such as oceans, forests, ice sheets, or human-made objects) have differing physical characteristics such as reflectivity, roughness, or moisture content.
Surface temperature differences in turn cause pressure differences. A hot surface warms the air above it causing it to expand and lower the density and the resulting surface air pressure. The resulting horizontal pressure gradient moves the air from higher to lower pressure regions, creating a wind, and the Earth's rotation then causes deflection of this airflow due to the Coriolis effect. The simple systems thus formed can then display emergent behaviour to produce more complex systems and thus other weather phenomena. Large scale examples include the Hadley cell while a smaller scale example would be coastal breezes.
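To make the pressure-to-wind link concrete, here is a minimal sketch using the geostrophic approximation, the standard balance between the pressure-gradient force and the Coriolis force; the formula and the numbers are illustrative additions rather than anything stated in the text above:

import math

rho = 1.2                            # near-surface air density, kg/m^3 (typical value)
omega = 7.292e-5                     # Earth's rotation rate, rad/s
latitude = math.radians(45.0)        # a hypothetical mid-latitude location
f = 2 * omega * math.sin(latitude)   # Coriolis parameter, 1/s

delta_p = 400.0                      # a pressure difference of 4 millibars (400 Pa)...
distance = 500_000.0                 # ...across 500 kilometers
pressure_gradient = delta_p / distance          # Pa per meter

wind_speed = pressure_gradient / (rho * f)      # geostrophic wind speed, m/s
print(f"Geostrophic wind of roughly {wind_speed:.1f} m/s")

So a pressure difference of only 4 millibars across 500 kilometers is enough to sustain a wind of roughly 6 to 7 meters per second at mid-latitudes.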
The atmosphere is a chaotic system. As a result, small changes to one part of the system can accumulate and magnify to cause large effects on the system as a whole. This atmospheric instability makes weather forecasting less predictable than tidal waves or eclipses. Although it is difficult to accurately predict weather more than a few days in advance, weather forecasters are continually working to extend this limit through meteorological research and refining current methodologies in weather prediction. However, it is theoretically impossible to make useful day-to-day predictions more than about two weeks ahead, imposing an upper limit to potential for improved prediction skill.
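The classic illustration of this sensitivity is the Lorenz (1963) convection model, a drastically simplified set of equations derived from atmospheric convection. The hypothetical sketch below starts two states that differ by one part in a million and lets them drift completely apart, which is essentially why day-to-day forecasts lose skill beyond about two weeks:

def lorenz_step(x, y, z, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One small forward-Euler step of the Lorenz equations (adequate for illustration).
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)
b = (1.000001, 1.0, 1.0)          # an almost identical starting state
for step in range(30000):         # about 30 model time units
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 10000 == 9999:
        print(f"t = {(step + 1) / 1000:4.0f}: x_a = {a[0]:8.3f}, x_b = {b[0]:8.3f}")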
Shaping the planet Earth
Weather is one of the fundamental processes that shape the Earth. The process of weathering breaks down the rocks and soils into smaller fragments and then into their constituent substances. During precipitation, the water droplets absorb and dissolve carbon dioxide from the surrounding air. This causes the rainwater to be slightly acidic, which aids the erosive properties of water. The released sediment and chemicals are then free to take part in chemical reactions that can affect the surface further (such as acid rain), while sodium and chloride ions (salt) are carried to and deposited in the seas and oceans. Over time, geological forces may reform the sediment into other rocks and soils. In this way, weather plays a major role in the erosion of the surface.
Effect on humans
Weather, seen from an anthropological perspective, is something all humans in the world constantly experience through their senses, at least while being outside. There are socially and scientifically constructed understandings of what weather is, what makes it change, the effect it has on humans in different situations, and so on. Therefore, weather is something people often communicate about. The National Weather Service publishes an annual report of fatalities, injuries, and total damage costs, including crop and property damage. It gathers this data via National Weather Service offices located throughout the 50 U.S. states as well as Puerto Rico, Guam, and the Virgin Islands. In 2019, tornadoes had the greatest impact on humans, causing 42 fatalities and over 3 billion dollars in crop and property damage.
Effects on populations
The weather has played a large and sometimes direct part in human history. Aside from climatic changes that have caused the gradual drift of populations (for example the desertification of the Middle East, and the formation of land bridges during glacial periods), extreme weather events have caused smaller scale population movements and intruded directly in historical events. One such event is the saving of Japan from invasion by the Mongol fleet of Kublai Khan by the Kamikaze winds in 1281. French claims to Florida came to an end in 1565 when a hurricane destroyed the French fleet, allowing Spain to conquer Fort Caroline. More recently, Hurricane Katrina redistributed over one million people from the central Gulf coast elsewhere across the United States, becoming the largest diaspora in the history of the United States.
The Little Ice Age caused crop failures and famines in Europe. During the period known as the Grindelwald Fluctuation (1560–1630), volcanic forcing events seem to have led to more extreme weather events. These included droughts, storms and unseasonal blizzards, as well as causing the Swiss Grindelwald Glacier to expand. The 1690s saw the worst famine in France since the Middle Ages. Finland suffered a severe famine in 1696–1697, during which about one-third of the Finnish population died.
Forecasting
Weather forecasting is the application of science and technology to predict the state of the atmosphere for a future time and a given location. Human beings have attempted to predict the weather informally for millennia, and formally since at least the nineteenth century. Weather forecasts are made by collecting quantitative data about the current state of the atmosphere and using scientific understanding of atmospheric processes to project how the atmosphere will evolve.
Weather forecasting was once an all-human endeavor based mainly upon changes in barometric pressure, current weather conditions, and sky condition; today, forecast models are used to determine future conditions. Human input is still required to pick the best possible forecast model to base the forecast upon, a task that involves many disciplines such as pattern recognition skills, teleconnections, knowledge of model performance, and knowledge of model biases.
The chaotic nature of the atmosphere, the massive computational power required to solve the equations that describe the atmosphere, the error involved in measuring the initial conditions, and an incomplete understanding of atmospheric processes mean that forecasts become less accurate as the difference between the current time and the time for which the forecast is being made (the range of the forecast) increases. The use of ensembles and model consensus helps to narrow the error and pick the most likely outcome.
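A toy example of the ensemble idea, with purely hypothetical numbers: run the model several times from slightly different starting conditions, then use the ensemble mean as the forecast and the spread as a measure of confidence.

import statistics

# Hypothetical next-day temperature forecasts (deg C) from several model runs
# started from slightly perturbed initial conditions.
ensemble = [21.8, 23.1, 22.4, 24.0, 22.7, 23.3]

mean_forecast = statistics.mean(ensemble)   # the "consensus" forecast
spread = statistics.stdev(ensemble)         # a large spread signals low confidence
print(f"Ensemble mean: {mean_forecast:.1f} C, spread: {spread:.1f} C")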
There are a variety of end users of weather forecasts. Weather warnings are important forecasts because they are used to protect life and property. Forecasts based on temperature and precipitation are important to agriculture, and therefore to commodity traders within stock markets. Temperature forecasts are used by utility companies to estimate demand over coming days.
In some areas, people use weather forecasts to determine what to wear on a given day. Since outdoor activities are severely curtailed by heavy rain, snow and the wind chill, forecasts can be used to plan activities around these events and to plan ahead to survive through them.
Tropical weather forecasting is different from that at higher latitudes. The sun shines more directly on the tropics than on higher latitudes (at least on average over a year), which makes the tropics warm (Stevens 2011). And, the vertical direction (up, as one stands on the Earth's surface) is perpendicular to the Earth's axis of rotation at the equator, while the axis of rotation and the vertical are the same at the pole; this causes the Earth's rotation to influence the atmospheric circulation more strongly at high latitudes than low latitudes. Because of these two factors, clouds and rainstorms in the tropics can occur more spontaneously compared to those at higher latitudes, where they are more tightly controlled by larger-scale forces in the atmosphere. Because of these differences, clouds and rain are more difficult to forecast in the tropics than at higher latitudes. On the other hand, the temperature is easily forecast in the tropics, because it does not change much.
Additional Information
One of the first things you probably do every morning is look out the window to see what the weather is like. Looking outside and listening to the day’s forecast helps you decide what clothes you will wear and maybe even what you will do throughout the day. If you don’t have school and the weather looks sunny, you might visit the zoo or go on a picnic. A rainy day might make you think about visiting a museum or staying home to read.
The weather affects us in many ways. Day-to-day changes in weather can influence how we feel and the way we look at the world. Severe weather, such as tornadoes, hurricanes, and blizzards, can disrupt many people’s lives because of the destruction they cause.
The term “weather” refers to the temporary conditions of the atmosphere, the layer of air that surrounds the Earth. We usually think of weather in terms of the state of the atmosphere in our own part of the world. But weather works like dropping a pebble in water—the ripples eventually affect water far away from where the pebble was dropped. The same happens with weather around the globe. Weather in your region will eventually affect the weather hundreds or thousands of kilometers away. For example, a snowstorm around Winnipeg, Manitoba, Canada, might eventually reach Chicago, Illinois, as it moves southeast through the U.S.
Weather doesn’t just stay in one place. It moves, and changes from hour to hour or day to day. Over many years, certain conditions become familiar weather in an area. The average weather in a specific region, as well as its variations and extremes over many years, is called climate. For example, the city of Las Vegas in the U.S. state of Nevada is generally dry and hot. Honolulu, the capital of the U.S. state of Hawaii, is also hot, but much more humid and rainy.
Climate changes, just like weather. However, climate change can take hundreds or even thousands of years. Today, the Sahara Desert in northern Africa is the largest desert in the world. However, several thousand years ago, the climate in the Sahara was quite different. This “Green Sahara” experienced frequent rainy weather.
What Makes Weather
There are six main components, or parts, of weather. They are temperature, atmospheric pressure, wind, humidity, precipitation, and cloudiness. Together, these components describe the weather at any given time. These changing components, along with the knowledge of atmospheric processes, help meteorologists—scientists who study weather—forecast what the weather will be in the near future.
Temperature is measured with a thermometer and refers to how hot or cold the atmosphere is. Meteorologists report temperature two ways: in Celsius (C) and Fahrenheit (F). The United States uses the Fahrenheit system; in other parts of the world, Celsius is used. Almost all scientists measure temperature using the Celsius scale.
Temperature is a relative measurement. An afternoon at 70 degrees Fahrenheit, for example, would seem cool after several days of 95 degrees Fahrenheit, but it would seem warm after temperatures around 32 degrees Fahrenheit. The coldest weather usually happens near the poles, while the warmest weather usually happens near the Equator.
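The two scales are related by a simple formula, F = C × 9/5 + 32. A minimal sketch converting the Fahrenheit values mentioned above:

def fahrenheit_to_celsius(f):
    return (f - 32.0) * 5.0 / 9.0

def celsius_to_fahrenheit(c):
    return c * 9.0 / 5.0 + 32.0

for f in (70, 95, 32):
    print(f"{f} F = {fahrenheit_to_celsius(f):.1f} C")    # 21.1 C, 35.0 C, 0.0 C
print(f"100 C = {celsius_to_fahrenheit(100):.0f} F")       # boiling point of water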
Atmospheric pressure is the weight of the atmosphere overhead. Changes in atmospheric pressure signal shifts in the weather. A high-pressure system usually brings cool temperatures and clear skies. A low-pressure system can bring warmer weather, storms, and rain.
Meteorologists express atmospheric pressure in a unit of measurement called an atmosphere. Atmospheres are measured in millibars or inches of mercury. Average atmospheric pressure at sea level is about one atmosphere (about 1,013 millibars, or 29.9 inches). An average low-pressure system, or cyclone, measures about 995 millibars (29.4 inches). A typical high-pressure system, or anticyclone, usually reaches 1,030 millibars (30.4 inches). The word “cyclone” refers to air that rotates in a circle, like a wheel.
Atmospheric pressure changes with altitude. The atmospheric pressure is much lower at high altitudes. The air pressure on top of Mount Kilimanjaro, Tanzania—which is 5,895 meters (19,341 feet) tall—is 40 percent of the air pressure at sea level. The weather there is also much colder. The weather at the base of Mount Kilimanjaro is tropical, but the top of the mountain has ice and snow.
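Two quick calculations tie these numbers together. This is a sketch using the conversion 1 inch of mercury ≈ 33.86 millibars and the simple exponential barometric formula with an assumed scale height of about 8 kilometers; both are standard approximations added here, not figures from the text itself.

import math

MB_PER_INCH_HG = 33.8639
print(f"1,013 millibars is about {1013 / MB_PER_INCH_HG:.1f} inches of mercury")  # ~29.9

def pressure_at_altitude(altitude_m, sea_level_mb=1013.0, scale_height_m=8000.0):
    # Rough isothermal barometric formula: pressure falls off exponentially with height.
    return sea_level_mb * math.exp(-altitude_m / scale_height_m)

p = pressure_at_altitude(5600)
print(f"Pressure near 5,600 m is about {p:.0f} millibars ({p / 1013:.0%} of sea level)")

The result, roughly half of sea-level pressure at about 5,600 meters, agrees with the statement earlier in this article that about half the bulk of the atmosphere lies in the bottom 3.5 miles.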
Wind is the movement of air. Wind forms because of differences in temperature and atmospheric pressure between nearby regions. Winds tend to blow from areas of high pressure, where it’s colder, to areas of low pressure, where it’s warmer.
In the upper atmosphere, strong, fast winds called jet streams occur at altitudes of 8 to 15 kilometers (5 to 9 miles) above the Earth. They usually blow from about 129 to 225 kilometers per hour (80 to 140 miles per hour), but they can reach more than 443 kilometers per hour (275 miles per hour). These upper-atmosphere winds help push weather systems around the globe.
Wind can be influenced by human activity. Chicago, Illinois, is nicknamed the “Windy City.” After the Great Chicago Fire of 1871 destroyed the city, city planners rebuilt it using a grid system. This created wind tunnels. Winds are forced into narrow channels, picking up speed and strength. The Windy City is a result of natural and manmade winds.
Humidity refers to the amount of water vapor in the air. Water vapor is a gas in the atmosphere that helps make clouds, rain, or snow. Humidity is usually expressed as relative humidity, or the percentage of the maximum amount of water air can hold at a given temperature. Cool air holds less water than warm air. At a relative humidity of 100 percent, air is said to be saturated, meaning the air cannot hold any more water vapor. Excess water vapor will fall as precipitation. Clouds and precipitation occur when air cools below its saturation point. This usually happens when warm, humid air cools as it rises.
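The rule that warm air holds more water can be made quantitative with the Magnus approximation for saturation vapor pressure, a widely used empirical formula; the coefficients below are one common set, and the temperatures are illustrative:

import math

def saturation_vapor_pressure(temp_c):
    # Magnus approximation: saturation vapor pressure in hectopascals (millibars).
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

for t in (0, 10, 20, 30):
    print(f"{t:3d} C: saturation vapor pressure about {saturation_vapor_pressure(t):5.1f} hPa")

# Relative humidity compares the actual vapor pressure with the saturation value.
actual = saturation_vapor_pressure(15.0)                    # air saturated at 15 C...
relative_humidity = 100 * actual / saturation_vapor_pressure(25.0)
print(f"...holds about {relative_humidity:.0f}% relative humidity once warmed to 25 C")

The saturation value roughly doubles for every 10 °C of warming, which is why cooling moist air, for example by lifting it, quickly pushes it to 100 percent relative humidity and cloud formation.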
The most humid places on Earth are islands near the Equator. Singapore, for instance, is humid year-round. The warm air is continually saturated with water from the Indian Ocean.
Clouds come in a variety of forms. Not all of them produce precipitation. Wispy cirrus clouds, for example, usually signal mild weather. Other kinds of clouds can bring rain or snow. A blanketlike cover of nimbostratus clouds produces steady, extended precipitation. Enormous cumulonimbus clouds, or thunderheads, release heavy downpours. Cumulonimbus clouds can produce thunderstorms and tornadoes as well.
Clouds can affect the amount of sunlight reaching the Earth’s surface. Cloudy days are cooler than clear ones because clouds prevent more of the sun’s radiation from reaching the Earth’s surface. The opposite is true at night—then, clouds act as a blanket, keeping the Earth warm.
Weather Systems
Cloud patterns indicate the presence of weather systems, which produce most of the weather we are familiar with: rain, heat waves, cold snaps, humidity, and cloudiness. Weather systems are simply the movement of warm and cold air across the globe. These movements are known as low-pressure systems and high-pressure systems.
High-pressure systems are rotating masses of cool, dry air. High-pressure systems keep moisture from rising into the atmosphere and forming clouds. Therefore, they are usually associated with clear skies. On the other hand, low-pressure systems are rotating masses of warm, moist air. They usually bring storms and high winds.
High-pressure and low-pressure systems continually pass through the mid-latitudes, or areas of the Earth about halfway between the Equator and the poles, so weather there is constantly changing.
A weather map is filled with symbols indicating different types of weather systems. Spirals, for instance, are cyclones or hurricanes, and thick lines are fronts. Cyclones have a spiral shape because they are composed of air that swirls in a circular pattern.
A front is a narrow zone across which temperature, humidity, and wind change abruptly. A front exists at the boundary between two air masses. An air mass is a large volume of air that is mostly the same temperature and has mostly the same humidity.
When a warm air mass moves into the place of a cold air mass, the boundary between them is called a warm front. On a weather map, a warm front is shown as a red band with half-circles pointing in the direction the air is moving.
When a cold air mass takes the place of a warm air mass, the boundary between them is called a cold front. On a weather map, a cold front is shown as a blue band with triangles pointing in the direction the air is moving.
A stationary front develops when warm air and cold air meet and the boundary between the two does not move. On a weather map, a stationary front is shown as alternating red half-circles and blue triangles, pointing in opposite directions.
When a cold front overtakes a warm front, the new front is called an occluded front. On a weather map, an occluded front is shown as a purple band with half-circles and triangles pointing in the direction the air is moving. Cold fronts are able to overtake warm fronts because they move faster.
History of Weather Forecasting
Meteorology is the science of forecasting weather. Weather forecasting has been important to civilizations for thousands of years. Agriculture relies on accurate weather forecasting: when to plant, when to irrigate, when to harvest. Ancient cultures—from the Aztecs of Mesoamerica to the Egyptians in Africa and Indians in Asia—became expert astronomers and predictors of seasonal weather patterns.
In all of these cultures, weather forecasting became associated with religion and spirituality. Weather such as rain, drought, wind, and cloudiness were associated with a deity, or god. These deities were worshipped in order to ensure good weather. Rain gods and goddesses were particularly important, because rain influenced agriculture and construction projects. Tlaloc (Aztec), Set (Egyptian), and Indra (India), as well as Thor (Norse), Zeus (Greek), and Shango (Yoruba), are only some gods associated with rain, thunder, and lightning.
Developments in the 17th and 18th centuries made weather forecasting more accurate. The 17th century saw the invention of the thermometer, which measures temperature, and the barometer, which measures air pressure. In the late 17th century, Sir Isaac Newton explained the physics of gravity and motion, and later scientists built on this foundation to develop thermodynamics. These principles guided the science of meteorology into the modern age. Scientists were able to predict the impact of high-pressure systems and low-pressure systems, as well as such weather events as storm surges, floods, and tornadoes.
Since the late 1930s, one of the main tools for observing general conditions of the atmosphere has been the radiosonde balloon, which sends information needed for forecasting back to Earth. Twice each day, radiosondes are released into the atmosphere from about a thousand locations around the world. The U.S. National Weather Service sends up radiosondes from more than 90 weather stations across the country.
A weather station is simply a facility with tools and technology used to forecast the weather. Different types of thermometers, barometers, and anemometers, which measure wind speed, are found at weather stations. Weather stations may also have computer equipment that allows meteorologists to create detailed maps of weather patterns, and technology that allows them to launch weather balloons.
Many weather stations are part of networks. These networks allow meteorologists from different regions and countries to share information on weather patterns and predictions. In the United States, the Citizen Weather Observer Program depends on amateur meteorologists with homemade weather stations and internet connections to provide forecasts across the United States.
The Aircraft Meteorological Data Relay (AMDAR) also assists in gathering weather data directly from the atmosphere. AMDAR uses commercial aircraft to transmit information about the atmosphere as the planes fly through it.
Weather balloons and AMDAR instruments gather information about temperature, pressure, humidity, and wind from very high levels in the atmosphere. Meteorologists input the data to computers and use it to map atmospheric winds and jet streams. They often combine this with data about temperature, humidity, and wind recorded at ground level. These complex weather maps using geographic information system (GIS) technology can calculate how weather systems are moving and predict how they might change.
This type of forecasting is called synoptic forecasting. Synoptic forecasting is getting a general idea of the weather over a large area. It relies on the fact that in certain atmospheric conditions, particular weather conditions are usually produced. For example, meteorologists know that a low-pressure system over the U.S. state of Arizona in winter will bring warm, moist air from the Gulf of Mexico toward Colorado. The high-pressure weather system of the Rocky Mountains drains the water vapor out of the air, resulting in rain. Meteorologists know that heavy snow may result when that warm air mass heads toward Colorado. Businesses, such as ski resorts, rely on such information. Transportation networks also rely on synoptic forecasting.
If meteorologists knew more about how the atmosphere functions, they would be able to make more accurate forecasts from day to day or even from week to week. Making such forecasts, however, would require knowing the temperature, atmospheric pressure, wind speed and direction, humidity, precipitation, and cloudiness at every point on the Earth.
It is impossible for meteorologists to know all this, but they do have some tools that help them accurately forecast weather for a day or two in advance. But because the atmosphere is constantly changing, detailed forecasts for more than a week or two will never be possible. Weather is just too unpredictable.
Weather Satellites
A new era in weather forecasting began on April 1, 1960, when the first weather satellite, TIROS-1, went into orbit. TIROS-1, which stands for Television Infrared Observation Satellite, was launched by NASA from Cape Canaveral, Florida. TIROS-1 was mostly an orbiting television camera, recording and sending back images. It gave meteorologists their first detailed look at clouds from above. With images from TIROS-1, they could track hurricanes and other cyclones moving across the globe.
Since then, meteorologists have depended on weather satellites for the most up-to-date and reliable information on weather patterns. Some satellites have geostationary orbits, meaning they stay over the same spot on Earth because they orbit at the same rate the Earth rotates. Geostationary satellites track the weather over one region. Other satellites circle the Earth in lower, polar orbits, passing over each part of the globe roughly every 12 hours. These satellites can trace weather patterns, such as hurricanes, over the entire part of the globe they orbit.
Weather satellites can give more than just information about clouds and wind speeds. Satellites can see fires, volcanoes, city lights, dust storms, the effects of pollution, boundaries of ocean currents, and other environmental information.
In 2010, the volcano Eyjafjallajokull, in Iceland, erupted. It sent millions of tons of gases and ash into the atmosphere. Weather satellites in orbit above Iceland tracked the ash cloud as it moved across western Europe. Meteorologists were able to warn airlines about the toxic cloud, which darkened the sky and would have made flying dangerous. Hundreds of flights were canceled.
Radiosonde instruments are still more accurate than weather satellites. Satellites, however, can cover a larger area of the Earth. They also cover areas where there are no weather stations, like over the ocean. Satellite data have helped weather forecasts become more accurate, especially in the remote areas of the world that don’t have other ways to get information about the weather.
Radar
Radar is another major tool of weather observation and forecasting. It is used primarily to observe clouds and rain locally. One type of radar, called Doppler radar, is used at weather stations throughout the world. Doppler radar measures changes in wind speed and direction. It provides information within a radius of about 230 kilometers (143 miles). Conventional radar can only show existing clouds and precipitation. With Doppler radar, meteorologists are able to forecast when and where severe thunderstorms and tornadoes are developing.
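The underlying relation is the Doppler shift: the frequency of the returned pulse changes by about twice the radial wind speed divided by the radar wavelength. A minimal sketch with illustrative values, assuming a wavelength of roughly 10 centimeters as is typical of S-band weather radars:

wavelength_m = 0.10        # assumed radar wavelength (S-band, ~10 cm)
radial_speed_mps = 25.0    # raindrops carried toward the radar by a 25 m/s wind

doppler_shift_hz = 2.0 * radial_speed_mps / wavelength_m
print(f"Doppler shift of about {doppler_shift_hz:.0f} Hz")             # ~500 Hz

# And the inverse: from a measured shift back to the radial wind speed.
print(f"Back to speed: {doppler_shift_hz * wavelength_m / 2:.1f} m/s")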
Doppler radar has made air travel safer. It lets air traffic controllers detect severe local conditions, such as microbursts. Microbursts are powerful winds that originate in thunderstorms. They are among the most dangerous weather phenomena a pilot can encounter. If an aircraft attempts to land or take off through a microburst, the suddenly changing wind conditions can cause the craft to lose lift and crash. In the United States alone, airline crashes because of microbursts have caused more than 600 deaths since 1964.
Radar allowed meteorologists in the U.S. to track Hurricane Katrina in 2005, and predict the power of the storm with great accuracy. The National Weather Service and the National Hurricane Center created sophisticated GIS maps using radar, satellite, and balloon data. They were able to predict the site of the storm’s landing, and the strength of the storm over a period of days. A full day before the storm made landfall near Buras, Louisiana, the National Hurricane Center released a public warning: “Some levees in greater New Orleans area could be overtopped.” The National Weather Service warned that the area around New Orleans, Louisiana, “would be uninhabitable for weeks, if not longer. Human suffering incredible by modern standards.”
In fact, both of those forecasts were true. Levees in New Orleans were overtopped and breached by the storm surge. Hundreds of homes, schools, hospitals, and businesses were destroyed. Many areas between New Orleans and Biloxi, Mississippi, were uninhabitable for weeks or months, and rebuilding efforts took years. More than a thousand people died.
Making a Weather Forecast
To produce a weather forecast for a particular area, meteorologists use a computer-generated forecast as a guide. They combine it with additional data from current satellite and radar images. They also rely on their own knowledge of weather processes.
If you follow the weather closely, you, too, can make a reasonable forecast. Radar and satellite images showing precipitation and cloud cover are now common on television, online, and in the daily newspaper.
In addition, you will probably see weather maps showing high- and low-pressure systems and fronts. In addition to bars representing different fronts, weather maps usually show isotherms and isobars. Isotherms are lines connecting areas of the same temperature, and isobars connect regions of the same atmospheric pressure. Weather maps also include information about cloudiness, precipitation, and wind speed and direction.
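For readers curious how such a map is drawn, here is a small self-contained sketch that contours a made-up temperature field (isotherms) and a made-up pressure field (isobars) with matplotlib; the fields are synthetic and purely illustrative.

import numpy as np
import matplotlib.pyplot as plt

# Synthetic fields on a simple longitude/latitude grid.
lon, lat = np.meshgrid(np.linspace(-10, 10, 200), np.linspace(30, 60, 200))
temperature = 30 - 0.5 * (lat - 30) + 2 * np.sin(lon / 3)            # cooler toward the pole
pressure = 1013 + 8 * np.exp(-((lon - 2)**2 + (lat - 45)**2) / 40)   # a high-pressure center

fig, ax = plt.subplots()
isotherms = ax.contour(lon, lat, temperature, colors="red")
isobars = ax.contour(lon, lat, pressure, colors="blue")
ax.clabel(isotherms, fmt="%d C")
ax.clabel(isobars, fmt="%d mb")
ax.set_xlabel("longitude")
ax.set_ylabel("latitude")
plt.show()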
More Accurate Forecasts
Although weather forecasts have become more reliable, there is still a need for greater accuracy. Better forecasts could save industries across the world many billions of dollars each year. Farmers and engineers, in particular, would benefit.
Better frost predictions, for example, could save U.S. citrus growers millions of dollars each year. Citrus fruits such as oranges are very vulnerable to frost—they die in cold, wet weather. With more accurate frost forecasts, citrus farmers could plant when they know the new, tender seedlings wouldn’t be killed by frost. More accurate rain forecasts would enable farmers to plan timely irrigation schedules and avoid floods.
Imperfect weather forecasts cause construction companies to lose both time and money. A construction foreman might call his crew in to work only to have it rain, leaving the crew unable to work. An unexpected cold spell could ruin a freshly poured concrete foundation.
Outdoor activities, such as concerts or sporting events, could be planned with greater accuracy. Sports teams and musicians would not have to reschedule, and fans would not be inconvenienced.
Power companies would also benefit from accurate forecasts. They adjust their systems when they expect extreme temperatures, because people will use their furnaces and air conditioning more on these days. If the forecast predicts a hot, humid day and it turns out to be mild, the power company loses money. The extra electricity or gas it bought doesn’t get used.
Small businesses, too, would benefit from a better forecast. An ice cream store owner, for example, could save her advertising funds for some time in the future if she knew the coming weekend was going to be cool and rainy.
Responding to such needs, meteorologists are working to develop new tools and new methods that will improve their ability to forecast the weather.
More Information
Rain and dull clouds, windy blue skies, cold snow, and sticky heat are very different conditions, yet they are all weather.
Weather is the mix of events that happen each day in our atmosphere. Weather is different in different parts of the world and changes over minutes, hours, days and weeks. Most weather happens in the troposphere, the part of Earth’s atmosphere that is closest to the ground.
Air Pressure and Weather
The weather events happening in an area are controlled by changes in air pressure. Air pressure is caused by the weight of the huge numbers of air molecules that make up the atmosphere. Typically, when air pressure is high, the skies are clear and blue. The high pressure causes air to flow down and fan out when it gets near the ground, preventing clouds from forming. When air pressure is low, air converges and rises, cooling and forming clouds. Remember to bring an umbrella with you on low pressure days because those clouds might cause rain or other types of precipitation.
Predicting Weather
Meteorologists develop local or regional weather forecasts including predictions for several days into the future. The best forecasts take into account the weather events that are happening over a broad region. Knowing where storms are now can help forecasters predict where storms will be tomorrow and the next day. Technology, such as weather satellites and Doppler radar, helps the process of looking over a large area, as does the network of weather observations.
The chaotic nature of the atmosphere means that it will probably always be impossible to predict the weather more than two weeks ahead; however, new technologies combined with more traditional methods are allowing forecasters to develop better and more complete forecasts.
Weather and Climate
The average weather pattern in a place over several decades is called climate. Different regions have different regional climates. For example, the climate of Antarctica is quite different than the climate of a tropical island. Global climate refers to the average of all regional climates.
As global climate changes, weather patterns are expected to change as well. While it is impossible to say whether a particular day’s weather was affected by climate change, it is possible to predict how patterns might change. For example, scientists predict more severe weather events as climate warms. Also, they predict more hot summer days and fewer extreme cold winter days. That doesn’t mean that there will be no more winter weather; in fact, large snowstorms might even become more likely in some areas, because slightly warmer air can carry more water with which to make snowflakes.
Weather is also affected by climate events like El Niño and La Niña (together known as ENSO). Climate events like these affect the weather in many areas of the world, causing extreme events like storms and droughts.
2316) Climate
Gist
Climate is the average weather conditions for a particular location over a long period of time, ranging from months to thousands or millions of years. WMO uses a 30-year period to determine the average climate. The global mean temperature in 2023 was about 1.45°C above the 1850-1900 average.
Climate is the average weather in a given area over a longer period of time. A description of a climate includes information on, e.g. the average temperature in different seasons, rainfall, and sunshine.
Summary
Climate is the set of conditions of the atmosphere at a particular location over a long period of time; it is the long-term summation of the atmospheric elements (and their variations) that, over short time periods, constitute weather. These elements are solar radiation, temperature, humidity, precipitation (type, frequency, and amount), atmospheric pressure, and wind (speed and direction).
From the ancient Greek origins of the word (klíma, “an inclination or slope”—e.g., of the Sun’s rays; a latitude zone of Earth; a clime) and from its earliest usage in English, climate has been understood to mean the atmospheric conditions that prevail in a given region or zone. In the older form, clime, it was sometimes taken to include all aspects of the environment, including the natural vegetation. The best modern definitions of climate regard it as constituting the total experience of weather and atmospheric behaviour over a number of years in a given region. Climate is not just the “average weather” (an obsolete, and always inadequate, definition). It should include not only the average values of the climatic elements that prevail at different times but also their extreme ranges and variability and the frequency of various occurrences. Just as one year differs from another, decades and centuries are found to differ from one another by a smaller, but sometimes significant, amount. Climate is therefore time-dependent, and climatic values or indexes should not be quoted without specifying what years they refer to.
Details
Climate is the long-term weather pattern in a region, typically averaged over 30 years. More rigorously, it is the mean and variability of meteorological variables over a time spanning from months to millions of years. Some of the meteorological variables that are commonly measured are temperature, humidity, atmospheric pressure, wind, and precipitation. In a broader sense, climate is the state of the components of the climate system, including the atmosphere, hydrosphere, cryosphere, lithosphere and biosphere and the interactions between them. The climate of a location is affected by its latitude, longitude, terrain, altitude, land use and nearby water bodies and their currents.
Climates can be classified according to the average and typical variables, most commonly temperature and precipitation. The most widely used classification scheme is the Köppen climate classification. The Thornthwaite system, in use since 1948, incorporates evapotranspiration along with temperature and precipitation information and is used in studying biological diversity and how climate change affects it. The major classifications in Thornthwaite's climate classification are microthermal, mesothermal, and megathermal. Finally, the Bergeron and Spatial Synoptic Classification systems focus on the origin of air masses that define the climate of a region.
Paleoclimatology is the study of ancient climates. Paleoclimatologists seek to explain climate variations for all parts of the Earth during any given geologic period, beginning with the time of the Earth's formation. Since very few direct observations of climate were available before the 19th century, paleoclimates are inferred from proxy variables. They include non-biotic evidence—such as sediments found in lake beds and ice cores—and biotic evidence—such as tree rings and coral. Climate models are mathematical models of past, present, and future climates. Climate change may occur over long and short timescales due to various factors. Recent warming is discussed in terms of global warming, which results in redistributions of biota. For example, as climate scientist Lesley Ann Hughes has written: "a 3 °C [5 °F] change in mean annual temperature corresponds to a shift in isotherms of approximately 300–400 km [190–250 mi] in latitude (in the temperate zone) or 500 m [1,600 ft] in elevation. Therefore, species are expected to move upwards in elevation or towards the poles in latitude in response to shifting climate zones."
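A rough check of the elevation figure in that quotation, assuming a typical environmental lapse rate of about 6.5 °C per kilometer (a standard textbook value, not stated in the quote):

\Delta z \;\approx\; \frac{\Delta T}{\Gamma} \;=\; \frac{3\ ^{\circ}\mathrm{C}}{6.5\ ^{\circ}\mathrm{C\,km^{-1}}} \;\approx\; 0.46\ \mathrm{km} \;\approx\; 500\ \mathrm{m},

which is consistent with the quoted shift of about 500 m in elevation.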
Definition
Climate (from Ancient Greek 'inclination') is commonly defined as the weather averaged over a long period. The standard averaging period is 30 years, but other periods may be used depending on the purpose. Climate also includes statistics other than the average, such as the magnitudes of day-to-day or year-to-year variations. The Intergovernmental Panel on Climate Change (IPCC) 2001 glossary definition is as follows:
"Climate in a narrow sense is usually defined as the "average weather", or more rigorously, as the statistical description in terms of the mean and variability of relevant quantities over a period ranging from months to thousands or millions of years. The classical period is 30 years, as defined by the World Meteorological Organization (WMO). These quantities are most often surface variables such as temperature, precipitation, and wind. Climate in a wider sense is the state, including a statistical description, of the climate system."
The World Meteorological Organization (WMO) describes "climate normals" as "reference points used by climatologists to compare current climatological trends to that of the past or what is considered typical. A climate normal is defined as the arithmetic average of a climate element (e.g. temperature) over a 30-year period. A 30-year period is used as it is long enough to filter out any interannual variation or anomalies such as El Niño–Southern Oscillation, but also short enough to be able to show longer climatic trends."
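A minimal sketch of that arithmetic, in which the station, the thirty July values and the later observation are all hypothetical:

import random
import statistics

# Hypothetical July mean temperatures (deg C) for one station, 1961-1990.
random.seed(42)
july_means_c = [21.0 + random.uniform(-1.5, 1.5) for _ in range(1961, 1991)]

# A climate normal is simply the arithmetic average over the 30-year reference period.
normal_1961_1990 = statistics.mean(july_means_c)
print(f"July normal (1961-1990): {normal_1961_1990:.1f} C")

# Later observations are then reported as anomalies relative to that normal.
observed_july = 23.4   # hypothetical
print(f"Anomaly: {observed_july - normal_1961_1990:+.1f} C")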
The WMO originated from the International Meteorological Organization, which set up a technical commission for climatology in 1929. At its 1934 Wiesbaden meeting, the technical commission designated the thirty-year period from 1901 to 1930 as the reference time frame for climatological standard normals. In 1982, the WMO agreed to update climate normals, and these were subsequently completed on the basis of climate data from 1 January 1961 to 31 December 1990. The 1961–1990 climate normals serve as the baseline reference period. The next set of climate normals published by the WMO covers 1991 to 2020. Aside from the most common atmospheric variables (air temperature, pressure, precipitation and wind), other variables such as humidity, visibility, cloud amount, solar radiation, soil temperature, pan evaporation rate, days with thunder and days with hail are also collected to measure change in climate conditions.
The difference between climate and weather is usefully summarized by the popular phrase "Climate is what you expect, weather is what you get." Over historical time spans, there are a number of nearly constant variables that determine climate, including latitude, altitude, proportion of land to water, and proximity to oceans and mountains. All of these variables change only over periods of millions of years due to processes such as plate tectonics. Other climate determinants are more dynamic: the thermohaline circulation of the ocean leads to a 5 °C (9 °F) warming of the northern Atlantic Ocean compared to other ocean basins. Other ocean currents redistribute heat between land and water on a more regional scale. The density and type of vegetation coverage affects solar heat absorption, water retention, and rainfall on a regional level. Alterations in the quantity of atmospheric greenhouse gases (particularly carbon dioxide and methane) determines the amount of solar energy retained by the planet, leading to global warming or global cooling. The variables which determine climate are numerous and the interactions complex, but there is general agreement that the broad outlines are understood, at least insofar as the determinants of historical climate change are concerned.
Climate classification
World climate zones are largely determined by latitude. Going from the Equator toward the poles, the major zones are Tropical, Dry, Moderate (Temperate), Continental and Polar, with subzones within each of these zones.
Climate classifications are systems that categorize the world's climates. A climate classification may correlate closely with a biome classification, as climate is a major influence on life in a region. One of the most used is the Köppen climate classification scheme first developed in 1899.
There are several ways to classify climates into similar regimes. Originally, climes were defined in Ancient Greece to describe the weather depending upon a location's latitude. Modern climate classification methods can be broadly divided into genetic methods, which focus on the causes of climate, and empiric methods, which focus on the effects of climate. Examples of genetic classification include methods based on the relative frequency of different air mass types or locations within synoptic weather disturbances. Examples of empiric classifications include climate zones defined by plant hardiness, evapotranspiration, or more generally the Köppen climate classification which was originally designed to identify the climates associated with certain biomes. A common shortcoming of these classification schemes is that they produce distinct boundaries between the zones they define, rather than the gradual transition of climate properties more common in nature.
Record
Paleoclimatology
Paleoclimatology is the study of past climate over a great period of the Earth's history. It uses evidence with different time scales (from decades to millennia) from ice sheets, tree rings, sediments, pollen, coral, and rocks to determine the past state of the climate. It demonstrates periods of stability and periods of change and can indicate whether changes follow patterns such as regular cycles.
Modern
Details of the modern climate record are known through the taking of measurements from such weather instruments as thermometers, barometers, and anemometers during the past few centuries. The instruments used to study weather over the modern time scale, their observation frequency, their known error, their immediate environment, and their exposure have changed over the years, which must be considered when studying the climate of centuries past. Long-term modern climate records skew towards population centres and affluent countries. Since the 1960s, the launch of satellites has allowed records to be gathered on a global scale, including areas with little to no human presence, such as the Arctic region and the oceans.
Climate variability
Climate variability is the term used to describe variations in the mean state and other characteristics of climate (such as the likelihood of extreme weather) "on all spatial and temporal scales beyond that of individual weather events." Some of the variability does not appear to be caused systematically and occurs at random times. Such variability is called random variability or noise. On the other hand, periodic variability occurs relatively regularly and in distinct modes of variability or climate patterns.
There are close correlations between Earth's climate oscillations and astronomical factors (barycenter changes, solar variation, cosmic ray flux, cloud albedo feedback, Milankovic cycles), and modes of heat distribution between the ocean-atmosphere climate system. In some cases, current, historical and paleoclimatological natural oscillations may be masked by significant volcanic eruptions, impact events, irregularities in climate proxy data, positive feedback processes or anthropogenic emissions of substances such as greenhouse gases.
Over the years, the definitions of climate variability and the related term climate change have shifted. While the term climate change now implies change that is both long-term and of human causation, in the 1960s the word climate change was used for what we now describe as climate variability, that is, climatic inconsistencies and anomalies.
Climate change
Observed temperature from NASA vs the 1850–1900 average used by the IPCC as a pre-industrial baseline. The primary driver for increased global temperatures in the industrial era is human activity, with natural forces adding variability.
Climate change is the variation in global or regional climates over time. It reflects changes in the variability or average state of the atmosphere over time scales ranging from decades to millions of years. These changes can be caused by processes internal to the Earth, by external forces (e.g. variations in sunlight intensity) or, as has recently been established, by human activities. Scientists have identified Earth's Energy Imbalance (EEI) as a fundamental metric of the status of global change.
In recent usage, especially in the context of environmental policy, the term "climate change" often refers only to changes in modern climate, including the rise in average surface temperature known as global warming. In some cases, the term is also used with a presumption of human causation, as in the United Nations Framework Convention on Climate Change (UNFCCC). The UNFCCC uses "climate variability" for non-human caused variations.
Earth has undergone periodic climate shifts in the past, including four major ice ages. These consist of glacial periods where conditions are colder than normal, separated by interglacial periods. The accumulation of snow and ice during a glacial period increases the surface albedo, reflecting more of the Sun's energy into space and maintaining a lower atmospheric temperature. Increases in greenhouse gases, such as by volcanic activity, can increase the global temperature and produce an interglacial period. Suggested causes of ice age periods include the positions of the continents, variations in the Earth's orbit, changes in the solar output, and volcanism. However, these naturally caused changes in climate occur on a much slower time scale than the present rate of change which is caused by the emission of greenhouse gases by human activities.
According to the EU's Copernicus Climate Change Service, average global air temperature passed 1.5 °C of warming over the period from February 2023 to January 2024.
Additional Information:
Overview
All life on Earth depends on the climate system, which consists of five major components:
* the atmosphere,
* the hydrosphere (oceans, lakes, and rivers),
* the cryosphere (ice and snow),
* the lithosphere (land surface), and
* the biosphere (living organisms).
The complex interactions and influences between these components, such as the exchange of energy, water, and carbon dioxide, determine our climate patterns and variability.
By observing and monitoring components of the climate system such as temperature, precipitation, air pressure, ice cover, and carbon cycles over a long period, we can better understand the climate and what drives changes to it and build climate models to predict our future climate.
WMO coordinates the study of the climate, its variations, extremes and trends, and collaborates with partners worldwide to study the socio-economic impacts of climate, supporting evidence-based decision-making to manage risks and adapt to a changing climate.
Impact
The climate system changes in time under the influence of its own internal dynamics and because of external forcings such as volcanic eruptions, solar variations, orbital forcing, and anthropogenic forcings including changes in the composition of the atmosphere and land-use.
According to the IPCC, the scale of recent changes across the climate system as a whole – and the present state of many aspects of the climate system – are unprecedented over many centuries to many thousands of years.
It is unequivocal that human influence has warmed the atmosphere, ocean and land. Widespread and rapid changes in the atmosphere, ocean, cryosphere and biosphere have occurred.
Climate variability and climate change can impact virtually every aspect of society, including food production, health, housing, energy, water resources, safety, tourism, finance, and transportation.
Monitoring climate conditions and predicting what the next season will bring or how our climate will change in the coming years and decades is critical for sustainable development.
WMO's response
WMO supports its Members in understanding the Earth's climate on global to local scales by developing technical standards for observing instruments, by ensuring that the collected data are quality-controlled and comparable, by monitoring the current climate, and by ensuring that skillful predictions of the climate over the coming weeks, months, seasons and years, as well as climate change projections over the coming decades and longer, are available and freely accessible to the Members.
All these cascading sources of information are essential for climate-smart decision-making at all levels to deal with climate risks. Climate information is also essential for monitoring the success of mitigation efforts such as reduction of greenhouse gas emissions that contribute to climate change, as well as for promoting efforts to increase energy efficiency, to transition to a carbon-neutral economy and effectively pursue Sustainable Development Goals (SDGs).
WMO helps different sectors make climate-smart decisions by collaborating with its Members to routinely issue global climate predictions on seasonal to decadal timescales, and by producing bulletins such as the El Niño/La Niña Update and the Global Seasonal Climate Update.
WMO publishes the Global and Regional State of the Climate reports and the State of Climate Services reports, plays a leadership role in the Global Framework for Climate Services and the World Climate Research Programme, and provides Infrastructure Department and Member Services activities to nations around the world.
More Information
Climate is the long-term pattern of weather in a particular area. Weather can change from hour-to-hour, day-to-day, month-to-month or even year-to-year. A region’s weather patterns, usually tracked for at least 30 years, are considered its climate.
Climate System
Different parts of the world have different climates. Some parts of the world are hot and rainy nearly every day. They have a tropical wet climate. Others are cold and snow-covered most of the year. They have a polar climate. Between the icy poles and the steamy tropics are many other climates that contribute to Earth’s biodiversity and geologic heritage.
Climate is determined by a region’s climate system. A climate system has five major components: the atmosphere, the hydrosphere, the cryosphere, the land surface, and the biosphere.
The atmosphere is the most variable part of the climate system. The composition and movement of gases surrounding the Earth can change radically, influenced by natural and human-made factors.
Changes to the hydrosphere, which include variations in temperature and salinity, occur at much slower rates than changes to the atmosphere.
The cryosphere is another generally consistent part of the climate system. Ice sheets and glaciers reflect sunlight, and the thermal conductivity of ice and permafrost profoundly influences temperature. The cryosphere also helps regulate thermohaline circulation. This “ocean conveyor belt” has an enormous influence on marine ecosystems and biodiversity.
Topography
Topography and vegetation influence climate by helping determine how the Sun’s energy is used on Earth. The abundance of plants and the type of land cover (such as soil, sand, or asphalt) impacts evaporation and ambient temperature.
The biosphere, the sum total of living things on Earth, profoundly influences climate. Through photosynthesis, plants help regulate the flow of greenhouse gases in the atmosphere. Forests and oceans serve as “carbon sinks” that have a cooling impact on climate. Living organisms alter the landscape, through both natural growth and created structures such as burrows, dams, and mounds. These altered landscapes can influence weather patterns such as wind, erosion, and even temperature.
Climate Features
The most familiar features of a region’s climate are probably average temperature and precipitation. Changes in day-to-day, day-to-night, and seasonal variations also help determine specific climates. For example, San Francisco, California, and Beijing, China, have similar yearly temperatures and precipitation. However, the daily and seasonal changes make San Francisco and Beijing very different. San Francisco’s winters are not much cooler than its summers, while Beijing is hot in summer and cold in winter. San Francisco’s summers are dry and its winters are wet. Wet and dry seasons are reversed in Beijing—it has rainy summers and dry winters.
Climate features also include windiness, humidity, cloud cover, atmospheric pressure, and fogginess. Latitude is a huge factor in determining climate. Landscape can also help define regional climate. A region’s elevation, proximity to the ocean or freshwater, and land-use patterns can all impact climate.
All climates are the product of many factors, including latitude, elevation, topography, distance from the ocean, and location on a continent. The rainy, tropical climate of West Africa, for example, is influenced by the region’s location near the Equator (latitude) and its position on the western side of the continent. The area receives direct sunlight year-round, and sits at an area called the intertropical convergence zone (ITCZ, pronounced “itch”), where moist trade winds meet. As a result, the region’s climate is warm and rainy.
Microclimates
Of course, no climate is uniform. Small variations, called microclimates, exist in every climate region. Microclimates are largely influenced by topographic features such as lakes, vegetation, and cities. In large urban areas, for example, streets and buildings absorb heat from the Sun, raising the average temperature of the city higher than average temperatures of more open areas nearby. This is known as the “urban heat island effect.”
Large bodies of water, such as the Great Lakes in the United States and Canada, can also have microclimates. Cities on the southern side of Lake Ontario, for example, are cloudier and receive much more snow than cities on the northern shore. This “lake effect” is a result of cold winds blowing across warmer lake water.
Climate Classification
In 1948, American climatologist Charles Thornthwaite developed a climate classification system that scientists still use today. Thornthwaite’s system relies on a region’s water budget and potential evapotranspiration. Potential evapotranspiration describes the amount of water evaporated from a vegetated piece of land. Indices such as humidity and precipitation help determine a region’s moisture index. The lower its moisture index value, the more arid a region’s climate.
The major classifications in Thornthwaite’s climate classification are microthermal, mesothermal, and megathermal.
Microthermal climates are characterized by cold winters and low potential evapotranspiration. Most geographers apply the term exclusively to the northern latitudes of North America, Europe, and Asia. A microthermal climate may include the temperate climate of Boston, Massachusetts; the coniferous forests of southern Scandinavia; and the boreal ecosystem of northern Siberia.
Mesothermal regions have moderate climates. They are not cold enough to sustain a layer of winter snow, but they also do not remain warm enough to support flowering plants (and, thus, evapotranspiration) all year. Mesothermal climates include the Mediterranean Basin, most of coastal Australia, and the Pampas region of South America.
Megathermal climates are hot and humid. These regions have a high moisture index and support rich vegetation all year. Megathermal climates include the Amazon Basin; many islands in Southeast Asia, such as New Guinea and the Philippines; and the Congo Basin in Africa.
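Thornthwaite's thermal grouping can be made concrete with a short sketch. The Python snippet below classifies a region as microthermal, mesothermal or megathermal from its annual potential evapotranspiration, mirroring the descriptions above; the numeric cut-offs are hypothetical placeholders chosen for the example, not Thornthwaite's published thresholds, and the moisture index side of his scheme is deliberately left out.

# Illustrative sketch only: classify a region into Thornthwaite's three broad
# thermal groups from annual potential evapotranspiration (PET, mm/year).
# The cut-off values below are placeholders for the example, not
# Thornthwaite's published thresholds.

def thornthwaite_thermal_group(annual_pet_mm: float,
                               microthermal_max: float = 500.0,
                               mesothermal_max: float = 1100.0) -> str:
    """Return a coarse thermal classification from annual PET.

    Lower PET corresponds to colder climates (microthermal); higher PET to
    hot, humid climates (megathermal), as described in the prose above.
    """
    if annual_pet_mm < microthermal_max:
        return "microthermal"   # cold winters, low evapotranspiration
    if annual_pet_mm < mesothermal_max:
        return "mesothermal"    # moderate climates
    return "megathermal"        # hot, humid, vegetation all year

if __name__ == "__main__":
    for pet in (350, 800, 1500):
        print(pet, "mm/year ->", thornthwaite_thermal_group(pet))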
Köppen Classification System
Although many climatologists think the Thornthwaite system is an efficient, rigorous way of classifying climate, it is complex and mapping it is difficult. The system is rarely used outside scientific publishing.
The most popular system of classifying climates was proposed in 1900 by Russian-German scientist Wladimir Köppen. Köppen observed that the type of vegetation in a region depended largely on climate. Studying vegetation, temperature, and precipitation data, he and other scientists developed a system for naming climate regions.
According to the Köppen climate classification system, there are five climate groups: tropical, dry, mild, continental, and polar. These climate groups are further divided into climate types. The following list shows the climate groups and their types:
* Tropical
** Wet (rainforest)
** Monsoon
** Wet and dry (savanna)
* Dry
** Arid
** Semiarid
* Mild
** Mediterranean
** Humid subtropical
** Marine
* Continental
** Warm summer
** Cool summer
** Subarctic (boreal)
* Polar
** Tundra
** Ice cap
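The grouping just listed is a simple two-level hierarchy, which can be captured in a few lines of Python. This is only a convenience sketch of the list above, not an implementation of the full lettered Köppen scheme; the dictionary and function names are illustrative.

# A minimal sketch of the Köppen groups and types listed above, stored as a
# plain mapping so a climate type can be looked up to find its parent group.

KOPPEN_GROUPS = {
    "Tropical": ["Wet (rainforest)", "Monsoon", "Wet and dry (savanna)"],
    "Dry": ["Arid", "Semiarid"],
    "Mild": ["Mediterranean", "Humid subtropical", "Marine"],
    "Continental": ["Warm summer", "Cool summer", "Subarctic (boreal)"],
    "Polar": ["Tundra", "Ice cap"],
}

def group_of(climate_type: str) -> str:
    """Return the Köppen group that a given climate type belongs to."""
    for group, types in KOPPEN_GROUPS.items():
        if climate_type in types:
            return group
    raise KeyError(f"unknown climate type: {climate_type!r}")

print(group_of("Monsoon"))   # -> Tropical
print(group_of("Tundra"))    # -> Polar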
Tropical Climates
There are three climate types in the tropical group: tropical wet; tropical monsoon; and tropical wet and dry.
2317) Inflorescence
Gist:
An inflorescence, in a flowering plant, is a group or cluster of flowers arranged on a stem that is composed of a main branch or a system of branches. An inflorescence is categorized on the basis of the arrangement of flowers on a main axis (peduncle) and by the timing of its flowering (determinate and indeterminate).
What are the two types of inflorescences?
Inflorescence is divided into two main types:
* Racemose: In racemose types of inflorescence, the main axis grows continuously and flowers are present laterally on the floral axis. Flowers are present in an acropetal manner.
* Cymose: In the cymose type of inflorescence, the main axis does not grow continuously.
Summary
Inflorescence, in a flowering plant, is a cluster of flowers on a branch or a system of branches. An inflorescence is categorized on the basis of the arrangement of flowers on a main axis (peduncle) and by the timing of its flowering (determinate and indeterminate).
Determinate inflorescence.
In determinate (cymose) inflorescences, the youngest flowers are at the bottom of an elongated axis or on the outside of a truncated axis. At the time of flowering, the apical meristem (the terminal point of cell division) produces a flower bud, thus arresting the growth of the peduncle.
A cyme is a flat-topped inflorescence in which the central flowers open first, followed by the peripheral flowers, as in the onion (genus Allium).
A dichasium is one unit of a cyme and is characterized by a stunted central flower and two lateral flowers on elongated pedicels, as in the wood stitchwort (species Stellaria nemorum).
Indeterminate inflorescence.
In indeterminate inflorescences, the youngest flowers are at the top of an elongated axis or on the centre of a truncated axis. An indeterminate inflorescence may be a raceme, panicle, spike, catkin, corymb, umbel, spadix, or head.
In a raceme a flower develops at the upper angle (axil) between the stem and branch of each leaf along a long, unbranched axis. Each flower is borne on a short stalk, called a pedicel. An example of a raceme is found in the snapdragon (Antirrhinum majus).
A panicle is a branched raceme in which each branch has more than one flower, as in the astilbe (Astilbe).
A spike is a raceme, but the flowers develop directly from the stem and are not borne on pedicels, as in barley (Hordeum).
A catkin (or ament) is a spike in which the flowers are either male (staminate) or female (carpellate). It is usually pendulous, and the perianth may be reduced or absent, as in oaks (Quercus).
A corymb is a raceme in which the pedicels of the lower flowers are longer than those of the upper flowers so that the inflorescence has a flat-topped appearance overall, as in hawthorn (Crataegus).
In an umbel, each of the pedicels initiates from about the same point at the tip of the peduncle, giving the appearance of an umbrella-like shape, as in the wax flowers (Hoya).
A spadix is a spike borne on a fleshy stem and is common in the family Araceae (e.g., Philodendron). The subtending bract is called a spathe.
A head (capitulum) is a short dense spike in which the flowers are borne directly on a broad, flat peduncle, giving the inflorescence the appearance of a single flower, as in the dandelion (Taraxacum).
Details
An inflorescence, in a flowering plant, is a group or cluster of flowers arranged on a stem that is composed of a main branch or a system of branches. An inflorescence is categorized on the basis of the arrangement of flowers on a main axis (peduncle) and by the timing of its flowering (determinate and indeterminate).
Morphologically, an inflorescence is the modified part of the shoot of seed plants where flowers are formed on the axis of a plant. The modifications can involve the length and the nature of the internodes and the phyllotaxis, as well as variations in the proportions, compressions, swellings, adnations, connations and reduction of main and secondary axes.
One can also define an inflorescence as the reproductive portion of a plant that bears a cluster of flowers in a specific pattern.
General characteristics
Inflorescences are described by many different characteristics including how the flowers are arranged on the peduncle, the blooming order of the flowers, and how different clusters of flowers are grouped within it. These terms are general representations as plants in nature can have a combination of types. Because flowers facilitate plant reproduction, inflorescence characteristics are largely a result of natural selection.
The stem holding the whole inflorescence is called a peduncle. The main axis (also referred to as major stem) above the peduncle bearing the flowers or secondary branches is called the rachis. The stalk of each flower in the inflorescence is called a pedicel. A flower that is not part of an inflorescence is called a solitary flower and its stalk is also referred to as a peduncle. Any flower in an inflorescence may be referred to as a floret, especially when the individual flowers are particularly small and borne in a tight cluster, such as in a pseudanthium. The fruiting stage of an inflorescence is known as an infructescence. Inflorescences may be simple (single) or complex (panicle). The rachis may be one of several types, including single, composite, umbel, spike or raceme.
In some species the flowers develop directly from the main stem or woody trunk, rather than from the plant's main shoot. This is called cauliflory and is found across a number of plant families. An extreme version of this is flagelliflory where long, whip-like branches grow from the main trunk to the ground and even below it. Inflorescences form directly on these branches.
Terminal flower
Plant organs can grow according to two different schemes, namely monopodial or racemose and sympodial or cymose. In inflorescences these two different growth patterns are called indeterminate and determinate respectively, and indicate whether a terminal flower is formed and where flowering starts within the inflorescence.
Indeterminate inflorescence: Monopodial (racemose) growth. The terminal bud keeps growing and forming lateral flowers. A terminal flower is never formed.
Determinate inflorescence: Sympodial (cymose) growth. The terminal bud forms a terminal flower and then dies out. Other flowers then grow from lateral buds.
Indeterminate and determinate inflorescences are sometimes referred to as open and closed inflorescences respectively. The indeterminate patterning of flowers is derived from determinate flowers. It is suggested that indeterminate flowers have a common mechanism that prevents terminal flower growth. Based on phylogenetic analyses, this mechanism arose independently multiple times in different species.
In an indeterminate inflorescence there is no true terminal flower and the stem usually has a rudimentary end. In many cases the last true flower formed by the terminal bud (subterminal flower) straightens up, appearing to be a terminal flower. Often a vestige of the terminal bud may be noticed higher on the stem.
In determinate inflorescences the terminal flower is usually the first to mature (precursive development), while the others tend to mature starting from the base of the stem. This pattern is called acropetal maturation. When flowers start to mature from the top of the stem, maturation is basipetal, whereas when the central flowers mature first, maturation is divergent.
Phyllotaxis
As with leaves, flowers can be arranged on the stem according to many different patterns.
2318) Chromophobia
Gist
Chromophobia (also known as chromatophobia or chrematophobia) is a persistent, irrational fear of, or aversion to, colors and is usually a conditioned response. While actual clinical phobias to color are rare, colors can elicit hormonal responses and psychological reactions.
Chromophobia (or chromatophobia) is an intense fear of colors. Most people with this disorder are afraid of one or two colors in particular. Others have a phobia of all colors, or they may only be sensitive to bright colors.
Summary
What are the other Names for this Condition? (Also known as/Synonyms):
* Chrematophobia
* Chromatophobia
* Fear of Colors
What is Chromophobia? (Definition/Background Information)
* Chromophobia or Chromatophobia is an excessive and irrational fear of colors or specific colors, which can cause significant anxiety and avoidance behavior. Individuals with this phobia may fear being exposed to certain colors, or feel uncomfortable or overwhelmed by certain color combinations
* Individuals of any age group or gender may be affected. Presently, the cause of the development of Chromophobia is not well-understood. However, similar to other phobias, a combination of genetic, environmental, and psychological factors may be contributory
* The signs and symptoms of Chromophobia may include intense anxiety or panic attacks, sweating, rapid heartbeat, nausea, dizziness, fear of losing control or going insane, avoidance of certain colors or color combinations, difficulty concentrating, and an overwhelming need for reassurance
* Chromophobia can be treated with various psychotherapeutic and pharmacological interventions. The most effective treatments depend on the severity of the symptoms, the individual's preferences, and their response to earlier therapies
* The prognosis of Chromophobia may vary depending on the severity of the phobia, the individual's response to treatment, and their level of commitment to therapy. Some individuals may require long-term therapy or maintenance treatment to prevent relapse
Who gets Chromophobia? (Age and gender Distribution)
* Chromophobia can affect any individual, regardless of age and gender
* Worldwide, no particular race or ethnicity preference is observed
What are the Risk Factors for Chromophobia? (Predisposing Factors)
Several factors can increase the risk of developing Chromophobia, including:
* Traumatic experiences related to colors, such as being exposed to a traumatic event associated with a certain color
* Family history of anxiety disorders
* High levels of stress or anxiety
* Being overly sensitive to stimuli or sensory overload
* Certain personality traits, characterized by a tendency towards negative emotions such as anxiety, depression, and worry, including neuroticism or introversion
It is important to note that having a risk factor does not mean that one will get the condition. A risk factor increases one’s chances of getting a condition compared to an individual without the risk factors. Some risk factors are more important than others.
Also, not having a risk factor does not mean that an individual will not get the condition. It is always important to discuss the effect of risk factors with your healthcare provider.
What are the Causes of Chromophobia? (Etiology)
The exact cause of Chromophobia is presently unknown.
* However, similar to other phobias, it may be caused by a combination of genetic, environmental, and psychological factors
* Some studies suggest that those with a family history of anxiety disorders or traumatic experiences related to colors may be more prone to developing this condition
What are the Signs and Symptoms of Chromophobia?
Individuals with Chromophobia may experience various physical and psychological symptoms when exposed to colors or color-related situations. These may include:
* Intense anxiety or panic attacks
* Sweating or trembling
* Rapid heartbeat or shortness of breath
* Nausea or dizziness
* Fear of losing control or going insane
* Avoidance of certain colors or color combinations
* Difficulty concentrating or thinking clearly
* Overwhelming need for safety or reassurance
How is Chromophobia Diagnosed?
* Chromophobia is usually diagnosed based on a thorough psychological evaluation by a mental health professional
* The healthcare professional may ask questions about the individual's medical history, symptoms, and the impact of the fear on their daily life
* In some cases, standardized assessment tools, such as the “Color Phobia questionnaire”, may be used to help diagnose the condition
* Many clinical conditions may have similar signs and symptoms. Your healthcare provider may perform additional tests to rule out other clinical conditions to arrive at a definitive diagnosis.
Details
Chromophobia (also known as chromatophobia or chrematophobia) is a persistent, irrational fear of, or aversion to, colors and is usually a conditioned response. While actual clinical phobias to color are rare, colors can elicit hormonal responses and psychological reactions.
Chromophobia may also refer to an aversion to the use of color in products or design. Within cellular biology, "chromophobic" cells are a classification of cells that do not attract hematoxylin; the term is related to chromatolysis.
Terminology
Names exist that mean fear of specific colors such as erythrophobia for the fear of red, xanthophobia for the fear of yellow and leukophobia for the fear of white. A fear of the color red may be associated with a fear of blood.
Overview
In his book Chromophobia published in 2000, David Batchelor says that in Western culture, color has often been treated as corrupting, foreign or superficial. Michael Taussig states that the cultural aversion to color can be traced back a thousand years, with Batchelor stating that it can be traced back to Aristotle's privileging of line over color.
In a study, hatchling loggerhead sea turtles were found to have an aversion to lights in the yellow wave spectrum which is thought to be a characteristic that helps orient themselves toward the ocean. The Mediterranean sand smelt, Atherina hepsetus, has shown an aversion to red objects placed next to a tank while it will investigate objects of other colors. In other experiments, geese have been conditioned to have adverse reactions to foods of a particular color, although the reaction was not observed in reaction to colored water.
The title character in Alfred Hitchcock's Marnie has an aversion to the color red caused by a trauma during her childhood which Hitchcock presents through expressionistic techniques, such as a wash of red coloring a close up of Marnie.
The term colorphobia can also be used in its literal etymological sense, to refer to an apprehension towards colors as they are visually perceived. However, the term has also carried a racial connotation, and in that sense it has been used by public figures such as Frederick Douglass.
Leukophobia often takes the form of a fixation on pale skin. Those with the phobia may make implausible assumptions such as paleness necessarily representing ill health or a ghost. In other cases, leukophobia is directed more towards the symbolic meaning of whiteness, for instance in individuals who associate the color white with chastity and are opposed to or fear chastity. In Paul Beatty's novel Slumberland, leukophobia refers to racism.
Additional Information
Chromophobia is an intense fear of colors. Most people with this disorder have an extreme aversion to one or two colors in particular — or they may only fear bright colors. People with chromophobia have severe anxiety or panic attacks when they see a color they’re afraid of. Therapy and medications can help manage symptoms.
Overview:
What is chromophobia?
Chromophobia (or chromatophobia) is an intense fear of colors. Most people with this disorder are afraid of one or two colors in particular. Others have a phobia of all colors, or they may only be sensitive to bright colors.
People with chromophobia experience extreme discomfort or anxiety when they see a color that triggers their phobia. They may have trouble breathing, sweat a lot or even have a panic attack. Some people may avoid leaving their house and interacting with others. This can damage relationships and impact a person’s ability to work. Therapy and medications can help people manage this disorder.
What is a phobia?
Phobias cause people to be afraid of a situation or an object that isn’t harmful. They are a type of anxiety disorder. People with phobias have unrealistic fears and abnormal reactions to things other people don’t find scary.
Chromophobia is a specific phobia disorder. People with specific phobia disorders have extreme reactions to a certain object or situation. They go to great lengths to avoid the things that cause their discomfort or fear.
How common is chromophobia?
It’s hard to know exactly how many people have a specific phobia like chromophobia (fear of colors). Many people may keep this fear to themselves or may not recognize they have it. We do know, though, that about 1 in 10 American adults and 1 in 5 teenagers will deal with a specific phobia disorder at some point in their lives.
What colors are people afraid of?
Although it’s possible to be afraid of all colors, people with chromophobia are more likely to be fearful or anxious about one or two colors in particular. Specific color phobias include:
* Chrysophobia, fear of the color orange or gold.
* Cyanophobia, fear of the color blue.
* Kastanophobia, fear of the color brown.
* Leukophobia, fear of the color white.
* Prasinophobia, fear of the color green.
* Rhodophobia, fear of the color pink.
* Melanophobia, fear of the color black.
* Xanthophobia, fear of the color yellow.
Symptoms and Causes:
Who is at risk of chromophobia?
You have a higher risk of developing chromophobia if you have:
* Autism spectrum disorder (ASD) or sensory processing disorder.
* Anxiety disorders such as generalized anxiety disorder (GAD).
* History of panic attacks or panic disorder.
* Mental illness, including obsessive-compulsive disorder (OCD).
* Mood disorders such as depression.
* Other phobias.
* Substance abuse disorder.
What causes chromophobia?
Healthcare providers aren’t sure what exactly causes chromophobia. Like other specific phobia disorders, it probably results from a combination of genetics and environmental factors. People who have mental illness or anxiety problems are more likely to develop a phobia. Mental illness, mood disorders and phobias tend to run in families, too. So, you have a higher risk of these conditions if you have a relative who has them.
Chromophobia and other types of phobias can happen along with post-traumatic stress disorder (PTSD). If someone experiences a traumatic event that they associate with a specific color, an intense fear of that color can result. They remember the terrible feelings the event caused and connect those feelings to the color itself. As a result, every time the person sees that color, the bad feelings return.
People with autism, Asperger’s or sensory processing issues sometimes have an aversion to one color in particular. Although they may not actually have chromophobia, their symptoms may be similar. They may prefer certain colors and avoid colors that disgust them or cause discomfort.
What are the symptoms of chromophobia?
Children and adults with chromophobia have symptoms ranging from intense discomfort to a full panic attack. When they see a color they’re afraid of, they may have:
* Chills.
* Dizziness and lightheadedness.
* Excessive sweating (hyperhidrosis).
* Heart palpitations.
* Nausea.
* Shortness of breath (dyspnea).
* Trembling or shaking.
* Upset stomach or indigestion (dyspepsia).
People with this disorder may stay indoors because they worry about coming into contact with the color that triggers the phobia. Their fear of colors can lead to another type of anxiety disorder called agoraphobia. This disorder causes people to avoid certain situations from which they can’t escape. They often stay inside their homes and away from crowded places.
Diagnosis and Tests:
How do healthcare providers diagnose chromophobia?
Healthcare providers diagnose chromophobia and other types of phobias during a thorough mental health evaluation. Your healthcare provider will discuss your symptoms, when they started and what triggers them.
Your healthcare provider may recommend talking to a mental health professional over the course of several weeks. A healthcare provider who specializes in anxiety disorders will ask about other phobias, mental illness or mood disorders. They’ll also ask about your family history of phobias, anxiety disorders and other mental illness.
Generally, people receive a diagnosis of chromophobia or another specific phobic disorder if they:
* Experience extreme anxiety or panic attacks that last for six months or longer.
* Go to great lengths to avoid the situation or object that’s causing distress.
* Have symptoms that significantly impact their quality of life or damage relationships.
Management and Treatment:
How do I manage or treat chromophobia?
Some therapies, techniques and treatments can help people with color phobia manage their symptoms. These include:
* Cognitive behavioral therapy (CBT), to help you think about your fears differently, gain a new perspective and control your response to them.
* Exposure therapy, which gradually increases your contact with certain colors. Your healthcare provider may show you certain colors for a few seconds at a time to lower your sensitivity.
* Hypnotherapy, which uses guided relaxation while you’re in a hypnotic (calm and responsive) state. During this time, your mind is more open to thinking about fears in a different way.
* Psychotherapy, which allows you to talk about your fears and find strategies that can help you overcome them.
* Medications, which can treat panic attacks and help manage other mental health disorders. Your healthcare provider may recommend anti-anxiety medications or medications to treat depression.
* Relaxation techniques, breathing exercises, and meditation, which can help you control anxiety. You may also benefit from yoga and mindfulness exercises.
What are the complications of chromophobia?
Severe chromophobia can have a devastating impact on your overall quality of life. People with this disorder tend to avoid everyday activities. In doing so, they may harm relationships with friends and family — or lose their jobs. These losses can lead to social isolation, serious depression and worsening mental health.
Outlook / Prognosis:
What is the prognosis (outlook) for people who have chromophobia?
Therapy and medication can help people with chromophobia manage their symptoms and improve their quality of life. Ongoing treatment may be necessary.
2319) Flight Attendant
Gist
A Flight Attendant is a person whose job is to help passengers who are traveling in an airplane.
Flight attendants work for private and commercial airline companies to keep passengers safe and comfortable. They help passengers get seated, demonstrate how to use the plane's safety equipment, including seat belts, and provide snacks, beverages and other services.
Flight attendants typically need a high school diploma or the equivalent and work experience in customer service. Applicants must meet minimum age requirements, typically 18 or 21; be eligible to work in the United States; have a valid passport; and pass a background check and drug test.
Details
A flight attendant, also known as a steward (MASC) or stewardess (FEM), or air host (MASC) or hostess (FEM), is a member of the aircrew aboard commercial flights, many business jets and some government aircraft. Collectively called cabin crew, flight attendants are primarily responsible for passenger safety and comfort.
History
The role of a flight attendant derives from that of similar positions on passenger ships or passenger trains, but has more direct involvement with passengers because of the confined quarters on aircraft. Additionally, the job of a flight attendant revolves around safety to a much greater extent than those of similar staff on other forms of transportation. Flight attendants on board a flight collectively form a cabin crew, as distinguished from the pilots and engineers on the flight deck.
The German Heinrich Kubis was the world's first flight attendant, in 1912 aboard a Zeppelin. Kubis first attended to the passengers on board the DELAG Zeppelin LZ 10 Schwaben. He also attended to the famous LZ 129 Hindenburg and was on board when it burst into flames. He survived by jumping out a window when it neared the ground.
Origins of the word "steward" in transportation are reflected in the term "chief steward" as used in maritime transport terminology. The terms purser and chief steward are often used interchangeably to describe personnel with similar duties among seafaring occupations. This lingual derivation results from the international British maritime tradition (i.e. chief mate) dating back to the 14th century and the civilian United States Merchant Marine, on which U.S. aviation is somewhat modelled. Under international law, conventions and agreements, all ships' personnel who sail internationally are similarly documented by their respective countries (see Merchant Mariner's Document), and the U.S. Merchant Marine assigns such duties to the chief steward in the overall rank and command structure, in which pursers are not positionally represented or rostered.
Imperial Airways of the United Kingdom had "cabin boys" or "stewards" in the 1920s. In the US, Stout Airways was the first to employ stewards in 1926, working on Ford Trimotor planes between Detroit and Grand Rapids, Michigan. Western Airlines (1928) and Pan American World Airways (Pan Am) (1929) were the first US carriers to employ stewards to serve food. Ten-passenger Fokker aircraft used in the Caribbean had stewards in the era of gambling trips to Havana, Cuba from Key West, Florida. Lead flight attendants would in many instances also perform the role of purser, steward, or chief steward in modern aviation terminology.
The first female flight attendant was a 25-year-old registered nurse named Ellen Church. Hired by United Airlines in 1930, she also first envisioned nurses on aircraft. Other airlines followed suit, hiring nurses to serve as flight attendants, then called "stewardesses" or "air hostesses", on most of their flights. In the United States, the job was one of only a few in the 1930s to permit women, which, coupled with the Great Depression, led to large numbers of applicants for the few positions available. Two thousand women applied for just 43 positions offered by Transcontinental and Western Airlines in December 1935.
Female flight attendants rapidly replaced male ones, and by 1936, they had all but taken over the role. They were selected not only for their knowledge but also for their physical characteristics. A 1936 article in The New York Times described the requirements:
The girls who qualify for hostesses must be petite; weight 100 to 118 pounds; height 5 feet to 5 feet 4 inches; age 20 to 26 years. Add to that the rigid physical examination each must undergo four times every year, and you are assured of the bloom that goes with perfect health.
Three decades later, a 1966 New York Times classified ad for stewardesses at Eastern Airlines listed these requirements:
A high school graduate, single (widows and divorcees with no children considered), 20 years of age (girls 19½ may apply for future consideration). 5'2" but no more than 5'9", weight 105 to 135 in proportion to height and have at least 20/40 vision without glasses.
Appearance was considered one of the most important factors to become a stewardess. At that time, airlines believed that the exploitation of female sexuality would increase their profits; thus the uniforms of female flight attendants were often formfitting, complete with white gloves and high heels.
In the United States, they were required to be unmarried and were fired if they decided to marry. The requirement to be a registered nurse on an American airline was relaxed as more women were hired, and disappeared almost entirely during World War II as many nurses joined military nurse corps.
Ruth Carol Taylor was the first African-American flight attendant in the United States. Hired in December 1957, on 11 February 1958, Taylor was the flight attendant on a Mohawk Airlines flight from Ithaca to New York, the first time such a position had been held by an African American. She was let go within six months as a result of Mohawk's then-common marriage ban. Patricia Banks Edmiston became the first black flight attendant for Capitol Airlines in 1960 following a legal complaint which resulted in the airline being required to hire her.
The U.S. Equal Employment Opportunity Commission's (EEOC) first complainants were female flight attendants complaining of age discrimination, weight requirements, and bans on marriage. (Originally female flight attendants were fired if they reached age 32 or 35 depending on the airline, were fired if they exceeded weight regulations, and were required to be single upon hiring and fired if they got married.) In 1968, the EEOC declared age restrictions on flight attendants' employment to be illegal gender discrimination under Title VII of the Civil Rights Act of 1964. Also in 1968, the EEOC ruled that gender was not a bona fide occupational requirement to be a flight attendant. The restriction of hiring only women was lifted at all airlines in 1971 due to the decisive court case of Diaz v. Pan Am. The Airline Deregulation Act was passed in 1978, and the no-marriage rule was eliminated throughout the US airline industry by the 1980s. The last such broad categorical discrimination, the weight restrictions, were relaxed in the 1990s through litigation and negotiations. Airlines still often have vision and height requirements and may require flight attendants to pass a medical evaluation.
Overview
The role of a flight attendant is to "provide routine services and respond to emergencies to ensure the safety and comfort of airline passengers".
Flight attendants typically need to hold a high school diploma or equivalent. In the United States, the median annual wage for flight attendants was $50,500 in May 2017, higher than the median of $37,690 for all workers.
The number of flight attendants required on flights is mandated by each country's regulations. In the U.S., for light planes with 19 or fewer seats, or, if weighing more than 7,500 lb (3,400 kg), 9 or fewer seats, no flight attendant is needed; on larger aircraft, one flight attendant per 50 passenger seats is required.
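The simplified U.S. rule stated above reduces to a small calculation, sketched below in Python: no flight attendant for the small aircraft described, otherwise one flight attendant per 50 passenger seats, rounded up. The function name and the print examples are illustrative; the underlying FAA regulation contains further detail that this sketch does not model.

import math

# Sketch of the simplified U.S. staffing rule stated above: no flight
# attendant for aircraft with 19 or fewer seats (or 9 or fewer seats if the
# aircraft weighs more than 7,500 lb); otherwise one flight attendant per
# 50 passenger seats, rounded up.

def min_flight_attendants(passenger_seats: int, weight_lb: float) -> int:
    small_aircraft_limit = 9 if weight_lb > 7500 else 19
    if passenger_seats <= small_aircraft_limit:
        return 0
    return math.ceil(passenger_seats / 50)

print(min_flight_attendants(19, 7000))      # 0 - light aircraft, 19 seats
print(min_flight_attendants(180, 150000))   # 4 - e.g. a typical narrow-body jet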
The majority of flight attendants for most airlines are female, though a substantial number of males have entered the industry since 1980.
Responsibilities
Prior to each flight, flight attendants and pilots go over safety and emergency checklists, the locations of emergency equipment and other features specific to that aircraft type. Boarding particulars are verified, such as special needs passengers, small children travelling alone, or VIPs. Weather conditions are discussed including anticipated turbulence. A safety check is conducted to ensure equipment such as life-vests, torches (flash lights) and firefighting equipment are on board and in proper condition. They monitor the cabin for any unusual smells or situations. They assist with the loading of carry-on baggage, checking for weight, size and dangerous goods. They make sure those sitting in emergency exit rows are willing and able to assist in an evacuation. They then give a safety demonstration or monitor passengers as they watch a safety video. They then must "secure the cabin" ensuring tray tables are stowed, seats are in their upright positions, armrests down and carry-ons stowed correctly and seat belts are fastened prior to take-off.
Once up in the air, flight attendants will usually serve drinks and/or food to passengers using an airline service trolley. The duty has led to the mildly derogatory slang term "trolley dolly". When not performing customer service duties, flight attendants must periodically conduct cabin checks and listen for any unusual noises or situations. Checks must also be done on the lavatory to ensure the smoke detector has not been disabled or destroyed and to restock supplies as needed. Regular checks must also be made on the flight deck to ensure the health and safety of the pilot(s). They must also respond to call lights dealing with special requests. During turbulence, flight attendants must ensure the cabin is secure. Prior to landing, all loose items, trays and rubbish must be collected and secured along with service and galley equipment. All hot liquids must be disposed of. A final cabin check must then be completed prior to landing. It is vital that flight attendants remain aware as the majority of emergencies occur during take-off and landing. Upon landing, flight attendants must remain stationed at exits and monitor the aircraft and cabin as passengers disembark the plane. They also assist any special needs passengers and small children off the aeroplane and escort children, while following the proper paperwork and ID process to escort them to the designated person picking them up.
Flight attendants are trained to deal with a wide variety of emergencies, and are trained in first aid. More frequent situations may include a bleeding nose, illness, small injuries, intoxicated passengers, aggressive and anxiety stricken passengers. Emergency training includes rejected take-offs, emergency landings, cardiac and in-flight medical situations, smoke in the cabin, fires, depressurisation, on-board births and deaths, dangerous goods and spills in the cabin, emergency evacuations, hijackings, and water landings.
Cabin chimes and overhead panel lights
On most commercial airliners, flight attendants receive various forms of notification on board the aircraft in the form of audible chimes and coloured lights above their stations. While the colours and chimes are not universal and may vary between airlines and aircraft types, these colours and chimes are generally the most commonly used:
* Pink (Boeing) or Red (Airbus): interphone calls from the flight deck to a flight attendant and/or interphone calls between two flight attendants, the latter case if a green light is not present or being used for the same purpose (steady with high-low chime), or all services emergency call (flashing with repeated high-low chime). On some airlines' Airbus aircraft (such as Delta Air Lines), this light is accompanied by a high-medium-low chime to call the purser. The Boeing 787 Dreamliner uses a separate red light to indicate a sterile flight deck, while using pink for interphone calls from the flight deck.
* Blue: call from passenger in seat (steady with single high chime).
* Amber: call from passenger in lavatory (steady with single high chime), or lavatory smoke detector set off (flashing with repeated high chime).
* Green: on some aircraft (some airlines' Airbus aircraft, and the Boeing 787), this colour is used to indicate interphone calls between two flight attendants, distinguishing them from the pink or red light used for interphone calls made from the flight deck to a flight attendant, and is also accompanied by a high-low chime like the pink or red light. On the Boeing 787, a flashing green light with a repeated high-low chime is used to indicate a call to all flight attendant stations.
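Because the signals above amount to a mapping from a colour, a light pattern and a chime to a meaning, they can be summarised as a small lookup table. The Python sketch below only restates the list for illustration; the key and value strings are chosen for this example and do not capture airline- or aircraft-type-specific variations.

# Lookup-table sketch of the common cabin signals described above.
# Keys are (colour, light pattern, chime); values are the usual meanings.

CABIN_SIGNALS = {
    ("pink/red", "steady", "high-low chime"): "interphone call (flight deck or crew)",
    ("pink/red", "flashing", "repeated high-low chime"): "all services emergency call",
    ("blue", "steady", "single high chime"): "passenger call from seat",
    ("amber", "steady", "single high chime"): "passenger call from lavatory",
    ("amber", "flashing", "repeated high chime"): "lavatory smoke detector set off",
    ("green", "steady", "high-low chime"): "crew-to-crew interphone call (some aircraft)",
}

def describe(colour: str, pattern: str, chime: str) -> str:
    """Return the usual meaning of a cabin signal, or a fallback string."""
    return CABIN_SIGNALS.get((colour, pattern, chime), "unknown signal")

print(describe("amber", "flashing", "repeated high chime"))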
Chief purser
The chief purser (CP), also titled as in-flight service manager (ISM), flight service manager (FSM), customer service manager (CSM) or cabin service director (CSD) is the senior flight attendant in the chain of command of flight attendants. While not necessarily the most-senior crew members on a flight (in years of service to their respective carrier), chief pursers can have varying levels of "in-flight" or "on board" bidding seniority or tenure in relation to their flying partners. To reach this position, a crew member requires some minimum years of service as flight attendant. Further training is mandatory, and chief pursers typically earn a higher salary than flight attendants because of the added responsibility and managerial role.
Purser
The purser is in charge of the cabin crew, in a specific section of a larger aircraft, or of the whole aircraft itself (if the purser is the highest ranking). On board a larger aircraft, pursers assist the chief purser in managing the cabin. Pursers have typically worked as flight attendants, or in a related job, with an airline for several years before applying and training further to become a purser, and they normally earn a higher salary than flight attendants because of the added responsibility and supervisory role.
Qualifications:
Training
The minimum entry requirement for a career as a flight attendant is usually completion of the final year of high school, e.g. the International Baccalaureate. Many prospective attendants have a post-secondary school diploma in an area such as tourism, and a number hold degrees, having worked in other occupations, often as teachers. Graduates holding degrees, including those with studies in one or more foreign languages, communication studies, business studies, public relations or nursing, can be favoured by employers.
Flight attendants are normally trained in the hub or headquarters city of an airline over a period that may run from four weeks to six months, depending on the country and airline. The main focus of training is safety, and attendants are evaluated for each type of aircraft in which they work. One of the most elaborate training facilities was Breech Academy, which Trans World Airlines (TWA) opened in 1969 in Overland Park, Kansas. Other airlines also sent their attendants to the school. However, during the fare wars, the school's viability declined and it closed around 1988.
Safety training includes, but is not limited to: emergency passenger evacuation management, use of evacuation slides / life rafts, in-flight firefighting, first aid, CPR, defibrillation, ditching/emergency landing procedures, decompression emergencies, crew resource management, and security.
In the United States, the Federal Aviation Administration requires flight attendants on aircraft with 20 or more seats and used by an air carrier for transportation to hold a Certificate of Demonstrated Proficiency. It shows that a level of required training has been met. It is not limited to the air carrier at which the attendant is employed (although some initial documents showed the airlines where the holders were working), and is the attendant's personal property. It does have two ratings, Group 1 and Group 2 (listed on the certificate as "Group I" and "Group II"). Either or both of these may be earned depending upon the general type of aircraft, (propeller or turbojet), on which the holder has trained.
There are also training schools, not affiliated with any particular airline, where students generally not only undergo generic, though otherwise practically identical, training to flight attendants employed by an airline, but also take curriculum modules to help them gain employment. These schools often use actual airline equipment for their lessons, though some are equipped with full simulator cabins capable of replicating a number of emergency situations. In some countries, such as France, a degree is required, together with the Certificat de formation à la sécurité (Safety training certificate).
Language
Multilingual flight attendants are often in demand to accommodate international travellers. The languages most in demand, other than English, are French, Russian, Hindi, Spanish, Mandarin, Cantonese, Bengali, Japanese, Arabic, German, Portuguese, Italian, and Turkish. In the United States, airlines with international routes pay an additional stipend for language skills on top of flight pay, and some airlines hire specifically for certain languages when launching international destinations. Carole Middleton recalled when interviewed in 2018 that "you had to be able to speak another language" when working in the industry in the 1970s.
Height
Most airlines have height requirements for safety reasons, making sure that all flight attendants can reach overhead safety equipment. Typically, the acceptable height for this is over 152 cm (60 in) but under 185 cm (73 in) tall. Regional carriers using small aircraft with low ceilings can have height restrictions. Some airlines, such as EVA Air, have height requirements for purely aesthetic purposes.
2320) Liver Transplant
Gist
A liver transplant is surgery to remove your diseased or injured liver and replace it with a healthy liver from another person, called a donor. If your liver stops working properly, called liver failure, a liver transplant can save your life.
Summary:
What is a liver transplant?
A liver transplant is surgery to replace a diseased liver with a healthy liver from another person. A whole liver may be transplanted or just part of one.
In most cases the healthy liver will come from an organ donor who has just died (a deceased donor).
Sometimes a healthy living person will donate part of their liver. A living donor may be a family member. Or it may be someone who isn't related to you but whose blood type is a good match.
People who donate part of their liver can have healthy lives with the liver that is left.
The liver is the only organ in the body that can replace lost or injured tissue (regenerate). The donor’s liver will soon grow back to normal size after surgery. The part that you receive as a new liver will also grow to normal size in a few weeks.
Why might I need a liver transplant?
You can’t live without a working liver. If your liver stops working correctly, you may need a transplant.
A liver transplant may be advised if you have end-stage liver disease (chronic liver failure). This is a serious, life-threatening liver disease. It can be caused by several liver conditions.
Cirrhosis is a common cause of end-stage liver disease. It's a chronic liver disease. It happens when healthy liver tissue is replaced with scar tissue. This stops the liver from working correctly.
Other diseases that may lead to end-stage liver disease or other reasons for liver transplant include:
* Acute hepatic necrosis. This is when tissue in the liver dies. Possible reasons include acute infections and reactions to medicine, illegal drugs, or toxins. For instance, an overdose of acetaminophen.
* Biliary atresia. A rare disease of the liver and bile ducts that occurs in newborns.
* Viral hepatitis. Hepatitis B or C are common causes.
* Alcoholic hepatitis. This results from long-term alcohol use.
* NAFLD (nonalcoholic fatty liver disease) or NASH (nonalcoholic steatohepatitis). With NAFLD, too much fat builds up in the liver and damages it. This isn't caused by alcohol use. NASH is a form of NAFLD that includes fat buildup, hepatitis, and liver cell damage.
* Bile duct cancer. Transplant may be an option for some people in very specific circumstances.
* Metabolic diseases. Disorders that change the chemical activity in cells affected by the liver.
* Cancer of the liver. This includes primary liver cancer, which is when tumors start in the liver. Having cirrhosis puts you at risk of liver cancer.
* Autoimmune hepatitis. A redness or swelling (inflammation) of the liver. It happens when your body’s disease-fighting system (immune system) attacks your liver.
Details
Liver transplantation or hepatic transplantation is the replacement of a diseased liver with the healthy liver from another person (allograft). Liver transplantation is a treatment option for end-stage liver disease and acute liver failure, although availability of donor organs is a major limitation. Liver transplantation is highly regulated, and only performed at designated transplant medical centers by highly trained transplant physicians. Favorable outcomes require careful screening for eligible recipients, as well as a well-calibrated live or deceased donor match.
Medical uses
Liver transplantation is a potential treatment for acute or chronic conditions which cause irreversible and severe ("end-stage") liver dysfunction. Since the procedure carries relatively high risks, is resource-intensive, and requires major life modifications after surgery, it is reserved for dire circumstances.
Judging the appropriateness/effectiveness of liver transplant on case-by-case basis is critically important (see Contraindications), as outcomes are highly variable.
The Model for End Stage Liver Disease (MELD score) for adults and the Pediatric End Stage Liver Disease (PELD score) for children younger than 12 years old are clinical scoring tools that take various clinical criteria into consideration and are used to assess the need for a liver transplant. Higher scores for each clinical scoring tool indicates a higher severity of liver disease, and thus a greater need for a liver transplant. In those with chronic liver disease, a decompensating event such as hepatic encephalopathy, variceal bleeding, ascites, or spontaneous bacterial peritonitis may also signal a new need for a liver transplant.
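To make the idea of a clinical scoring tool concrete, the sketch below computes a laboratory MELD score from bilirubin, INR and creatinine. The coefficients, value floors and caps, and the 6–40 bounding used here are the ones commonly quoted for the original UNOS MELD formula and are stated as assumptions (later revisions such as MELD-Na differ); the code is illustrative only and not suitable for clinical use.

import math

# Illustrative sketch of a laboratory MELD calculation, using the commonly
# quoted coefficients of the original UNOS formula (assumed here, not
# verified against current allocation policy). Higher scores indicate more
# severe liver disease, as described above.

def meld_score(bilirubin_mg_dl: float, inr: float, creatinine_mg_dl: float,
               on_dialysis: bool = False) -> int:
    # Values below 1.0 are floored at 1.0 so the logarithms stay non-negative.
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    crea = max(creatinine_mg_dl, 1.0)
    # Creatinine is capped at 4.0, and treated as 4.0 for patients on dialysis.
    crea = 4.0 if on_dialysis else min(crea, 4.0)

    raw = (3.78 * math.log(bili)
           + 11.2 * math.log(inr)
           + 9.57 * math.log(crea)
           + 6.43)
    # The score is rounded and bounded to the 6-40 range used for allocation.
    return max(6, min(40, round(raw)))

print(meld_score(bilirubin_mg_dl=2.5, inr=1.8, creatinine_mg_dl=1.2))  # ~18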
Contraindications
Although liver transplantation is the most effective treatment for many forms of end-stage liver disease, the tremendous limitation in allograft (donor) availability and widely variable post-surgical outcomes make case selection critically important. Assessment of a person's transplant eligibility is made by a multi-disciplinary team that includes surgeons, medical doctors, psychologists and other providers.
The first step in evaluation is to determine whether the patient has irreversible liver-based disease which will be cured by getting a new liver. Thus, those with diseases which are primarily based outside the liver or have spread beyond the liver are generally considered poor candidates. Some examples include:
* someone with advanced liver cancer with known or likely spread beyond the liver, or with cancer of any type (other than skin cancers) that cannot be treated successfully without rendering them unsuitable for transplant
* active illicit substance use
* anatomic abnormalities that prevent liver transplantation
* severe heart/lung disease, whether it is primary heart/lung disease, or brought on by the liver disease (unless the team thinks they can still proceed)
* HIV/AIDS, especially if it is not well-managed (some persons with HIV/AIDS that have very low or undetectable viral loads could still be eligible)
Importantly, many contraindications to liver transplantation are considered reversible; a person initially deemed "transplant-ineligible" may later become a favorable candidate if the circumstances change. Some examples include:
* partial treatment of liver cancer, such that risk of spread beyond liver is decreased (for those with primary liver cancer or secondary spread to the liver, the medical team will likely rely heavily on the opinion of the patient's primary provider, the oncologist, and the radiologist)
* cessation of substance use (time period of abstinence is variable)
* improvement in heart function, e.g. by percutaneous coronary intervention or bypass surgery
* treated HIV infection
Other conditions, including hemodynamic instability requiring vasopressor support, large liver cancers or those with invasion to blood vessels, intrahepatic cholangiocarcinoma, frailty, fulminant liver failure with suspected brain injury, alcohol use disorder with recent alcohol consumption, cigarette smoking, inadequate social support, and nonadherence to medical management may disqualify someone from liver transplantation; however, these cases are usually evaluated by the multi-disciplinary transplant team on an individual basis.
Risks/complications
Graft rejection
After a liver transplantation, immune-mediated rejection of the allograft (graft rejection) may happen at any time. Rejection may present with lab findings: elevated AST, ALT, and GGT; abnormal liver function values such as prothrombin time, ammonia level, bilirubin level, and albumin concentration; and abnormal blood glucose. Physical findings may include encephalopathy, jaundice, bruising, and a bleeding tendency. Other nonspecific presentations may include malaise, anorexia, muscle ache, low-grade fever, a slight increase in white blood cell count, and graft-site tenderness.
Three types of graft rejection may occur: hyperacute rejection, acute rejection, and chronic rejection.
* Hyperacute rejection is caused by preformed anti-donor antibodies. It is characterized by the binding of these antibodies to antigens on vascular endothelial cells. Complement activation is involved and the effect is usually profound. Hyperacute rejection happens within minutes to hours after the transplant procedure.
* Acute rejection is mediated by T cells (versus B-cell-mediated hyperacute rejection). It involves direct cytotoxicity and cytokine mediated pathways. Acute rejection is the most common and the primary target of immunosuppressive agents. Acute rejection is usually seen within days or weeks of the transplant.
* Chronic rejection is the presence of any sign or symptom of rejection after one year. The cause of chronic rejection is still unknown, but acute rejection is a strong predictor of chronic rejection.
Biliary complications
Biliary complications include biliary stenosis, biliary leak, and ischemic cholangiopathy. The risk of ischemic cholangiopathy increases with longer durations of cold ischemia time, which is the time that the organ does not receive blood flow (after death/removal until graft placement). Biliary complications are routinely treated with Endoscopic Retrograde Cholangiopancreatography (ERCP), percutaneous drainage, or sometimes re-operation.
Vascular complications
Vascular complications include thrombosis, stenosis, pseudoaneurysm, and rupture of the hepatic artery. Venous complications occur less often compared with arterial complications, and include thrombosis or stenosis of the portal vein, hepatic vein, or vena cava.
Technique
Before transplantation, liver-support therapy might be indicated (bridging-to-transplantation). Artificial liver support like liver dialysis or bioartificial liver support concepts are currently under preclinical and clinical evaluation. Virtually all liver transplants are done in an orthotopic fashion; that is, the native liver is removed and the new liver is placed in the same anatomic location. The transplant operation can be conceptualized as consisting of the hepatectomy (liver removal) phase, the anhepatic (no liver) phase, and the postimplantation phase. The operation is done through a large incision in the upper abdomen. The hepatectomy involves division of all ligamentous attachments to the liver, as well as the common bile duct, hepatic artery, all three hepatic veins and portal vein. Usually, the retrohepatic portion of the inferior vena cava is removed along with the liver, although an alternative technique preserves the recipient's vena cava ("piggyback" technique).
The donor's blood in the liver will be replaced by an ice-cold organ storage solution, such as UW (Viaspan) or HTK, until the allograft liver is implanted. Implantation involves anastomoses (connections) of the inferior vena cava, portal vein, and hepatic artery. After blood flow is restored to the new liver, the biliary (bile duct) anastomosis is constructed, either to the recipient's own bile duct or to the small intestine. The surgery usually takes between five and six hours, but may be longer or shorter due to the difficulty of the operation and the experience of the surgeon.
The large majority of liver transplants use the entire liver from a non-living donor for the transplant, particularly for adult recipients. A major advance in pediatric liver transplantation was the development of reduced size liver transplantation, in which a portion of an adult liver is used for an infant or small child. Further developments in this area included split liver transplantation, in which one liver is used for transplants for two recipients, and living donor liver transplantation, in which a portion of a healthy person's liver is removed and used as the allograft. Living donor liver transplantation for pediatric recipients involves removal of approximately 20% of the liver (Couinaud segments 2 and 3).
A further advance in liver transplantation involves resecting only the lobe of the liver involved by tumor, so that the tumor-free lobe remains within the recipient. This speeds up recovery and shortens the hospital stay to about 5–7 days.
Radiofrequency ablation of the liver tumor can be used as a bridge while awaiting liver transplantation.
Cooling
Between removal from the donor and transplantation into the recipient, the allograft liver is stored in a temperature-cooled preservation solution. The reduced temperature slows the deterioration caused by normal metabolic processes, and the storage solution itself is designed to counteract the unwanted effects of cold ischemia. Although the "static" cold storage method has long been the standard technique, various dynamic preservation methods are under investigation. For example, systems which use a machine to pump blood through the explanted liver (after it is harvested from the body) during transfer have met with some success.
Living donor transplantation
Living donor liver transplantation (LDLT) has emerged in recent decades as a critical surgical option for patients with end stage liver disease, such as cirrhosis and/or hepatocellular carcinoma often attributable to one or more of the following: long-term alcohol use disorder, long-term untreated hepatitis C infection, long-term untreated hepatitis B infection. The concept of LDLT is based on the remarkable regenerative capacities of the human liver and the widespread shortage of cadaveric livers for patients awaiting transplant. In LDLT, a piece of healthy liver is surgically removed from a living person and transplanted into a recipient, immediately after the recipient's diseased liver has been entirely removed.
Historically, LDLT began with terminal pediatric patients, whose parents were motivated to risk donating a portion of their compatible healthy livers to replace their children's failing ones. The first report of successful LDLT was by Silvano Raia at the University of São Paulo Faculty of Medicine in July 1989. It was followed by Christoph Broelsch at the University of Chicago Medical Center in November 1989, when two-year-old Alyssa Smith received a portion of her mother's liver. Surgeons eventually realized that adult-to-adult LDLT was also possible, and now the practice is common in a few reputable medical institutes. It is considered more technically demanding than even standard, cadaveric donor liver transplantation, and also poses the ethical problems underlying the indication of a major surgical operation (hemihepatectomy or related procedure) on a healthy human being. In various case series, the risk of complications in the donor is around 10%, and very occasionally a second operation is needed. Common problems are biliary fistula, gastric stasis and infections; they are more common after removal of the right lobe of the liver. Death after LDLT has been reported at 0% (Japan), 0.3% (USA) and <1% (Europe), with risks likely to decrease further as surgeons gain more experience in this procedure. Since the law was changed to permit altruistic non-directed living organ donations in the UK in 2006, the first altruistic living liver donation took place in Britain in December 2012.
In a typical adult recipient LDLT, 55 to 70% of the liver (the right lobe) is removed from a healthy living donor. The donor's liver will regenerate approaching 100% function within 4–6 weeks, and will almost reach full volumetric size with recapitulation of the normal structure soon thereafter. It may be possible to remove up to 70% of the liver from a healthy living donor without harm in most cases. The transplanted portion will reach full function and the appropriate size in the recipient as well, although it will take longer than for the donor.
Living donors face risks and complications after the surgery. Blood clots and biliary problems can arise in the donor post-operatively, but these issues are remedied fairly easily. Although death is a risk that a living donor must be willing to accept before the surgery, the mortality rate of living donors in the United States is low. The LDLT donor's immune system is diminished while the liver regenerates, so foods that would normally cause only an upset stomach could cause serious illness.
Donor requirements
A CT scan performed for evaluation of a potential donor can reveal, for example, an unusual variation of the hepatic artery in which the left hepatic artery supplies not only the left lobe but also segment 8. Such anatomy makes right lobe donation impossible; even for a left lobe or lateral segment donation, anastomosing the small arteries would be very technically challenging.
Any member of the family, parent, sibling, child, spouse or a volunteer can donate their liver. The criteria for a liver donation include:
* Being in good health
* Having a blood type that matches or is compatible with the recipient's, although some centres now perform blood group incompatible transplants with special immunosuppression protocols.
* Having a charitable desire of donation without financial motivation
* Being between 20 and 60 years old (18 to 60 years old in some places)
* Having an important personal relationship with the recipient
* Being of similar or larger size than the recipient
Before becoming a living donor, the candidate must undergo testing to ensure that they are physically fit and in excellent health, without uncontrolled high blood pressure, liver disease, diabetes, or heart disease. Sometimes CT scans or MRIs are done to image the liver. In most cases, the workup is done in 2–3 weeks.
Complications
Living donor surgery is done at a major center. Very few individuals require any blood transfusions during or after surgery. All potential donors should know there is a 0.5 to 1.0 percent chance of death. Other risks of donating a liver include bleeding, infection, painful incision, possibility of blood clots and a prolonged recovery. The vast majority of donors enjoy complete and full recovery within 2–3 months.
Pediatric transplantation
In children, due to their smaller abdominal cavity, there is only space for a partial segment of liver, usually the left lobe of the donor's liver. This is also known as a "split" liver transplant. There are four anastomoses required for a "split" liver transplant: hepaticojejunostomy (biliary drainage connecting to a Roux limb of jejunum), portal venous anastomosis, hepatic arterial anastomosis, and inferior vena cava anastomosis.
In children, living donor liver transplantation has become widely accepted. The availability of adult parents who are willing to donate a piece of their liver to their children or infants has reduced the number of children who would otherwise have died waiting for a transplant. Having a parent as a donor has also made things much easier for children, because both patients are in the same hospital and can help boost each other's morale.
Benefits
There are several advantages of living liver donor transplantation over cadaveric donor transplantation, including:
* Transplant can be done on an elective basis because the donor is readily available
* There are fewer possibilities for complications and death than there would be while waiting for a cadaveric organ donor
* Because of donor shortages, UNOS has placed limits on cadaveric organ allocation to foreigners who seek medical help in the USA; the availability of living donor transplantation gives these patients another avenue for receiving care in the USA
Screening for donors
Living donor transplantation is a multidisciplinary approach. All living liver donors undergo medical evaluation. Every hospital which performs transplants has dedicated nurses who provide specific information about the procedure and answer questions that families may have. During the evaluation process, the potential donor's confidentiality is assured. Every effort is made to ensure that organ donation is not made under coercion from other family members. The transplant team provides both the donor and the family with thorough counseling and support, which continues until full recovery is made.
All donors are assessed medically to ensure that they can undergo the surgery. Blood type of the donor and recipient must be compatible but not always identical. Other things assessed prior to surgery include the anatomy of the donor liver. However, even with mild variations in blood vessels and bile duct, surgeons today are able to perform transplantation without problems. The most important criterion for a living liver donor is to be in excellent health.
Post-transplant immunosuppression
Like most other allografts, a liver transplant will be rejected by the recipient unless immunosuppressive drugs are used. The immunosuppressive regimens for all solid organ transplants are fairly similar, and a variety of agents are now available. Most liver transplant recipients receive corticosteroids plus a calcineurin inhibitor such as tacrolimus or ciclosporin (also spelled cyclosporine and cyclosporin), plus a purine antagonist such as mycophenolate mofetil. Clinical outcome is better with tacrolimus than with ciclosporin during the first year after liver transplantation. If the patient has a comorbidity such as active hepatitis B, high doses of hepatitis B immunoglobulin are administered.
Due to both the pharmacological immunosuppression and the immunosuppression caused by the underlying liver disease, vaccination against vaccine-preventable diseases is highly recommended before and after liver transplantation. Vaccine hesitancy in transplant recipients is lower than in the general population. Vaccinations are preferably administered to the recipient before the transplant, as post-transplant immunosuppression reduces vaccine effectiveness.
Liver transplantation is unique in that the risk of chronic rejection also decreases over time, although the great majority of recipients need to take immunosuppressive medication for the rest of their lives. It is possible to be slowly weaned off anti-rejection medication, but only in certain cases. It is theorized that the liver may play a yet-unknown role in the maturation of certain cells pertaining to the immune system. At least one study, by Thomas E. Starzl's team at the University of Pittsburgh, demonstrated genotypic chimerism in bone marrow biopsies taken from liver transplant recipients.
Recovery and outcomes
The prognosis following liver transplant is variable, depending on overall health, the technical success of the surgery, and the underlying disease process affecting the liver. There is no exact model to predict survival rates; those with a transplant have a 58% chance of surviving 15 years. Failure of the new liver (primary nonfunction, or PNF) occurs in 10% to 15% of all cases. Many complications contribute to these figures. Early graft failure is probably due to preexisting disease of the donated organ. Other causes include technical flaws during surgery, such as problems with revascularization, that may lead to a nonfunctioning graft.
Additional Information
Overview
* In a liver transplant, a donor liver connects to your portal vein and vena cava to replace your liver.
* You may need a liver transplant if you have liver cancer or liver failure. A liver transplant may save your life.
What is a liver transplant?
A liver transplant is surgery to replace an unhealthy liver with a healthy one. You may need a liver transplant if you have liver failure or liver cancer. Liver transplants are treatments for adults and children.
Liver transplants are the third most common type of organ donation. There were more than 10,000 liver transplants in the U.S. in 2023. Each week, 200 to 300 people (sometimes more) join the liver transplant waitlist. Almost all (94%) liver transplants involve whole livers from deceased donors. About 5% of people receive partial liver transplants from living donors.
Why would someone need a liver transplant?
Your liver is one of the most important internal organs. It manages essential tasks like removing toxins from your blood, metabolizing nutrients and making proteins. In short, you can’t live without a functioning liver. If your liver is failing, a liver transplant could save your life.
How hard is it to get a liver transplant?
It’s not easy. In general, more people are eligible for a liver transplant than there are donor livers. Unfortunately, in the wait for a donor liver, about 16% of people who meet the medical criteria for a liver transplant become too sick to go through surgery or die before they can be matched with a donor liver.
Who is not eligible for a liver transplant?
Not everyone who has liver failure or liver cancer will be a candidate. If you have certain medical conditions, you can’t have a liver transplant. Those include:
* Cancer that’s outside your liver. You may be able to have a liver transplant if cancer treatment cures the condition and follow-up tests show cancer hasn’t come back.
* Congestive heart failure.
* Infections that medication can’t control and that a liver transplant can’t cure.
* Dementia.
* Severe lung diseases.
* Severe pulmonary hypertension.
* Severe, unmanaged mental health disorders with psychosis.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Offline
2321) Bachelor of Science in Nursing
Gist
The BSc Nursing course duration is a four-year undergraduate course that prepares candidates who wish to become registered nurses. BSc Nursing typically includes a combination of classroom instruction and clinical training in various healthcare settings.
Summary
B. Sc. Nursing is a four-year undergraduate programme focused on developing the critical care and advanced thinking skills, proficiency, and values necessary for the practice of professional nursing and midwifery, as mentioned in the National Health Policy 2002. The programme is streamlined to enable students to address the health needs of the nation, society, and individuals. It prepares students to become responsible citizens by following a code of moral values and conduct at all times while fulfilling personal, social, and professional responsibilities in order to respond to national objectives.
B.Sc Nursing also comprises a Post-Basic course which is further divided into 2 categories, viz Regular and Distance. B.Sc Nursing Post-Basic Regular course is of 2-year duration and requires 10+2 education along with General Nursing and Midwifery (G.N.M) whereas distance course is of 3-year duration and requires 10+2 level of education along with G.N.M and 2 years of experience.
Details
The Bachelor of Science in Nursing (BSN, BScN) also known in some countries as a Bachelor of Nursing (BN) or Bachelor of Science (BS) with a Major in Nursing is an academic degree in the science and principles of nursing, granted by an accredited tertiary education provider. The course of study is typically three or four years. The difference in degree designation may relate to the amount of basic science courses required as part of the degree, with BScN and BSN degree curriculums requiring completion of more courses on math and natural sciences that are more typical of BSc degrees (e.g. calculus, physics, chemistry, biology) and BN curriculums more focused on nursing theory, nursing process, and teaching versions of general science topics that are adapted to be more specific and relevant to nursing practice. Nursing school students are generally required to take courses in social and behavioral sciences and liberal arts, including nutrition, anatomy, chemistry, mathematics, and English. In addition to those courses, experience in physical and social sciences, communication, leadership, and critical thinking is required for a bachelor's degree. BSN programs typically last 2–4 years. Someone who holds a BSN can work in private or public medical and surgical hospitals, physician's offices, home health care services, and nursing facilities. Having a BSN can result in more opportunities and better salary than just an associate degree.
The bachelor's degree prepares nurses for a wide variety of professional roles and graduate study. Course work includes nursing science, research, leadership, and related areas that inform the practice of nursing. It also provides the student with general education in math, humanities and social sciences. An undergraduate degree affords opportunities for greater career advancement and higher salary options. It is often a prerequisite for teaching, administrative, consulting and research roles.
A Bachelor of Science in Nursing is not currently required for entry into professional nursing in all countries. In the US, there has been an effort for it to become the entry-level degree since 1964, when the American Nurses Association (ANA) advanced the position that the minimum preparation for beginning professional nursing practice should be a baccalaureate degree education in nursing. The Institute of Medicine (IOM) affirmed in 2010 that nurses should achieve higher levels of education and training through an improved education system that promotes seamless academic progression.
Accreditation
The Commission on Collegiate Nursing Education (CCNE) and the Accreditation Commission for Education in Nursing (ACEN) are the accreditation bodies for Bachelor of Science in Nursing programs in the United States. Both Commissions are officially recognized as national accreditation agencies that ensure quality standards for undergraduate to graduate nursing programs by the United States Secretary of Education.
Accelerated BSN programs
Accelerated Bachelor of Science in Nursing programs allow those who already have a bachelor's degree in a non-nursing field to obtain their nursing degree at an accelerated rate, which is why they are also commonly referred to as "Second Degree Nursing Programs." These programs usually have strict prerequisites because the program coursework focuses solely on nursing. Accelerated BSN programs are typically anywhere from 12 to 24 months.
BSN Completion or "RN to BSN" Programs
These programs are intended specifically for nurses with a diploma or associate degree in nursing who wish to "top up" their current academic qualifications to a Bachelor of Science in Nursing. A majority of these RN to BSN programs are offered online through colleges, universities, or other e-learning providers. In order to keep the programs up to date and relevant to the current healthcare system, the course material is updated regularly with feedback from registered nurses, nurse managers, healthcare professionals and even patients.
BSN entry level into nursing in the future
In 2011, the Institute of Medicine recommended that by 2020, 80 percent of RNs hold a Bachelor of Science in Nursing (BSN) degree. This recommendation appeared in the Institute of Medicine's report on the Future of Nursing and has been followed by a campaign to implement its recommendations. The report's second recommendation focused on increasing the proportion of registered nurses (RNs) with a baccalaureate degree to 80% by 2020. Toward that effort, the report recommends that educational associations, colleges, delivery organizations, governmental organizations, and funders develop the resources necessary to support this goal. These recommendations are consistent with other policy initiatives currently underway; for example, legislation requiring that nurses receive a baccalaureate degree within 10 years of initial licensure has been considered in New York, New Jersey, and Rhode Island.
Many of these recommendations are being driven by recent studies of patient outcomes and nursing education. Hospitals employing higher percentages of BSN-prepared nurses have shown an associated decrease in morbidity, mortality, and failure-to-rescue rates. Increasing the percentage of BSN nurses employed decreases 30-day inpatient mortality and failure-to-rescue rates by 10 percent. Studies that provide this type of evidence-based practice support the ultimate purpose of a more highly educated nursing workforce. They also support the mission of the Texas Board of Nursing (BON or Board), which is to protect and promote the welfare of the people of Texas by ensuring that each person holding a license as a nurse in this state is competent to practice safely.
Many healthcare leaders and institutions have increased expectations for evidence-based practice (EBP). The Institute of Medicine (IOM) aim was for 90% of clinical decisions to be evidence-based by 2020 (IOM, 2010).
Additional Information
Nursing is a rewarding, in-demand profession ideal for anyone who wants to make a difference in health care. Nurses have many career options, including working in clinical or non-clinical settings. One way to achieve a career in nursing is to earn a Bachelor of Science in Nursing (BSN) degree.
While you can pursue various educational pathways in nursing, a BSN degree is the preferred educational qualification by many employers. It is also the base prerequisite for the Master of Science in Nursing (MSN) degree and many advanced certifications you may want to pursue as you progress through your career.
What is a bachelor's in nursing degree?
A Bachelor of Science in Nursing degree, also known as a BSN, is a four-year undergraduate degree intended for learners who want to pursue a career as a registered nurse (RN) or beyond. It’s the foundational undergraduate degree for most careers in clinical nursing. With a BSN, you'll prepare for a career in nursing with foundational skills in pharmacology, anatomy, ethics in health care, and microbiology. BSN degrees have a clinical component where learners complete a set number of clinical hours before graduation. BSN students may also choose to concentrate on a specific field of nursing.
BSN degree programs can be completed online or in person, depending on the school. Some schools also offer bridge programs and accelerated BSN programs that may not take the entire four years to complete. Admittance requirements for a BSN include a high school or associate degree. GPA requirements, coursework, and previous experience in health care vary by program. Be sure to choose a BSN program that is accredited by the Commission on Collegiate Nursing Education (CCNE).
What is the difference between an RN and a BSN?
RN (registered nurse) is a licensure that qualifies you to practice nursing. To become an RN, you need to graduate from an accredited nursing program and pass the National Council Licensure Examination for Registered Nurses (NCLEX-RN).
BSN (Bachelor of Science in Nursing) is a four-year undergraduate degree that prepares students for registered nursing practice. While a BSN is not mandatory for becoming an RN, it is the preferred credential for most nursing employers and opens doors to a wider range of career opportunities.
In short, BSN is a degree, while RN is a license to practice nursing.
How long does it take to earn a bachelor's in nursing?
It takes four years to earn a traditional BSN degree. Other types of BSN degrees may not take a full four years. Accelerated BSN degrees allow students who hold a bachelor's degree in another discipline to earn a BSN in 11 to 18 months, depending on the program. Bridge programs may also be available to RNs who hold an associate degree and want to earn a BSN degree. These programs are called RN to BSN and typically take about the same amount of time as other accelerated programs to complete, between 11 and 18 months.
Typical requirements for a BSN degree program
Requirements vary by BSN program, but typical requirements include GPA, SAT scores, degree program application, personal references, and relevant experiences related to nursing. Some specific admittance requirements that are typical of a BSN degree program include:
* High school diploma or GED
* GPA of 3.00-3.25 or higher (depends on school and program type)
* Successful completion of prerequisite high school and college coursework (chemistry, biology, and other science and math courses)
* Passing the SAT or TEAS (score requirements vary by school)
* Submission of nursing school application, which may include varying components such as an essay, letters of recommendation, and volunteer experience
Research requirements for entry prior to applying. Prepare to explain to the admissions department why you want to become a nurse and how this program can help you with your personal and professional goals. Most programs include an essay component, which gives you the chance to show your personality.
How much does a BS in nursing cost?
The average total cost of tuition to attend a bachelor's in nursing program ranges from $89,556 to over $200,000, with an average annual cost of $30,884, according to the National Center for Education Statistics. Prepare to pay registration, application, and technology fees. Most BSN students pay for their books and supplies, scrubs/uniforms, immunizations and physical examinations, insurance, and room and board if living on campus.
Factors that impact the total tuition cost for a BSN degree include whether the school is in or out of state and whether it is a public or private university. Public schools typically have lower overall tuition costs and more financial aid opportunities. Financial aid opportunities may be available to students through scholarships, government-funded aid, school-funded aid, and military discounts.
Common BSN concentrations
You may be able to choose a specialization or concentration while pursuing a BSN degree. Concentration options vary by school. Choosing to specialize in a certain area of nursing can positively impact your earning potential after graduation. With your BSN degree concentration, you may pursue specific nursing careers that may require additional certification, education, or experience. Here are a few common concentrations to consider:
* Adult health nursing: This concentration focuses on providing nursing care to adults, including assessing and managing chronic illnesses, acute medical conditions, and other health problems.
* Pediatric nursing: You will provide nursing care to children and adolescents, including the assessment and management of common pediatric health conditions.
* Mental health nursing: You'll provide nursing care for patients with mental health conditions, including assessment, intervention, and prevention of mental health problems.
* Obstetrics and gynecology nursing: You'll focus on caring for women during pregnancy, childbirth, and postpartum and addressing their health concerns throughout their lifespans.
* Community health nursing: This concentration focuses on health promotion and disease prevention in the community. You'll learn to work with populations to address public health concerns, such as health education, community outreach, and disease prevention.
* Geriatric nursing: You will provide nursing care to older adults, including assessing and managing the unique health challenges faced by the elderly population.
* Nursing leadership: This concentration focuses on developing the skills and knowledge needed to effectively manage nursing teams, including leadership strategies, budgeting, staffing, and organizational behavior.
* Critical care nursing: You will prepare to provide specialized nursing care to critically ill or injured patients, often in settings such as intensive care units (ICUs). Students in this concentration learn to manage complex medical devices, interpret lab results, and make quick and informed decisions.
* Oncology nursing: You will care for patients with cancer. Students in this concentration learn about cancer biology, treatments, symptom management, and patient education. They also learn to provide compassionate care and support to patients and their families.
* Emergency nursing: This concentration focuses on providing nursing care in emergency situations, such as trauma, cardiac arrest, or other life-threatening conditions. You'll have the opportunity to learn to work in fast-paced and high-stress environments and to assess and stabilize patients quickly.
Common BSN coursework
Coursework as part of a BSN program helps you gain the skills necessary to provide medical care in various settings and understand the health care system. This coursework focuses on the foundational skills every nurse needs, whether working as an RN or specialized nurse. Common core BSN coursework may include:
* Biology
* Chemistry
* Clinical reasoning
* Ethics
* Foundations of nursing practice
* Human development
* Mental health
* Microbiology
* Nursing management
* Nutrition and diet
* Psychology
* Science and technology of nursing
* Statistics
BSN nursing students also complete a set number of clinical hours. The number of hours required depends on the program you attend and ranges from 300 to 700 hours. Where you complete your clinical experience depends on your concentration, among other factors like location and availability.
Benefits of a bachelor's in nursing degree
Nurses are a necessity in health care. BSN degrees are the foundational degree for several jobs in health care, even those outside of nursing, and the first step to pursuing a career in nursing. Nurses are generally well respected in health care and get the opportunity to help others daily. You can create a long-term career in health care and reap the benefits as a BSN degree graduate, which might include:
* Higher earnings
BSN degree holders earn more than associate degree holders in the same position. Eligibility requirements for higher-paying nursing jobs likely include a BSN degree. As you gain experience in nursing, you can pursue higher-paying positions in specialized fields of nursing and even leadership positions. Many of these positions pay more and include additional benefits like bonuses, profit sharing, paid time off, and insurance options.
* Career opportunities
BSN graduates who pass the NCLEX-RN are open to various bachelor in nursing career paths. RNs are needed in many different settings, which provide BSN degree holders the opportunity to work with diverse populations. RNs can work in physicians’ offices, hospitals, nursing homes, or patients' homes. Employers of nurses include nonprofit organizations, government agencies, educational organizations, and more.
Advanced career opportunities in nursing will likely require a BSN degree to enroll in a master’s degree program in nursing or earn advanced certifications. For example, all APRN positions require an MSN degree and certification. Leadership positions also require further education. While a BSN degree is a core requirement before attaining these qualifications, some schools offer direct MSN programs that allow students to earn a master's in nursing with a non-nursing bachelor's degree.
* Job security
Most employers prefer hiring candidates with a BSN degree over an associate in nursing or other qualifications. Hospitals and other health care providers seek highly trained professionals to meet the demands of health care needs in the US. When you earn a BSN degree, you’re improving your chances of being hired and increasing job security.
* Advanced skills
BSN degree programs prepare students for the reality of a nursing career, which calls for a skill set balanced between the technical and personal skills key to professional success. It’s critical to know technical skills like the dynamics of human anatomy, but it’s also important to learn effective patient communication, critical thinking, and leadership skills. The coursework and clinical experience that are part of an accredited BSN degree program focus specifically on the in-demand skills nurses need, meaning BSN graduates leave with a more advanced skill set than graduates of non-nursing or lower-level programs.
* Better patient outcomes
Multiple reports have shown that BSN-educated nurses provide better-quality care to patients. Higher-quality care equates to better patient outcomes, ranging from lower mortality rates to fewer errors. Employers seek BSN graduates for the advanced education tailored to the nursing profession. With a BSN degree diploma comes the knowledge that this candidate has real-world experience in clinical nursing and the training necessary to offer a higher level of care to patients of varying needs.
BSN job outlook
A BSN degree sets you up for a career in the nursing field. When choosing a BSN job, prepare to have many directions to take after graduation. Non-clinical nursing jobs in administration or operations, specialized nursing positions that focus on a certain disorder, condition, or disease, and nursing jobs providing care to a specific population are all possible with a BSN degree.
Most BSN graduates take the NCLEX-RN exam, which means you're officially a registered nurse (RN). This licensure is the prerequisite if you plan to enroll in a master’s program in nursing or gain certain specialized certifications. Most specialized nursing positions beyond an RN require an MSN degree and certification.
The outlook for BSN jobs is optimistic, as the need for medical care is continual. Between 2022 and 2032, the US Bureau of Labor Statistics (BLS) expects a 6 percent growth rate for RNs.
Is a bachelor's in nursing right for you?
If you see yourself working in nursing or health care long-term, a BSN degree can provide you with the qualifications and credentials you’ll need to advance your career in health care. Nurses who gain professional experience can become specialized nurses who deliver babies, administer anesthesia, and even create and implement hospital policies. With so many directions to take your nursing career, a BSN is an assurance that you meet the educational requirements to pursue more advanced degrees and credentials.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Offline
2322) Universe
Gist
The universe is everything. It includes all of space, and all the matter and energy that space contains. It even includes time itself and, of course, it includes you. Earth and the Moon are part of the universe, as are the other planets and their many dozens of moons.
The physical universe is defined as all of space and time (collectively referred to as spacetime) and their contents. Such contents comprise all of energy in its various forms, including electromagnetic radiation and matter, and therefore planets, moons, stars, galaxies, and the contents of intergalactic space.
The Universe is thought to consist of three types of substance: normal matter, 'dark matter' and 'dark energy'. Normal matter consists of the atoms that make up stars, planets, human beings and every other visible object in the Universe.
Summary
Universe is the whole cosmic system of matter and energy of which Earth, and therefore the human race, is a part. Humanity has traveled a long road since societies imagined Earth, the Sun, and the Moon as the main objects of creation, with the rest of the universe being formed almost as an afterthought. Today it is known that Earth is only a small ball of rock in a space of unimaginable vastness and that the birth of the solar system was probably only one event among many that occurred against the backdrop of an already mature universe. This humbling lesson has unveiled a remarkable fact, one that endows the minutest particle in the universe with a rich and noble heritage: events that occurred in the first few minutes of the creation of the universe 13.7 billion years ago turn out to have had a profound influence on the birth, life, and death of galaxies, stars, and planets. Indeed, a line can be drawn from the forging of the matter of the universe in a primal “big bang” to the gathering on Earth of atoms versatile enough to serve as the basis of life. The intrinsic harmony of such a worldview has great philosophical and aesthetic appeal, and it may explain why public interest in the universe has always endured.
The “observable universe” is the region of space that humans can actually or theoretically observe with the aid of technology. It can be thought of as a bubble with Earth at its centre. It is differentiated from the entirety of the universe, which is the whole cosmic system of matter and energy, including the human race. Unlike the observable universe, the universe is possibly infinite and without spatial edges.
Details
The universe is all of space and time and their contents. It comprises all of existence, any fundamental interaction, physical process and physical constant, and therefore all forms of matter and energy, and the structures they form, from sub-atomic particles to entire galactic filaments. Since the early 20th century, the field of cosmology establishes that space and time emerged together at the Big Bang 13.787±0.020 billion years ago and that the universe has been expanding since then. The portion of the universe that can be seen by humans is approximately 93 billion light-years in diameter at present, but the total size of the universe is not known.
Some of the earliest cosmological models of the universe were developed by ancient Greek and Indian philosophers and were geocentric, placing Earth at the center. Over the centuries, more precise astronomical observations led Nicolaus Copernicus to develop the heliocentric model with the Sun at the center of the Solar System. In developing the law of universal gravitation, Isaac Newton built upon Copernicus's work as well as Johannes Kepler's laws of planetary motion and observations by Tycho Brahe.
Further observational improvements led to the realization that the Sun is one of a few hundred billion stars in the Milky Way, which is one of a few hundred billion galaxies in the observable universe. Many of the stars in a galaxy have planets. At the largest scale, galaxies are distributed uniformly and the same in all directions, meaning that the universe has neither an edge nor a center. At smaller scales, galaxies are distributed in clusters and superclusters which form immense filaments and voids in space, creating a vast foam-like structure. Discoveries in the early 20th century have suggested that the universe had a beginning and has been expanding since then.
According to the Big Bang theory, the energy and matter initially present have become less dense as the universe expanded. After an initial accelerated expansion called the inflationary epoch at around 10^-32 seconds, and the separation of the four known fundamental forces, the universe gradually cooled and continued to expand, allowing the first subatomic particles and simple atoms to form. Giant clouds of hydrogen and helium were gradually drawn to the places where matter was most dense, forming the first galaxies, stars, and everything else seen today.
From studying the effects of gravity on both matter and light, it has been discovered that the universe contains much more matter than is accounted for by visible objects: stars, galaxies, nebulae and interstellar gas. This unseen matter is known as dark matter. In the widely accepted ΛCDM cosmological model, dark matter accounts for about 25.8%±1.1% of the mass and energy in the universe, while about 69.2%±1.2% is dark energy, a mysterious form of energy responsible for the acceleration of the expansion of the universe. Ordinary ('baryonic') matter therefore composes only 4.84%±0.1% of the universe. Stars, planets, and visible gas clouds form only about 6% of this ordinary matter.
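As a quick arithmetic check (added here, not taken from the source), the three quoted components account for essentially the whole energy budget: 69.2% + 25.8% + 4.84% ≈ 99.8%, with the small remainder well within the stated uncertainties.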
There are many competing hypotheses about the ultimate fate of the universe and about what, if anything, preceded the Big Bang, while other physicists and philosophers refuse to speculate, doubting that information about prior states will ever be accessible. Some physicists have suggested various multiverse hypotheses, in which the universe might be one among many.
Definition
The physical universe is defined as all of space and time (collectively referred to as spacetime) and their contents. Such contents comprise all of energy in its various forms, including electromagnetic radiation and matter, and therefore planets, moons, stars, galaxies, and the contents of intergalactic space. The universe also includes the physical laws that influence energy and matter, such as conservation laws, classical mechanics, and relativity.
The universe is often defined as "the totality of existence", or everything that exists, everything that has existed, and everything that will exist. In fact, some philosophers and scientists support the inclusion of ideas and abstract concepts—such as mathematics and logic—in the definition of the universe. The word universe may also refer to concepts such as the cosmos, the world, and nature.
Etymology
The word universe derives from the Old French word univers, which in turn derives from the Latin word universus, meaning 'combined into one'. The Latin word 'universum' was used by Cicero and later Latin authors in many of the same senses as the modern English word is used.
Synonyms
A term for universe among the ancient Greek philosophers from Pythagoras onwards was τὸ πᾶν (tò pân) 'the all', defined as all matter and all space, and τὸ ὅλον (tò hólon) 'all things', which did not necessarily include the void. Another synonym was ὁ κόσμος (ho kósmos) meaning 'the world, the cosmos'. Synonyms are also found in Latin authors (totum, mundus, natura) and survive in modern languages, e.g., the German words Das All, Weltall, and Natur for universe. The same synonyms are found in English, such as everything (as in the theory of everything), the cosmos (as in cosmology), the world (as in the many-worlds interpretation), and nature (as in natural laws or natural philosophy).
Chronology and the Big Bang
The prevailing model for the evolution of the universe is the Big Bang theory. The Big Bang model states that the earliest state of the universe was an extremely hot and dense one, and that the universe subsequently expanded and cooled. The model is based on general relativity and on simplifying assumptions such as the homogeneity and isotropy of space. A version of the model with a cosmological constant (Lambda) and cold dark matter, known as the Lambda-CDM model, is the simplest model that provides a reasonably good account of various observations about the universe.
The initial hot, dense state is called the Planck epoch, a brief period extending from time zero to one Planck time unit of approximately 10^-43 seconds. During the Planck epoch, all types of matter and all types of energy were concentrated into a dense state, and gravity—currently the weakest by far of the four known forces—is believed to have been as strong as the other fundamental forces, and all the forces may have been unified. The physics controlling this very early period (including quantum gravity in the Planck epoch) is not understood, so we cannot say what, if anything, happened before time zero. Since the Planck epoch, the universe has been expanding to its present scale, with a very short but intense period of cosmic inflation speculated to have occurred within the first 10^-32 seconds. This initial period of inflation would explain why space appears to be very flat.
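As a quick back-of-the-envelope check (an illustration added here, not part of the source text), the Planck time quoted above follows from combining three fundamental constants, t_P = sqrt(hbar × G / c^5); a few lines of Python reproduce the order of magnitude:

import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # Newtonian gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s

t_planck = math.sqrt(hbar * G / c**5)
print(t_planck)          # about 5.4e-44 s, i.e. of order 10^-43 seconds as quoted above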
Within the first fraction of a second of the universe's existence, the four fundamental forces had separated. As the universe continued to cool from its inconceivably hot state, various types of subatomic particles were able to form in short periods of time known as the quark epoch, the hadron epoch, and the lepton epoch. Together, these epochs encompassed less than 10 seconds of time following the Big Bang. These elementary particles associated stably into ever larger combinations, including stable protons and neutrons, which then formed more complex atomic nuclei through nuclear fusion.
This process, known as Big Bang nucleosynthesis, lasted for about 17 minutes and ended about 20 minutes after the Big Bang, so only the fastest and simplest reactions occurred. About 25% of the protons and all the neutrons in the universe, by mass, were converted to helium, with small amounts of deuterium (a form of hydrogen) and traces of lithium. Any other element was only formed in very tiny quantities. The other 75% of the protons remained unaffected, as hydrogen nuclei.
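A back-of-the-envelope calculation (added here, not from the source) shows where the roughly 25% helium figure comes from. With a neutron-to-proton ratio of about 1:7 when nucleosynthesis began, and essentially all surviving neutrons ending up bound in helium-4, the helium mass fraction is Y ≈ 2(n/p) / (1 + n/p) = (2 × 1/7) / (1 + 1/7) = 2/8 = 0.25, i.e. about 25% by mass. The remaining protons, about 75% of the mass, stay as hydrogen nuclei, in line with the figures quoted above.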
After nucleosynthesis ended, the universe entered a period known as the photon epoch. During this period, the universe was still far too hot for matter to form neutral atoms, so it contained a hot, dense, foggy plasma of negatively charged electrons, neutral neutrinos and positive nuclei. After about 377,000 years, the universe had cooled enough that electrons and nuclei could form the first stable atoms. This is known as recombination for historical reasons; electrons and nuclei were combining for the first time. Unlike plasma, neutral atoms are transparent to many wavelengths of light, so for the first time the universe also became transparent. The photons released ("decoupled") when these atoms formed can still be seen today; they form the cosmic microwave background (CMB).
As the universe expands, the energy density of electromagnetic radiation decreases more quickly than does that of matter because the energy of each photon decreases as it is cosmologically redshifted. At around 47,000 years, the energy density of matter became larger than that of photons and neutrinos, and began to dominate the large scale behavior of the universe. This marked the end of the radiation-dominated era and the start of the matter-dominated era.
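In terms of the cosmic scale factor a (a standard textbook relation added here for clarity, not taken from the source), matter density dilutes with volume as a^-3, while radiation density falls as a^-4, because each photon is also redshifted as space expands. That extra factor of 1/a is why radiation loses out and matter comes to dominate at around 47,000 years.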
In the earliest stages of the universe, tiny fluctuations within the universe's density led to concentrations of dark matter gradually forming. Ordinary matter, attracted to these by gravity, formed large gas clouds and eventually stars and galaxies where the dark matter was most dense, and voids where it was least dense. After around 100–300 million years, the first stars formed, known as Population III stars. These were probably very massive, luminous, non-metallic and short-lived. They were responsible for the gradual reionization of the universe between about 200–500 million years and 1 billion years, and also for seeding the universe with elements heavier than helium, through stellar nucleosynthesis.
The universe also contains a mysterious energy—possibly a scalar field—called dark energy, the density of which does not change over time. After about 9.8 billion years, the universe had expanded sufficiently so that the density of matter was less than the density of dark energy, marking the beginning of the present dark-energy-dominated era. In this era, the expansion of the universe is accelerating due to dark energy.
Physical properties
Of the four fundamental interactions, gravitation is dominant at astronomical length scales. Gravity's effects are cumulative; by contrast, the effects of positive and negative charges tend to cancel one another, making electromagnetism relatively insignificant on astronomical length scales. The remaining two interactions, the weak and strong nuclear forces, decline very rapidly with distance; their effects are confined mainly to sub-atomic length scales.
The universe appears to have much more matter than antimatter, an asymmetry possibly related to CP violation. This imbalance between matter and antimatter is partially responsible for the existence of all matter existing today, since matter and antimatter, if equally produced at the Big Bang, would have completely annihilated each other and left only photons as a result of their interaction. The universe also appears to have neither net momentum nor angular momentum, which follows accepted physical laws if the universe is finite. These laws are Gauss's law and the non-divergence of the stress–energy–momentum pseudotensor.
Size and regions
Due to the finite speed of light, there is a limit (known as the particle horizon) to how far light can travel over the age of the universe. The spatial region from which we can receive light is called the observable universe. The proper distance (measured at a fixed time) between Earth and the edge of the observable universe is 46 billion light-years (14 billion parsecs), making the diameter of the observable universe about 93 billion light-years (28 billion parsecs). Although the distance traveled by light from the edge of the observable universe is close to the age of the universe times the speed of light, 13.8 billion light-years (4.2×10^9 pc), the proper distance is larger because the edge of the observable universe and the Earth have since moved further apart.
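To see how a proper distance of about 46 billion light-years is compatible with a 13.8-billion-year age, one can integrate the light path over the expansion history. The Python sketch below is an illustration only; the ΛCDM parameter values (H0, Ωm, Ωr, ΩΛ) are assumptions chosen to be roughly Planck-2018-like, not figures taken from the text above.

import math

H0 = 67.7 * 1000 / 3.0857e22                   # assumed Hubble constant, km/s/Mpc converted to 1/s
omega_m, omega_r, omega_l = 0.31, 9e-5, 0.69   # assumed matter, radiation, dark-energy fractions
c = 2.998e8                                    # speed of light, m/s

def H(z):
    # Hubble rate at redshift z in a flat Lambda-CDM model
    return H0 * math.sqrt(omega_r * (1 + z)**4 + omega_m * (1 + z)**3 + omega_l)

def comoving_distance_m(z_max=1e9, steps=20000):
    # integrate c*dz/H(z) using the substitution u = ln(1+z) and the midpoint rule
    u_max = math.log(1 + z_max)
    du = u_max / steps
    total = 0.0
    for i in range(steps):
        z = math.exp((i + 0.5) * du) - 1
        total += c * (1 + z) / H(z) * du
    return total

print(comoving_distance_m() / 9.461e24)        # in billions of light-years; roughly 46

Light emitted near the beginning has travelled for 13.8 billion years, but the region it left has since been carried out to a present-day proper distance of roughly 46 billion light-years by the expansion.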
For comparison, the diameter of a typical galaxy is 30,000 light-years (9,198 parsecs), and the typical distance between two neighboring galaxies is 3 million light-years (919.8 kiloparsecs). As an example, the Milky Way is roughly 100,000–180,000 light-years in diameter, and the nearest sister galaxy to the Milky Way, the Andromeda Galaxy, is located roughly 2.5 million light-years away.
Because humans cannot observe space beyond the edge of the observable universe, it is unknown whether the size of the universe in its totality is finite or infinite. Estimates suggest that the whole universe, if finite, must be more than 250 times larger than a Hubble sphere. Some disputed estimates for the total size of the universe, if finite, reach as high as 10^(10^(10^122)) megaparsecs, as implied by a suggested resolution of the No-Boundary Proposal.
Additional Information
The universe is everything. It includes all of space, and all the matter and energy that space contains. It even includes time itself and, of course, it includes you.
Earth and the Moon are part of the universe, as are the other planets and their many dozens of moons. Along with asteroids and comets, the planets orbit the Sun. The Sun is one among hundreds of billions of stars in the Milky Way galaxy, and most of those stars have their own planets, known as exoplanets.
The Milky Way is but one of billions of galaxies in the observable universe — all of them, including our own, are thought to have supermassive black holes at their centers. All the stars in all the galaxies and all the other stuff that astronomers can’t even observe are all part of the universe. It is, simply, everything.
Though the universe may seem a strange place, it is not a distant one. Wherever you are right now, outer space is only 62 miles (100 kilometers) away. Day or night, whether you’re indoors or outdoors, asleep, eating lunch or dozing off in class, outer space is just a few dozen miles above your head. It’s below you too. About 8,000 miles (12,800 kilometers) below your feet — on the opposite side of Earth — lurks the unforgiving vacuum and radiation of outer space.
In fact, you’re technically in space right now. Humans say “out in space” as if it’s there and we’re here, as if Earth is separate from the rest of the universe. But Earth is a planet, and it’s in space and part of the universe just like the other planets. It just so happens that things live here and the environment near the surface of this particular planet is hospitable for life as we know it. Earth is a tiny, fragile exception in the cosmos. For humans and the other things living on our planet, practically the entire cosmos is a hostile and merciless environment.
How old is Earth?
Our planet, Earth, is an oasis not only in space, but in time. It may feel permanent, but the entire planet is a fleeting thing in the lifespan of the universe. For nearly two-thirds of the time since the universe began, Earth did not even exist. Nor will it last forever in its current state. Several billion years from now, the Sun will expand, swallowing Mercury and Venus, and filling Earth’s sky. It might even expand large enough to swallow Earth itself. It’s difficult to be certain. After all, humans have only just begun deciphering the cosmos.
While the distant future is difficult to accurately predict, the distant past is slightly less so. By studying the radioactive decay of isotopes on Earth and in asteroids, scientists have learned that our planet and the solar system formed around 4.6 billion years ago.
How old is the universe?
The universe, on the other hand, appears to be about 13.8 billion years old. Scientists arrived at that number by measuring the ages of the oldest stars and the rate at which the universe expands. They also measured the expansion by observing the Doppler shift in light from galaxies, almost all of which are traveling away from us and from each other. The farther the galaxies are, the faster they're traveling away. One might expect gravity to slow the galaxies' motion away from one another, but instead they're speeding up, and scientists don't know why. In the distant future, the galaxies will be so far away that their light will not be visible from Earth.
Put another way, the matter, energy and everything in the universe (including space itself) was more compact last Saturday than it is today. The same can be said about any time in the past — last year, a million years ago, a billion years ago. But the past doesn’t go on forever.
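A rough consistency check on that 13.8-billion-year figure is simply to invert the measured expansion rate. The sketch below assumes a Hubble constant of about 70 km/s per megaparsec, a commonly quoted round value that is not taken from this text, and it ignores the detailed expansion history.

# Back-of-the-envelope "Hubble time": age scale ~ 1 / H0.
# Assumes H0 ≈ 70 km/s/Mpc (an assumed round value, not from this text).

H0_km_s_per_Mpc = 70.0
KM_PER_MPC = 3.086e19        # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7

H0_per_second = H0_km_s_per_Mpc / KM_PER_MPC        # expansion rate in 1/s
hubble_time_years = 1.0 / H0_per_second / SECONDS_PER_YEAR

print(f"Hubble time ≈ {hubble_time_years / 1e9:.1f} billion years")
# Prints roughly 14 billion years, the same order as the measured 13.8-billion-year age.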
By measuring the speed of galaxies and their distances from us, scientists have found that if we could go back far enough, before galaxies formed or stars began fusing hydrogen into helium, things were so close together and hot that atoms couldn’t form and photons had nowhere to go. A bit farther back in time, everything was in the same spot. Or really the entire universe (not just the matter in it) was one spot.
Don't spend too much time planning a mission to visit the spot where the universe was born, though: a person cannot visit the place where the Big Bang happened. It's not that the universe was a dark, empty space and an explosion happened in it from which all matter sprang forth. The universe didn't exist. Space didn't exist. Time is part of the universe, so it didn't exist either; time, too, began with the Big Bang. Space itself expanded from a single point to the enormous cosmos as the universe expanded over time.
What is the universe made of?
The universe contains all the energy and matter there is. Much of the observable matter in the universe takes the form of individual atoms of hydrogen, which is the simplest atomic element, made of only a proton and an electron (if the atom also contains a neutron, it is instead called deuterium). Two or more atoms sharing electrons is a molecule. Many trillions of atoms together is a dust particle. Smoosh a few tons of carbon, silica, oxygen, ice, and some metals together, and you have an asteroid. Or collect 333,000 Earth masses of hydrogen and helium together, and you have a Sun-like star.
For the sake of practicality, humans categorize clumps of matter based on their attributes. Galaxies, star clusters, planets, dwarf planets, rogue planets, moons, rings, ringlets, comets, meteorites, raccoons — they’re all collections of matter exhibiting characteristics different from one another but obeying the same natural laws.
Scientists have begun tallying those clumps of matter and the resulting numbers are pretty wild. Our home galaxy, the Milky Way, contains at least 100 billion stars, and the observable universe contains at least 100 billion galaxies. If galaxies were all the same size, that would give us 10 thousand billion billion (or 10 sextillion) stars in the observable universe.
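The arithmetic behind that headline figure is a single multiplication; the sketch below just replays it with the rough per-galaxy and galaxy counts quoted above (both are order-of-magnitude estimates, not precise measurements).

# Rough star count: (stars per galaxy) x (number of galaxies).
stars_per_galaxy = 100e9   # at least 100 billion, the Milky Way figure above
galaxies = 100e9           # at least 100 billion galaxies in the observable universe

total_stars = stars_per_galaxy * galaxies
print(f"{total_stars:.0e} stars")   # 1e+22, i.e. 10 thousand billion billion (10 sextillion)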
But the universe also seems to contain a bunch of matter and energy that we can't see or directly observe. All the stars, planets, comets, sea otters, black holes and dung beetles together represent less than 5 percent of the stuff in the universe. About 27 percent of the universe is dark matter, and 68 percent is dark energy, neither of which is even remotely understood. The universe as we understand it wouldn't work if dark matter and dark energy didn't exist, and they're labeled "dark" because scientists can't seem to directly observe them. At least not yet.
How has our view of the universe changed over time?
Human understanding of what the universe is, how it works and how vast it is has changed over the ages. For countless lifetimes, humans had little or no means of understanding the universe. Our distant ancestors instead relied upon myth to explain the origins of everything. Because our ancestors themselves invented them, the myths reflect human concerns, hopes, aspirations or fears rather than the nature of reality.
Several centuries ago, however, humans began to apply mathematics, writing and new investigative principles to the search for knowledge. Those principles were refined over time, as were scientific tools, eventually revealing hints about the nature of the universe. Only a few hundred years ago, when people began systematically investigating the nature of things, the word “scientist” didn’t even exist (researchers were instead called “natural philosophers” for a time). Since then, our knowledge of the universe has repeatedly leapt forward. It was only about a century ago that astronomers first observed galaxies beyond our own, and only a half-century has passed since humans first began sending spacecraft to other worlds.
In the span of a single human lifetime, space probes have voyaged to the outer solar system and sent back the first up-close images of the four giant outermost planets and their countless moons; rovers wheeled along the surface on Mars for the first time; humans constructed a permanently crewed, Earth-orbiting space station; and the first large space telescopes delivered jaw-dropping views of more distant parts of the cosmos than ever before. In the early 21st century alone, astronomers discovered thousands of planets around other stars, detected gravitational waves for the first time and produced the first image of a black hole.
With ever-advancing technology and knowledge, and no shortage of imagination, humans continue to lay bare the secrets of the cosmos. New insights and inspired notions aid in this pursuit, and also spring from it. We have yet to send a space probe to even the nearest of the billions upon billions of other stars in the galaxy. Humans haven’t even explored all the worlds in our own solar system. In short, most of the universe that can be known remains unknown.
The universe is nearly 14 billion years old, our solar system is 4.6 billion years old, life on Earth has existed for maybe 3.8 billion years, and humans have been around for only a few hundred thousand years. In other words, the universe has existed roughly 56,000 times longer than our species has. By that measure, almost everything that’s ever happened did so before humans existed. So of course we have loads of questions — in a cosmic sense, we just got here.
Our first few decades of exploring our own solar system are merely a beginning. From here, just one human lifetime from now, our understanding of the universe and our place in it will have undoubtedly grown and evolved in ways we can today only imagine.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Offline
2323) Peripheral
Gist
A peripheral is a piece of equipment that is connected to a computer, for example a printer.
A peripheral device, or simply peripheral, is an auxiliary hardware device that a computer uses to transfer information externally.
There are many different peripheral devices, but they fall into three general categories:
* Input devices, such as a mouse and a keyboard.
* Output devices, such as a monitor and a printer.
* Storage devices, such as a hard drive or flash drive.
Summary
A peripheral device, or simply peripheral, is an auxiliary hardware device that a computer uses to transfer information externally.[1] A peripheral is a hardware component that is accessible to and controlled by a computer but is not a core component of the computer.
A peripheral can be categorized based on the direction in which information flows relative to the computer:
* The computer receives data from an input device; examples: mouse, keyboard, scanner, game controller, microphone and webcam
* The computer sends data to an output device; examples: monitor, printer, headphones, and speakers
* The computer sends and receives data via an input/output device; examples: storage device (such as disk drive, solid-state drive, USB flash drive, memory card and tape drive), modem, router, gateway and network adapter
Many modern electronic devices, such as Internet-enabled digital watches, video game consoles, smartphones, and tablet computers, have interfaces for use as a peripheral.
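As a minimal illustration of the direction-of-flow classification above, the sketch below sorts a few of the listed examples into input, output, and input/output categories. The code (including the DataFlow and PERIPHERALS names) is a hypothetical illustration only, not part of any real operating-system or driver API.

# Minimal sketch: classifying peripherals by the direction of data flow.
from enum import Enum

class DataFlow(Enum):
    INPUT = "computer receives data"
    OUTPUT = "computer sends data"
    INPUT_OUTPUT = "computer sends and receives data"

PERIPHERALS = {
    "keyboard": DataFlow.INPUT,
    "mouse": DataFlow.INPUT,
    "webcam": DataFlow.INPUT,
    "monitor": DataFlow.OUTPUT,
    "printer": DataFlow.OUTPUT,
    "usb flash drive": DataFlow.INPUT_OUTPUT,
    "network adapter": DataFlow.INPUT_OUTPUT,
}

for device, flow in PERIPHERALS.items():
    print(f"{device:>16}: {flow.value}")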
Details
A peripheral device is any of various devices (including sensors) used to enter information and instructions into a computer for storage or processing and to deliver the processed data to a human operator or, in some cases, a machine controlled by the computer. Such devices make up the peripheral equipment of modern digital computer systems.
Peripherals are commonly divided into three kinds: input devices, output devices, and storage devices (which partake of the characteristics of the first two). An input device converts incoming data and instructions into a pattern of electrical signals in binary code that are comprehensible to a digital computer. An output device reverses the process, translating the digitized signals into a form intelligible to the user. At one time punched-card and paper-tape readers were extensively used for inputting, but these have now been supplanted by more efficient devices.
Input devices include typewriter-like keyboards; handheld devices such as the mouse, trackball, joystick, trackpad, and special pen with pressure-sensitive pad; microphones, webcams, and digital cameras. They also include sensors that provide information about their environment—temperature, pressure, and so forth—to a computer. Another direct-entry mechanism is the optical laser scanner (e.g., scanners used with point-of-sale terminals in retail stores) that can read bar-coded data or optical character fonts.
Output equipment includes video display terminals, ink-jet and laser printers, loudspeakers, headphones, and devices such as flow valves that control machinery, often in response to computer processing of sensor input data. Some devices, such as video display terminals and USB hubs, may provide both input and output. Other examples are devices that enable the transmission and reception of data between computers—e.g., modems and network interfaces.
Most auxiliary storage devices, such as CD-ROM and DVD drives, flash memory drives, and external disk drives, also double as input/output devices (see computer memory). Even devices such as smartphones, tablet computers, and wearable devices like fitness trackers and smartwatches can be considered peripherals, albeit ones that can function independently.
Various standards for connecting peripherals to computers exist. For example, serial advanced technology attachment (SATA) is the most common interface, or bus, for magnetic disk drives. A bus (also known as a port) can be either serial or parallel, depending on whether the data path carries one bit at a time (serial) or many at once (parallel). Serial connections, which use relatively few wires, are generally simpler than parallel connections. Universal serial bus (USB) is a common serial bus.
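The serial-versus-parallel distinction comes down to how many bits move per clock tick. The toy sketch below is purely illustrative and does not model any real bus protocol or its signaling.

# Toy illustration of serial versus parallel transfer of a single byte.
# A serial bus shifts one bit per tick over one data line; a parallel bus
# moves all eight bits at once over eight lines.
byte = 0b10110010

serial_ticks = [(byte >> i) & 1 for i in range(8)]      # 8 ticks, 1 line
parallel_ticks = [[(byte >> i) & 1 for i in range(8)]]  # 1 tick, 8 lines

print("serial  :", serial_ticks, "-> 8 ticks on 1 line")
print("parallel:", parallel_ticks, "-> 1 tick on 8 lines")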
Additional Information
Peripheral devices, however, are generally not essential for the computer to perform its basic tasks; they can be thought of as an enhancement to the user's experience. A peripheral device is a device that is connected to a computer system but is not part of the core computer system architecture. In everyday usage, the term peripheral is often applied more loosely to any device external to the computer case.
What Does Peripheral Device Mean?
A Peripheral Device is defined as a device that provides input/output functions for a computer and serves as an auxiliary computer device without computing-intensive functionality.
A peripheral device is also called a peripheral, computer peripheral, input-output device, or I/O device.
Classification of Peripheral devices
Peripheral devices are generally classified into the four basic categories given below:
1. Input Devices:
An input device converts incoming data and instructions into a pattern of electrical signals in binary code that a digital computer can process. Examples:
Keyboard, mouse, scanner, microphone, etc.
* Keyboard: A keyboard is an input device that allows users to enter text and commands into a computer system.
* Mouse: A mouse is an input device that allows users to control the cursor on a computer screen.
* Scanner: A scanner is an input device that allows users to convert physical documents and images into digital files.
* Microphone: A microphone is an input device that allows users to record audio.
2. Output Devices:
An output device reverses the input process, translating the digitized signals into a form intelligible to the user. Output devices are also used for sending data from one computer system to another. For some time, punched-card and paper-tape readers were extensively used for input, but these have now been supplanted by more efficient devices.
Example:
Monitors, headphones, printers, etc.
* Monitor: A monitor is an output device that displays visual information from a computer system.
* Printer: A printer is an output device that produces physical copies of documents or images.
* Speaker: A speaker is an output device that produces audio.
3. Storage Devices:
Storage devices are used to store the data and programs that the system needs in order to perform its operations. Because data flows both to and from them, storage devices also behave as input/output devices.
Example:
Hard disk, magnetic tape, flash memory, etc.
* Hard Drive: A hard drive is a storage device that stores data and files on a computer system.
* USB Drive: A USB drive is a small, portable storage device that connects to a computer system to provide additional storage space.
* Memory Card: A memory card is a small, portable storage device that is commonly used in digital cameras and smartphones.
* External Hard Drive: An external hard drive is a storage device that connects to a computer system to provide additional storage space.
4. Communication Devices:
Communication devices are used to connect a computer system to other devices or networks. Examples of communication devices include:
* Modem: A modem is a communication device that allows a computer system to connect to the internet.
* Network Card: A network card is a communication device that allows a computer system to connect to a network.
* Router: A router is a communication device that allows multiple devices to connect to a network.
Advantages of Peripheral Devices
Peripheral devices provide additional features that make operating the system easier. Their main advantages are given below:
* They make it easy to provide input to the computer.
* They deliver output in a specific, usable form.
* They provide storage for information and data.
* They improve the overall efficiency of the system.
FAQs on Peripheral Devices in Computer Organization
Q1. What are 6 examples of a peripheral device?
Answer- The six most commonly used peripheral devices are:
* Printer
* Scanner
* Keyboard
* Mouse
* Tape device
* Microphone
Q2. Is RAM a peripheral device?
Answer- No, RAM is not a peripheral device; it is an internal device. An internal device processes data or executes programs and is installed inside the computer, unlike external peripheral devices.
Q3. What is the difference between CPU and peripheral devices?
Answer- The CPU is a core hardware component that performs the computer's processing, whereas peripheral hardware is connected to the computer to extend its capabilities. Peripheral hardware includes graphics cards, external hard drives, pen drives, USB devices, and other devices.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Offline
2324) Wheel
Gist
A wheel is a circular frame or disk arranged to revolve on an axis, as on or in vehicles or machinery, or any machine, apparatus, instrument, etc., shaped like this or having a circular frame, disk, or revolving drum as an essential feature: a potter's wheel, a roulette wheel, a spinning wheel.
i) A wheel is a circular frame or disk arranged to revolve on an axis, as on or in vehicles or machinery.
ii) A wheel is any machine, apparatus, instrument, etc., shaped like this or having a circular frame, disk, or revolving drum as an essential feature.
Summary
Without the wheel most of the world’s work would stop. Automobiles, trains, streetcars, farm machines, wagons, and nearly all factory and mine equipment would be useless. On land, loads would be moved only by carrying them or by using sledges or the backs of animals.
No one knows when the wheel was invented or who invented it, but the wheel was evidently the result of long development. A Sumerian pictograph, dated about 3500 bc, shows a sledge equipped with wheels. The use of a wheel (turntable) for pottery making had also developed in Mesopotamia by 3500 bc.
The pictures show the steps in the wheel’s development. Early in history it was discovered that a heavy load could be moved rather easily if a roller was placed under it. It was also found that placing runners under a load made it easier to drag. Thus the sledge was invented.
Combining the roller and the sledge for heavy loads is believed to have been the next step. As the sledge moved forward over the first roller, a second roller was placed under the front end to carry the load when it moved off the first roller. A model of a sledge with such rollers is in the Smithsonian Institution.
After long use the sledge runners wear grooves in the roller. It was then learned that these deep grooves enabled the sledge to move forward a longer distance before the roller required shifting. The principle behind the grooved rollers can be explained by a simple example. If the circumference of a roller is 4 feet (1.2 meters), but the grooves are worn until their circumference is only 2 feet (0.6 meter), then for each turn the roller would move 4 feet over the ground. The sledge runners, however, would move forward through the grooves only 2 feet; the sledge therefore gains on the roller by just 2 feet per turn instead of 4, so it travels farther before the roller must be shifted to the front again.
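To follow the numbers, here is a small sketch of the same arithmetic. It assumes the roller rolls without slipping both on the ground and against the runners, which is my reading of the example rather than something stated explicitly in the text.

# Sketch of the grooved-roller arithmetic: per revolution, the roller advances
# over the ground by its outer circumference, while the sledge advances by the
# outer circumference PLUS the groove circumference, so the sledge overtakes
# the roller only by the groove circumference.

def per_revolution(outer_circumference_ft, groove_circumference_ft):
    roller_advance = outer_circumference_ft
    sledge_advance = outer_circumference_ft + groove_circumference_ft
    overtake = sledge_advance - roller_advance   # how far the sledge gains on the roller
    return roller_advance, sledge_advance, overtake

# Plain roller: the runners ride on the full 4 ft circumference.
print(per_revolution(4.0, 4.0))   # (4.0, 8.0, 4.0) -> sledge gains 4 ft per turn
# Worn, grooved roller from the example: 2 ft groove circumference.
print(per_revolution(4.0, 2.0))   # (4.0, 6.0, 2.0) -> sledge gains only 2 ft per turn

Because the sledge gains on a grooved roller more slowly, it rolls forward farther before the roller drops out behind and has to be carried to the front again.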
The next step was the change of the roller into a wheel. The wood between the grooves of the roller has been cut away to make an axle, and wooden pegs have been driven into the runners on each side of the axle. The runners can no longer roll forward. When the wheels turn, the axle revolves in the space between the pegs. This makes a primitive cart. The last picture shows how the cart was improved. In place of the pegs, holes for the axle were bored through the frame of the cart. Axle and wheels were now made separately. The wheels were simply sections cut from a log. At first, the wheels were firmly fixed on the axle, and it was the axle that turned in the holes of the cart frame. Grease may have been used to reduce friction. Later, the axle was fastened so that it could not turn, and the wheels revolved on its ends. When the wheels revolve on a fixed axle, it is easier to make turns. Both wheels turn, but the outside wheel simply revolves more rapidly than the inside one.
By the time that wheels were made separate from the axle, the idea of the wheel was fully developed. What remained to be done was to improve the structure of the wheel itself. The drawings show various ways of building up wheels to make them stronger and longer-wearing than the plain slice from a log.
A wheel is composed of three essential elements—hub, spokes, and rim. The hub is made long to provide a bearing that will not wobble on the axle. The spokes are separately made. One end of each spoke is fastened into a socket in the hub, the other end into a socket in the rim. The rim itself is made of six curved pieces of wood (called fellies or felloes). Together they form a complete circle. The ends of these fellies are firmly joined with straps, dowel pins, or lap joints.
This type of wheel made its appearance in Egyptian chariots of about 2000 bc. At this time, wheels for carrying heavy loads were still of the solid type. The spoked wheel was not strong enough until men learned to bind the rim and hold the fellies together with overlapping strips of metal serving as a tire. And many centuries passed before the spoked wheel reached its maximum strength with a tire made in one piece—a hoop of iron or steel, heated red-hot and shrunk on to the rim of the wheel as it cooled.
The Assyrians probably kept pace with the Egyptians in the use of the wheel. The Greeks picked up the idea for wheels from Egypt and added a few improvements. The Romans developed the greatest variety of wheeled vehicles. They had chariots for war, hunting, and racing, two-wheeled farm carts, covered carriages, and fashionable gigs, heavy four-wheeled freight wagons and passenger coaches. Until the modern inventions of pneumatic rubber tires as well as ball and roller bearings, there had been few improvements in the wheel itself since ancient Roman days.
No trace has been found of wheels in the New World before the coming of Europeans. Why the ingenious Incas and Aztecs failed to make this simple invention no one can tell. The fact that those native peoples had no draft animals may be part of the explanation.
Details
A wheel is a rotating component (typically circular in shape) that is intended to turn on an axle bearing. The wheel is one of the key components of the wheel and axle which is one of the six simple machines. Wheels, in conjunction with axles, allow heavy objects to be moved easily facilitating movement or transportation while supporting a load, or performing labor in machines. Wheels are also used for other purposes, such as a ship's wheel, steering wheel, potter's wheel, and flywheel.
Common examples can be found in transport applications. A wheel reduces friction by facilitating motion by rolling together with the use of axles. In order for wheels to rotate, a moment needs to be applied to the wheel about its axis, either by way of gravity or by the application of another external force or torque.
Terminology
The English word wheel comes from the Old English word hwēol, from Proto-Germanic *hwehwlaz, from Proto-Indo-European *kwékwlos, an extended form of the root *kwel- 'to revolve, move around'. Cognates within Indo-European include Icelandic hjól 'wheel, tyre', Greek kúklos, and Sanskrit chakra, the last two both meaning 'circle' or 'wheel'.
History
Several, mainly older, sources credited Mesopotamian civilization with the invention of the wheel. Research in the 2000s suggests that the wheel was invented independently in Eastern Europe, with the oldest potter's wheel and oldest wheels for vehicles found in the Cucuteni–Trypillia culture, which predate Mesopotamian finds by several hundred years. In 2024, a team of researchers reported what may be the earliest evidence of wheel-like stones, about 12,000 years old, in what is now Israel.
The invention of the solid wooden disk wheel falls into the late Neolithic, and may be seen in conjunction with other technological advances that gave rise to the early Bronze Age. This implies the passage of several wheel-less millennia after the invention of agriculture and of pottery, during the Aceramic Neolithic.
* 4500–3300 BCE (Copper Age): invention of the potter's wheel; earliest solid wooden wheels (disks with a hole for the axle); earliest wheeled vehicles
* 3300–2200 BCE (Early Bronze Age)
* 2200–1550 BCE (Middle Bronze Age): invention of the spoked wheel and the chariot; domestication of the horse
The Halaf culture of 6500–5100 BCE is sometimes credited with the earliest depiction of a wheeled vehicle, but there is no evidence of Halafians using either wheeled vehicles or pottery wheels. Potter's wheels are thought to have been used in the 4th millennium BCE in the Middle East. The oldest surviving example of a potter's wheel was thought to be one found in Ur (modern day Iraq) dating to approximately 3100 BCE. However, a potter's wheel found in western Ukraine, of the Cucuteni–Trypillia culture, dates to the middle of the 5th millennium BCE which pre-dates the earliest use of the potter's wheel in Mesopotamia. Wheels of uncertain dates have been found in the Indus Valley civilization of the late 4th millennium BCE covering areas of present-day India and Pakistan.
The oldest indirect evidence of wheeled movement was found in the form of miniature clay wheels north of the Black Sea before 4000 BCE. From the middle of the 4th millennium BCE onward, the evidence is condensed throughout Europe in the form of toy cars, depictions, or ruts, with the oldest find in Northern Germany dating back to around 3400 BCE. In Mesopotamia, depictions of wheeled wagons found on clay tablet pictographs at the Eanna district of Uruk, in the Sumerian civilization are dated to c. 3500–3350 BCE. In the second half of the 4th millennium BCE, evidence of wheeled vehicles appeared near-simultaneously in the Northern (Maykop culture) and South Caucasus and Eastern Europe (Cucuteni-Trypillian culture).
Depictions of a wheeled vehicle appeared between 3631 and 3380 BCE in the Bronocice clay pot excavated in a Funnelbeaker culture settlement in southern Poland. In nearby Olszanica, a 2.2 m wide door was constructed for wagon entry; this barn was 40 m long with three doors, dated to 5000 BCE, and belonged to the neolithic Linear Pottery culture. Surviving evidence of a wheel-axle combination, from Stare Gmajne near Ljubljana in Slovenia (Ljubljana Marshes Wooden Wheel), is dated within two standard deviations to 3340–3030 BCE, the axle to 3360–3045 BCE. Two types of early Neolithic European wheel and axle are known: a circumalpine type of wagon construction (the wheel and axle rotate together, as in Ljubljana Marshes Wheel), and that of the Baden culture in Hungary (axle does not rotate). They both are dated to c. 3200–3000 BCE. Some historians believe that there was a diffusion of the wheeled vehicle from the Near East to Europe around the mid-4th millennium BCE.
Early wheels were simple wooden disks with a hole for the axle. Some of the earliest wheels were made from horizontal slices of tree trunks. Because of the uneven structure of wood, a wheel made from a horizontal slice of a tree trunk will tend to be inferior to one made from rounded pieces of longitudinal boards.
The spoked wheel was invented more recently and allowed the construction of lighter and swifter vehicles. The earliest known examples of wooden spoked wheels are in the context of the Sintashta culture, dating to c. 2000 BCE (Krivoye Lake). Soon after this, horse cultures of the Caucasus region used horse-drawn spoked-wheel war chariots for the greater part of three centuries. They moved deep into the Greek peninsula where they joined with the existing Mediterranean peoples to give rise, eventually, to classical Greece after the breaking of Minoan dominance and consolidations led by pre-classical Sparta and Athens. Celtic chariots introduced an iron rim around the wheel in the 1st millennium BCE.
In China, wheel tracks dating to around 2200 BCE have been found at Pingliangtai, a site of the Longshan Culture. Similar tracks were also found at Yanshi, a city of the Erlitou culture, dating to around 1700 BCE. The earliest evidence of spoked wheels in China comes from Qinghai, in the form of two wheel hubs from a site dated between 2000 and 1500 BCE. Wheeled vehicles were introduced to China from the west.
In Britain, a large wooden wheel, measuring about 1 m (3.3 ft) in diameter, was uncovered at the Must Farm site in East Anglia in 2016. The specimen, dating from 1,100 to 800 BCE, represents the most complete and earliest of its type found in Britain. The wheel's hub is also present. A horse's spine found nearby suggests the wheel may have been part of a horse-drawn cart. The wheel was found in a settlement built on stilts over wetland, indicating that the settlement had some sort of link to dry land.
Although large-scale use of wheels did not occur in the Americas prior to European contact, numerous small wheeled artifacts, identified as children's toys, have been found in Mexican archeological sites, some dating to approximately 1500 BCE. Some argue that the primary obstacle to large-scale development of the wheel in the Americas was the absence of domesticated large animals that could be used to pull wheeled carriages. The closest relative of cattle present in Americas in pre-Columbian times, the American bison, is difficult to domesticate and was never domesticated by Native Americans; several horse species existed until about 12,000 years ago, but ultimately became extinct. The only large animal that was domesticated in the Western hemisphere, the llama, a pack animal, was not physically suited to use as a draft animal to pull wheeled vehicles, and use of the llama did not spread far beyond the Andes by the time of the arrival of Europeans.
On the other hand, Mesoamericans never developed the wheelbarrow, the potter's wheel, nor any other practical object with a wheel or wheels. Although present in a number of toys, very similar to those found throughout the world and still made for children today ("pull toys"), the wheel was never put into practical use in Mesoamerica before the 16th century. Possibly the closest the Mayas came to the utilitarian wheel is the spindle whorl, and some scholars believe that these toys were originally made with spindle whorls and spindle sticks as "wheels" and "axles".
Aboriginal Australians traditionally used circular discs rolled along the ground for target practice.
Nubians from after about 400 BCE used wheels for spinning pottery and as water wheels. It is thought that Nubian waterwheels may have been ox-driven. It is also known that Nubians used horse-drawn chariots imported from Egypt.
Starting from the 18th century in West Africa, wheeled vehicles were mostly used for ceremonial purposes in places like Dahomey. With the exception of Ethiopia and Somalia, the wheel was barely used for transportation in Sub-Saharan Africa well into the 19th century.
The spoked wheel was in continued use without major modification until the 1870s, when wire-spoked wheels and pneumatic tires were invented.[46] Pneumatic tires can greatly reduce rolling resistance and improve comfort. Wire spokes are under tension, not compression, making it possible for the wheel to be both stiff and light. Early radially-spoked wire wheels gave rise to tangentially-spoked wire wheels, which were widely used on cars into the late 20th century. Cast alloy wheels are now more commonly used; forged alloy wheels are used when weight is critical.
The invention of the wheel has also been important for technology in general, important applications including the water wheel, the cogwheel (see also antikythera mechanism), the spinning wheel, and the astrolabe or torquetum. More modern descendants of the wheel include the propeller, the jet engine, the flywheel (gyroscope) and the turbine.
Mechanics and function
A wheeled vehicle requires much less work to move than simply dragging the same weight. The low resistance to motion is explained by the fact that the frictional work done is no longer at the surface that the vehicle is traversing, but in the bearings. In the simplest and oldest case the bearing is just a round hole through which the axle passes (a "plain bearing"). Even with a plain bearing, the frictional work is greatly reduced because:
* The normal force at the sliding interface is the same as with simple dragging.
* The sliding distance is reduced for a given distance of travel.
* The coefficient of friction at the interface is usually lower.
Example:
* If a 100 kg object is dragged for 10 m along a surface with the coefficient of friction μ = 0.5, the normal force is 981 N and the work done (required energy) is (work=force x distance) 981 × 0.5 × 10 = 4905 joules.
* Now give the object 4 wheels. The normal force between the 4 wheels and axles is the same (in total) 981 N. Assume, for wood, μ = 0.25, and say the wheel diameter is 1000 mm and axle diameter is 50 mm. So while the object still moves 10 m the sliding frictional surfaces only slide over each other a distance of 0.5 m. The work done is 981 × 0.25 × 0.5 = 123 joules; the work done has reduced to 1/40 of that of dragging.
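The two bullet points above amount to a few lines of arithmetic; the sketch below simply replays the figures quoted in the example (g is taken as 9.81 m/s², which is what the 981 N normal force implies).

# Dragging versus wheels, using the numbers from the example above.
g = 9.81                    # m/s^2, implied by the 981 N normal force
mass = 100.0                # kg
distance = 10.0             # m travelled by the load
normal_force = mass * g     # ~981 N

# Case 1: dragging on the ground, coefficient of friction 0.5.
work_dragging = 0.5 * normal_force * distance          # ~4905 J

# Case 2: four wheels on plain wooden bearings, coefficient of friction 0.25.
# The sliding now happens at the axles, so the sliding distance shrinks by the
# ratio of axle diameter to wheel diameter (50 mm / 1000 mm).
sliding_distance = distance * (50.0 / 1000.0)          # 0.5 m
work_wheels = 0.25 * normal_force * sliding_distance   # ~123 J

print(f"dragging: {work_dragging:.0f} J")
print(f"wheels:   {work_wheels:.0f} J")
print(f"wheels need about 1/{work_dragging / work_wheels:.0f} of the dragging energy")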
Additional energy is lost at the wheel-to-road interface. This is termed rolling resistance, which is predominantly a deformation loss. It depends on the nature of the ground, the material of the wheel, its inflation in the case of a tire, the net torque exerted by any driving engine, and many other factors.
A wheel can also offer advantages in traversing irregular surfaces if the wheel radius is sufficiently large compared to the irregularities.
The wheel alone is not a machine, but when attached to an axle in conjunction with a bearing, it forms the wheel and axle, one of the simple machines. A driven wheel is an example of a wheel and axle. Wheels pre-date driven wheels by about 6000 years, themselves an evolution of using round logs as rollers to move a heavy load—a practice going back in pre-history so far that it has not been dated.
Alternatives
While wheels are very widely used for ground transport, there are alternatives, some of which are suitable for terrain where wheels are ineffective. Alternative methods for ground transport without wheels include:
* Maglev
* Sled, ski or travois
* Hovercraft and ekranoplans
* Walking pedestrian, Litter (vehicle) or a walking machine
* Horse riding
* Caterpillar tracks (operated by wheels)
* Pedrail wheels, using aspects of both wheel and caterpillar track
* Spheres, as used by Dyson vacuum cleaners and hamster balls
* Screw-propelled vehicle
Symbolism
The wheel has also become a strong cultural and spiritual metaphor for a cycle or regular repetition (see chakra, reincarnation, Yin and Yang among others). As such, and because of the difficult terrain, wheeled vehicles were forbidden in old Tibet. The wheel in ancient China is seen as a symbol of health and strength and is used by some villages as a tool to predict future health and success. The diameter of the wheel is an indicator of one's future health. The Kalachakra or wheel of time is also a subject in some forms of Buddhism, along with the dharmachakra.
The winged wheel is a symbol of progress, seen in many contexts including the coat of arms of Panama, the logo of the Ohio State Highway Patrol and the State Railway of Thailand. The wheel is also the prominent figure on the flag of India. The wheel in this case represents law (dharma). It also appears in the flag of the Romani people, hinting to their nomadic history and their Indian origins.
The introduction of spoked (chariot) wheels in the Middle Bronze Age appears to have carried a certain prestige. The sun cross appears to have a significance in Bronze Age religion, replacing the earlier concept of a solar barge with the more 'modern' and technologically advanced solar chariot. The wheel was also a solar symbol for the Ancient Egyptians.
In modern usage, the 'invention of the wheel' can be considered as a symbol of one of the first technologies of early civilization, alongside farming and metalwork, and thus be used as a benchmark to grade the level of societal progress.
Some Neopagans such as Wiccans have adopted the Wheel of the Year into their religious practices.
Additional Information
A wheel is a circular frame of hard material that may be solid, partly solid, or spoked and that is capable of turning on an axle.
The idea of wheeled transportation may have come from the use of logs for rollers, but the oldest known wheels were wooden disks consisting of three carved planks clamped together by transverse struts.
Spoked wheels appeared about 2000 bc, when they were in use on chariots in Asia Minor. Later developments included iron hubs (centerpieces) turning on greased axles, and the introduction of a tire in the form of an iron ring that was expanded by heat and dropped over the rim and that on cooling shrank and drew the members tightly together.
The use of a wheel (turntable) for pottery had also developed in Mesopotamia by 3500 bc.
The early waterwheels, used for lifting water from a lower to a higher level for irrigation, consisted of a number of pots tied to the rim of a wheel that was caused to rotate about a horizontal axis by running water or by a treadmill. The lower pots were submerged and filled in the running stream; when they reached their highest position, they poured their contents into a trough that carried the water to the fields.
The three power sources used in the Middle Ages—animal, water, and wind—were all exploited by means of wheels. One method of driving millstones for grinding grain was to fit a long horizontal arm to the vertical shaft connected to the stone and pull or push it with a horse or other beast of burden. Waterwheels and windmills were also used to drive millstones.
Because the wheel made controlled rotary motion possible, it was of decisive importance in machine design. Rotating machines for performing repetitive operations driven by steam engines were important elements in the Industrial Revolution. Rotary motion permits a continuity in magnitude and direction that is impossible with linear motion, which in a machine always involves reversals and changes in magnitude.
More Information
A key element of every vehicle, the wheel allows your car, truck or bicycle to effortlessly and quickly move from place to place. Lauded by many as man’s most important creation, the wheel was fabricated a very long time ago with the main purpose of allowing things to roll. From the early wheelbarrows created by the Greeks to the progressive lightweight aluminum wheels of today, this rounded invention has undergone huge changes and upgrades. Modern rims these days aren’t just engineered to overcome the toughest driving applications—thanks to a multitude of established designs, they effectively boost the overall style of your vehicle. In this article, we discuss the impact of the wheel on transportation, agriculture, machinery and other significant industries in modern society.
When was the wheel invented?
With no organic example found in nature, it took a while for the concept of the wheel to be conceived. Several innovations like the rope, woven cloths and boats were already established about a thousand years before the wheel was created. Archaeologists contend that wheels started being utilized some time around 3500 BC in Mesopotamia, where they were initially built for pottery. It was about 300 years later before humans began using them on chariots. The first wheeled vehicles to be established were bullock carts, war chariots, and four-wheeled carts. The wheel's role in transportation began when two wheels were combined to form the first crude cart in the world: sections of tree trunk joined by an axle and fastened to a platform of wood.
On its own, the wheel was not very useful, but when an axle was fitted at the centre of the wheel, the result was an early system of transportation. With this system invented, moving heavy loads across distances, a task previously done manually by people or by the animals they tamed, became easier. Around 2000 BC, the spoked wheel was invented to significantly decrease the overall weight. Further passage of time resulted in an even more progressive build that is both sturdier and lighter. The spokes and rims of today's wheels are typically manufactured using high-quality iron for efficiency in not just heavy-duty transportation but in sport as well.
Who invented the wheel?
The creation of the wheel as a form of locomotion cannot be credited to a single inventor. Although there's evidence of wheels dating back over five millennia, there's not enough archaeological proof as to who thought of utilizing a circular component to simplify difficult tasks. The early wheelbarrow that utilizes a simple cart and a single wheel to move goods and equipment is credited to the Greeks. A high-priced product at that time, the wheelbarrow significantly reduced manual labour and made daily tasks easier.
There is also archaeological proof supporting the existence of wheeled carts in China and Medieval Europe. The Bronocice Pot, a piece of pottery discovered in Poland that featured early drawings of cattle-drawn carts, suggested the early existence of wheels in Central Europe. The sedentary Cucuteni-Tripolye culture has been known to have produced four-wheeled toys in modern-day Ukraine and Romania. Ancient Mesoamericans have also been credited for producing small wheeled figurines without having any known contact with their old world neighbors. To summarize, it’s highly suggested that the creation of the wheel was made possible through many groups independently.
Which city was the site of the first Ferris wheel in 1893?
Apart from being an integral part of vehicle functionality, the wheel's blueprint was also utilized for other essential applications. This includes water wheels that produce hydropower for watermills and the gyroscope for navigation. In order to rival the awesome architecture of the Eiffel Tower, George Washington Gale Ferris Jr. engineered the very first Ferris wheel in 1893 at the World's Columbian Exposition in Chicago. This large revolving wheel, which was also referred to as the Chicago Wheel, would later become a trademark structure in the carnival scene. The original design measured 250 feet in diameter and was capable of carrying approximately 2,160 people per trip.
A simple yet vital mechanical part, the wheel has certainly come a long way from being just a pottery-making machine. With technological advancements, different wheel types have been manufactured to not only make transportation possible but to ensure ride quality is both comfortable and precise, regardless of whether they're integrated for everyday drives, heavy-duty truck applications or the most demanding races. For incredible prices on the most highly efficient wheel designs, look no further than Automotive Stuff. As a verified vendor, we carry the latest rims from globally renowned brands such as American Racing, Moto Metal, Mickey Thompson, Advanti Racing and Mamba. With exceptional deals coupled with quick shipment and superb customer service, Automotive Stuff is an exceptionally reliable brand in America.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Offline