Math Is Fun Forum

  Discussion about math, puzzles, games and fun.


#2451 2025-02-04 00:08:29

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,518

Re: Miscellany

2350) Dental Implant

Gist

An implant is a medical device manufactured to replace a missing biological structure, support a damaged biological structure, or enhance an existing biological structure. For example, an implant may be a rod, used to strengthen weak bones.

A dental implant is a metal post that replaces the root portion of a missing tooth. A dental professional places an artificial tooth, also known as a crown, on an extension of the post of the dental implant, giving you the look of a real tooth.

Summary:

Overview

Dental implant, abutment and new tooth in jawbone to replace missing tooth

A dental implant replaces a missing tooth root. Once the implant heals, your dentist can restore it with an artificial tooth.

What are dental implants?

Dental implants are small, threaded posts that surgically replace missing teeth. In addition to filling in gaps in your smile, dental implants improve chewing function and overall oral health. Once healed, implants work much like natural teeth.

A dental implant has three main parts:

* Threaded post: You can think of this like an artificial tooth root. A provider places it in your jawbone during an oral surgery procedure.
* Abutment: This is a tiny connector post. It screws into the threaded post and extends slightly beyond your gums. It serves as the foundation for your new artificial tooth.
* Restoration: A dental restoration is any prosthetic that repairs or replaces teeth. Common dental implant restorations are crowns, bridges and dentures.

Most dental implants are titanium, but some are ceramic. Both materials are safe and biocompatible (friendly to the tissues inside of your mouth).

Missing teeth can take a toll on your oral health. They can also affect your mental and emotional well-being. Do you avoid social situations? Or cover your mouth when you laugh? Do you rarely smile for photos? Dental implants can restore your smile and your confidence, so you don’t have to miss out on the things you enjoy.

What conditions are treated with dental implants?

Dental implants treat tooth loss, which can happen due to:

* Cavities.
* Cracked teeth.
* Gum disease.
* Teeth that never develop (anodontia).
* Teeth grinding or clenching (bruxism).

How common are dental implants?

Dental implants are a popular choice for tooth replacement. In the United States, dental providers place over 3 million implants each year.

Details

A dental implant (also known as an endosseous implant or fixture) is a prosthesis that interfaces with the bone of the jaw or skull to support a dental prosthesis such as a crown, bridge, denture, or facial prosthesis or to act as an orthodontic anchor. The basis for modern dental implants is a biological process called osseointegration, in which materials such as titanium or zirconia form an intimate bond to the bone. The implant fixture is first placed so that it is likely to osseointegrate, then a dental prosthetic is added. A variable amount of healing time is required for osseointegration before either the dental prosthetic (a tooth, bridge, or denture) is attached to the implant or an abutment is placed which will hold a dental prosthetic or crown.

Success or failure of implants depends primarily on the thickness and health of the bone and gingival tissues that surround the implant, but also on the health of the person receiving the treatment and drugs which affect the chances of osseointegration. The amount of stress that will be put on the implant and fixture during normal function is also evaluated. Planning the position and number of implants is key to the long-term health of the prosthetic since biomechanical forces created during chewing can be significant. The position of implants is determined by the position and angle of adjacent teeth, by lab simulations or by using computed tomography with CAD/CAM simulations and surgical guides called stents. The prerequisites for long-term success of osseointegrated dental implants are healthy bone and gingiva. Since both can atrophy after tooth extraction, pre-prosthetic procedures such as sinus lifts or gingival grafts are sometimes required to recreate ideal bone and gingiva.

The final prosthetic can be either fixed, where a person cannot remove the denture or teeth from their mouth, or removable, where they can remove the prosthetic. In each case an abutment is attached to the implant fixture. Where the prosthetic is fixed, the crown, bridge or denture is fixed to the abutment either with lag screws or with dental cement. Where the prosthetic is removable, a corresponding adapter is placed in the prosthetic so that the two pieces can be secured together.

The risks and complications related to implant therapy divide into those that occur during surgery (such as excessive bleeding, nerve injury or inadequate primary stability), those that occur in the first six months (such as infection and failure to osseointegrate) and those that occur long-term (such as peri-implantitis and mechanical failures). In the presence of healthy tissues, a well-integrated implant with appropriate biomechanical loads can have survival rates of 93 to 98 percent over five or more years, and the prosthetic teeth can last 10 to 15 years. Long-term studies show 16- to 20-year success rates (implants surviving without complications or revisions) of between 52% and 76%, with complications occurring up to 48% of the time. Artificial intelligence is now used as the basis for clinical decision support systems, and intelligent systems serve as an aid in estimating the success rate of implants.

Medical uses

The primary use of dental implants is to support dental prosthetics (i.e. false teeth). Modern dental implants work through a biologic process where bone fuses tightly to the surface of specific materials such as titanium and some ceramics. The integration of implant and bone can support physical loads for decades without failure.

The US has seen an increasing use of dental implants, with usage rising from 0.7% of patients missing at least one tooth (1999–2000) to 5.7% (2015–2016), and it was projected to potentially reach 26% by 2026. Implants are used to replace missing individual teeth (single-tooth restorations), multiple teeth, or to restore edentulous (toothless) dental arches (implant-retained fixed bridge, implant-supported overdenture). While use of dental implants in the US has increased, other treatments for tooth loss exist.

Dental implants are also used in orthodontics to provide anchorage (orthodontic mini implants). Orthodontic treatment might be required prior to placing a dental implant.

An evolving field is the use of implants to retain obturators (removable prostheses used to fill a communication between the oral and maxillary or nasal cavities). Facial prosthetics, used to correct facial deformities (e.g. from cancer treatment or injuries), can use connections to implants placed in the facial bones. Depending on the situation the implant may be used to retain either a fixed or removable prosthetic that replaces part of the face.

Single tooth implant restoration

Single tooth restorations are individual freestanding units not connected to other teeth or implants, used to replace missing individual teeth. For individual tooth replacement, an implant abutment is first secured to the implant with an abutment screw. A crown (the dental prosthesis) is then connected to the abutment with dental cement, a small screw, or fused with the abutment as one piece during fabrication. In the same way, dental implants can also be used to retain a multiple-tooth dental prosthesis, either in the form of a fixed bridge or removable dentures.

There is limited evidence that implant-supported single crowns perform better than tooth-supported fixed partial dentures (FPDs) on a long-term basis. However, taking into account the favorable cost-benefit ratio and the high implant survival rate, dental implant therapy is the first-line strategy for single-tooth replacement. Implants preserve the integrity of the teeth adjacent to the edentulous area, and it has been shown that dental implant therapy is less costly and more efficient over time than tooth-supported FPDs for the replacement of one missing tooth. The major disadvantage of dental implant surgery is the need for a surgical procedure.

Implant retained fixed bridge or implant supported bridge

An implant supported bridge (or fixed denture) is a group of teeth secured to dental implants so the prosthetic cannot be removed by the user. They are similar to conventional bridges, except that the prosthesis is supported and retained by one or more implants instead of natural teeth. Bridges typically connect to more than one implant and may also connect to teeth as anchor points. Typically the teeth outnumber the anchor points, with the teeth that are directly over the implants referred to as abutments and those between abutments referred to as pontics. Implant supported bridges attach to implant abutments in the same way as a single tooth implant replacement. A fixed bridge may replace as few as two teeth (also known as a fixed partial denture) and may extend to replace an entire arch of teeth (also known as a fixed full denture). In both cases, the prosthesis is said to be fixed because it cannot be removed by the denture wearer.

Implant-supported overdenture

A removable implant-supported denture (also an implant-supported overdenture) is a removable prosthesis which replaces teeth, using implants to improve support, retention and stability. They are most commonly complete dentures (as opposed to partial), used to restore edentulous dental arches. The dental prosthesis can be disconnected from the implant abutments with finger pressure by the wearer. To enable this, the abutment is shaped as a small connector (a button, ball, bar or magnet) which can be connected to analogous adapters in the underside of the dental prosthesis.

Orthodontic mini-implants (TAD)

Dental implants are used in orthodontic patients to replace missing teeth (as above) or as a temporary anchorage device (TAD) to facilitate orthodontic movement by providing an additional anchorage point. For teeth to move, a force must be applied to them in the direction of the desired movement. The force stimulates cells in the periodontal ligament to cause bone remodeling, removing bone in the direction of travel of the tooth and adding it to the space created. In order to generate a force on a tooth, an anchor point (something that will not move) is needed. Since implants do not have a periodontal ligament, and bone remodeling will not be stimulated when tension is applied, they are ideal anchor points in orthodontics. Typically, implants designed for orthodontic movement are small and do not fully osseointegrate, allowing easy removal following treatment. They are indicated when needing to shorten treatment time, or as an alternative to extra-oral anchorage. Mini-implants are frequently placed between the roots of teeth, but may also be sited in the roof of the mouth. They are then connected to a fixed brace to help move the teeth.

Small-diameter implants (mini-implants)

The introduction of small-diameter implants has provided dentists the means of providing edentulous and partially edentulous patients with immediate functioning transitional prostheses while definitive restorations are being fabricated. Many clinical studies have been done on the success of long-term usage of these implants. Based on the findings of many studies, mini dental implants exhibit excellent survival rates in the short to medium term (3–5 years). They appear to be a reasonable alternative treatment modality to retain mandibular complete overdentures from the available evidence.

Composition

A typical conventional implant consists of a titanium screw (resembling a tooth root) with a roughened or smooth surface. The majority of dental implants are made of commercially pure titanium, which is available in four grades depending upon the amount of carbon, nitrogen, oxygen and iron contained. Cold work hardened CP4 (maximum impurity limits of N 0.05 percent, C 0.10 percent, H 0.015 percent, Fe 0.50 percent, and O 0.40 percent) is the most commonly used titanium for implants. Grade 5 titanium, Titanium 6AL-4V (signifying the titanium alloy containing 6 percent aluminium and 4 percent vanadium), is slightly harder than CP4 and is used in the industry mostly for abutment screws and abutments. Most modern dental implants also have a textured surface (through etching, anodic oxidation or various-media blasting) to increase the surface area and osseointegration potential of the implant. If C.P. titanium or a titanium alloy has more than 85% titanium content, it will form a biocompatible titanium oxide surface layer or veneer that encloses the other metals, preventing them from contacting the bone.

Ceramic (zirconia-based) implants exist in one-piece (combining the screw and the abutment) or two-piece systems (the abutment being either cemented or screwed) and might lower the risk of peri-implant diseases, but long-term data on success rates are missing.

Additional Information

Dental implant surgery replaces tooth roots with metal, screwlike posts and replaces damaged or missing teeth with artificial teeth that look and work much like real ones. Dental implant surgery can be a helpful choice when dentures or bridgework fit poorly. This surgery also can be an option when there aren't enough natural teeth roots to support dentures or build bridgework tooth replacements.

The type of implant and the condition of the jawbone guide how dental implant surgery is done. This surgery may involve several procedures. The major benefit of implants is solid support for the new teeth — a process that requires the bone to heal tightly around the implant. Because this bone healing requires time, the process can take many months.

Why it's done

Dental implants are surgically placed in your jawbone and serve as the roots of missing teeth. Because the titanium in the implants fuses with your jawbone, the implants won't slip, make noise or cause bone damage like fixed bridgework or dentures might. And the materials can't decay like your own teeth.

Dental implants may be right for you if you:

* Have one or more missing teeth.
* Have a jawbone that's reached full growth.
* Have enough bone to secure the implants or can have a bone graft.
* Have healthy tissues in your mouth.
* Don't have health conditions that can affect bone healing.
* Aren't able or willing to wear dentures.
* Want to improve your speech.
* Are willing to commit several months to the process.
* Don't smoke tobacco.

dental-implant-system.jpg


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2452 2025-02-06 21:54:36

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,518

Re: Miscellany

2351) Astrophysics/Astrophysicist

Gist

Astrophysics is a branch of space science that applies the laws of physics and chemistry to seek to understand the universe and our place in it. The field explores topics such as the birth, life and death of stars, planets, galaxies, nebulae and other objects in the universe.

Astronomers or astrophysicists study the universe to help us understand the physical matter and processes in our own solar system and other galaxies. It involves studying large objects, such as planets, as well as tiny particles.

Summary

An astrophysicist is a scientist who studies the physical properties, processes and physics of things beyond the Earth. This includes the moon, the sun, the planets in our solar system and the galaxies which aren’t obvious to the human eye.

Working as an astrophysicist, you’ll use expert knowledge of physics, astronomy and mathematics and apply it to explore the wonders of space, such as black holes, superclusters and dark matter. The main purpose of the role is to figure out the origins of the universe, how it all works and what our place is within it, and to search for life on planets around other stars; some astrophysicists even look to predict how the universe will end.

It’s a highly skilled role where astrophysicists are educated across many disciplines.

Two types of astrophysicist complement each other’s roles: the theoretical astrophysicist and the observational astrophysicist. Theoretical astrophysicists seek to explain observational results, and observational astrophysicists help to confirm theories.

* Theoretical astrophysicists: These are the theoretical side of astrophysics, hence the name. They develop analytical or computer models to describe astronomical objects, and then use the models to pose theories about them. As what they’re analysing is too far away to reach, they use the tools of maths and physics to test their theories.
* Observational astrophysicists: These are the more practical side of astrophysics. Their focus is on acquiring data by observing celestial objects and then analysing it using the principles of physics. The work is similar to that of an astronomer, whose role is to observe objects in outer space.

Responsibilities

As an astrophysicist, the responsibilities can vary, but generally, they include:

* Collaborating with other astrophysicists and working on research projects.
* Observing and analysing celestial bodies.
* Creating theories based on observations and the laws of physics.
* Testing your theories to build a better understanding of the universe.
* Writing up research and essays on discoveries.
* Attending various lectures and conferences about research discoveries.
* Using ground-based equipment and telescopes to explore space.
* Analysing research data and determining its purpose and significance.
* Performing presentations of your discoveries and research.
* Reviewing research from other scientists.
* Helping raise funds for scientific research and writing grant proposals.
* Teaching and training PhD candidates.
* Measuring emissions, including infrared, gamma-ray and X-ray emissions, from extraterrestrial sources.
* Assisting with calculating orbits and figuring out shapes, brightness, sizes and more.

Qualifications

An astrophysicist role is a postgraduate career. Most employers require at least a master’s degree in astrophysics, while many also require a doctoral (PhD) degree.

There are many relevant degrees you can study to aid your career as an astrophysicist. Before applying for your master’s, you’ll need to acquire a bachelor’s degree. Any bachelor’s degree in a scientific field is useful, such as astronomy, physics, maths or a similar subject.

The next step is applying for your master’s degree to continue your scientific studies. The master’s degree will need to be in a specific subject, such as astrophysics or astronomy, and will take around a year to complete. The last step in qualifying is gaining your PhD in astrophysics, which can take around three to four years to complete. As part of your PhD, you will need to produce a dissertation that showcases the research and findings you’ve made during your studies.

Training and development

For astrophysicists, training is usually provided during study and on the job. Entry-level astrophysicists can expect to work closely with their supervisors and professional astrophysicists to gain industry experience and see what the day-to-day role entails.

It’s also essential for astrophysicists to take responsibility for keeping up to date with the latest findings in the field and how to apply them to new projects. You can do this by reading relevant journals and scientific papers, watching documentaries and following industry news.

Skills

As an astrophysicist, skills can vary from theory-based to practical skills. These are the skills required to become an astrophysicist:

* Strong analytical skills when conducting research projects, acquiring data and writing up reports of findings.
* Good research skills, to test your theories and report back to the other professionals in your team.
* Excellent mathematics skills to help test theories and report on data.
* Good problem-solving skills, both for your research and for identifying problems in the first place.
* The ability to create a hypothesis and take the steps to either prove or disprove a theory.
* Confident with using computers and various programmes.
* Strong written skills and verbal communication skills.
* Great knowledge of using astrophysics equipment and tools.
* In-depth understanding of the process of raising funds for scientific research.

Career prospects

As a qualified astrophysicist, you will initially work either within a university or research institution. Through years of experience, your position may become permanent and you will move up the ranks into a more senior role, in that university, research institution or observatory.

You could also go into teaching, either publicly or privately, in schools, colleges or universities as a lecturer. There’s the option to move into journalism to share your astrophysics expertise. Alternatively, you could head down the scientific research job route in a private company, or travel around to present your research and theories worldwide.

Based on your qualifications, there’s also the option to go into other fields including private/public research and development, healthcare technology and energy production, plus many more.

Details

Astrophysics is a science that employs the methods and principles of physics and chemistry in the study of astronomical objects and phenomena. As one of the founders of the discipline, James Keeler, said, astrophysics "seeks to ascertain the nature of the heavenly bodies, rather than their positions or motions in space—what they are, rather than where they are", which is studied in celestial mechanics.

Among the subjects studied are the Sun (solar physics), other stars, galaxies, extrasolar planets, the interstellar medium, and the cosmic microwave background. Emissions from these objects are examined across all parts of the electromagnetic spectrum, and the properties examined include luminosity, density, temperature, and chemical composition. Because astrophysics is a very broad subject, astrophysicists apply concepts and methods from many disciplines of physics, including classical mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics.

In practice, modern astronomical research often involves substantial work in the realms of theoretical and observational physics. Some areas of study for astrophysicists include the properties of dark matter, dark energy, black holes, and other celestial bodies; and the origin and ultimate fate of the universe. Topics also studied by theoretical astrophysicists include Solar System formation and evolution; stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity, special relativity, and quantum and physical cosmology (the physical study of the largest-scale structures of the universe), including string cosmology and astroparticle physics.

History

Astronomy is an ancient science, long separated from the study of terrestrial physics. In the Aristotelian worldview, bodies in the sky appeared to be unchanging spheres whose only motion was uniform motion in a circle, while the earthly world was the realm which underwent growth and decay and in which natural motion was in a straight line and ended when the moving object reached its goal. Consequently, it was held that the celestial region was made of a fundamentally different kind of matter from that found in the terrestrial sphere; either Fire as maintained by Plato, or Aether as maintained by Aristotle. During the 17th century, natural philosophers such as Galileo, Descartes, and Newton began to maintain that the celestial and terrestrial regions were made of similar kinds of material and were subject to the same natural laws. Their challenge was that the tools had not yet been invented with which to prove these assertions.

For much of the nineteenth century, astronomical research was focused on the routine work of measuring the positions and computing the motions of astronomical objects. A new astronomy, soon to be called astrophysics, began to emerge when William Hyde Wollaston and Joseph von Fraunhofer independently discovered that, when decomposing the light from the Sun, a multitude of dark lines (regions where there was less or no light) were observed in the spectrum. By 1860 the physicist, Gustav Kirchhoff, and the chemist, Robert Bunsen, had demonstrated that the dark lines in the solar spectrum corresponded to bright lines in the spectra of known gases, specific lines corresponding to unique chemical elements. Kirchhoff deduced that the dark lines in the solar spectrum are caused by absorption by chemical elements in the Solar atmosphere. In this way it was proved that the chemical elements found in the Sun and stars were also found on Earth.

Among those who extended the study of solar and stellar spectra was Norman Lockyer, who in 1868 detected radiant, as well as dark, lines in solar spectra. Working with chemist Edward Frankland to investigate the spectra of elements at various temperatures and pressures, he could not associate a yellow line in the solar spectrum with any known elements. He thus claimed the line represented a new element, which was called helium, after the Greek Helios, the Sun personified.

In 1885, Edward C. Pickering undertook an ambitious program of stellar spectral classification at Harvard College Observatory, in which a team of women computers, notably Williamina Fleming, Antonia Maury, and Annie Jump Cannon, classified the spectra recorded on photographic plates. By 1890, a catalog of over 10,000 stars had been prepared that grouped them into thirteen spectral types. Following Pickering's vision, by 1924 Cannon expanded the catalog to nine volumes and over a quarter of a million stars, developing the Harvard Classification Scheme which was accepted for worldwide use in 1922.

In 1895, George Ellery Hale and James E. Keeler, along with a group of ten associate editors from Europe and the United States, established The Astrophysical Journal: An International Review of Spectroscopy and Astronomical Physics. It was intended that the journal would fill the gap between journals in astronomy and physics, providing a venue for publication of articles on astronomical applications of the spectroscope; on laboratory research closely allied to astronomical physics, including wavelength determinations of metallic and gaseous spectra and experiments on radiation and absorption; on theories of the Sun, Moon, planets, comets, meteors, and nebulae; and on instrumentation for telescopes and laboratories.

Around 1920, following the discovery of the Hertzsprung–Russell diagram, still used as the basis for classifying stars and their evolution, Arthur Eddington anticipated the discovery and mechanism of nuclear fusion processes in stars, in his paper The Internal Constitution of the Stars. At that time, the source of stellar energy was a complete mystery; Eddington correctly speculated that the source was fusion of hydrogen into helium, liberating enormous energy according to Einstein's equation E = mc². This was a particularly remarkable development since at that time fusion and thermonuclear energy, and even that stars are largely composed of hydrogen (see metallicity), had not yet been discovered.
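As a rough worked illustration of the mass-to-energy bookkeeping behind Eddington's idea (the figures below are standard approximate values, not taken from the text above): fusing four hydrogen nuclei into one helium nucleus leaves a small mass deficit, and that deficit is what the star radiates.

\Delta m = 4\,m_{\mathrm{H}} - m_{\mathrm{He}} \approx 4(1.0078\ \mathrm{u}) - 4.0026\ \mathrm{u} \approx 0.029\ \mathrm{u} \quad (\text{about } 0.7\%\ \text{of the input mass})

E = \Delta m\,c^{2} \approx 0.029\ \mathrm{u} \times 931.5\ \mathrm{MeV/u} \approx 27\ \mathrm{MeV}\ \text{per helium nucleus formed}

So only about 0.7% of the hydrogen's rest mass is converted to energy, yet that small fraction is enough to power a star for billions of years.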

In 1925 Cecilia Helena Payne (later Cecilia Payne-Gaposchkin) wrote an influential doctoral dissertation at Radcliffe College, in which she applied Saha's ionization theory to stellar atmospheres to relate the spectral classes to the temperature of stars. Most significantly, she discovered that hydrogen and helium were the principal components of stars, and that stars do not share the composition of Earth. Despite Eddington's suggestion, the discovery was so unexpected that her dissertation readers (including Russell) convinced her to modify the conclusion before publication. However, later research confirmed her discovery.

By the end of the 20th century, studies of astronomical spectra had expanded to cover wavelengths extending from radio waves through optical, x-ray, and gamma wavelengths. In the 21st century, it further expanded to include observations based on gravitational waves.

Observational astrophysics

Observational astronomy is a division of the astronomical science that is concerned with recording and interpreting data, in contrast with theoretical astrophysics, which is mainly concerned with finding out the measurable implications of physical models. It is the practice of observing celestial objects by using telescopes and other astronomical apparatus.

Most astrophysical observations are made using the electromagnetic spectrum.

* Radio astronomy studies radiation with a wavelength greater than a few millimeters. Example areas of study are radio waves, usually emitted by cold objects such as interstellar gas and dust clouds; the cosmic microwave background radiation which is the redshifted light from the Big Bang; pulsars, which were first detected at microwave frequencies. The study of these waves requires very large radio telescopes.
* Infrared astronomy studies radiation with a wavelength that is too long to be visible to the naked eye but is shorter than radio waves. Infrared observations are usually made with telescopes similar to the familiar optical telescopes. Objects colder than stars (such as planets) are normally studied at infrared frequencies.
* Optical astronomy was the earliest kind of astronomy. Telescopes paired with a charge-coupled device or spectroscopes are the most common instruments used. The Earth's atmosphere interferes somewhat with optical observations, so adaptive optics and space telescopes are used to obtain the highest possible image quality. In this wavelength range, stars are highly visible, and many chemical spectra can be observed to study the chemical composition of stars, galaxies, and nebulae.
* Ultraviolet, X-ray and gamma ray astronomy study very energetic processes such as binary pulsars, black holes, magnetars, and many others. These kinds of radiation do not penetrate the Earth's atmosphere well. There are two methods in use to observe this part of the electromagnetic spectrum—space-based telescopes and ground-based imaging air Cherenkov telescopes (IACT). Examples of observatories of the first type are RXTE, the Chandra X-ray Observatory and the Compton Gamma Ray Observatory. Examples of IACTs are the High Energy Stereoscopic System (H.E.S.S.) and the MAGIC telescope.

Other than electromagnetic radiation, few things may be observed from the Earth that originate from great distances. A few gravitational wave observatories have been constructed, but gravitational waves are extremely difficult to detect. Neutrino observatories have also been built, primarily to study the Sun. Cosmic rays consisting of very high-energy particles can be observed hitting the Earth's atmosphere.

Observations can also vary in their time scale. Most optical observations take minutes to hours, so phenomena that change faster than this cannot readily be observed. However, historical data on some objects is available, spanning centuries or millennia. On the other hand, radio observations may look at events on a millisecond timescale (millisecond pulsars) or combine years of data (pulsar deceleration studies). The information obtained from these different timescales is very different.

The study of the Sun has a special place in observational astrophysics. Due to the tremendous distance of all other stars, the Sun can be observed in a kind of detail unparalleled by any other star. Understanding the Sun serves as a guide to understanding of other stars.

The topic of how stars change, or stellar evolution, is often modeled by placing the varieties of star types in their respective positions on the Hertzsprung–Russell diagram, which can be viewed as representing the state of a stellar object, from birth to destruction.

Theoretical astrophysics

Theoretical astrophysicists use a wide variety of tools which include analytical models (for example, polytropes to approximate the behaviors of a star) and computational numerical simulations. Each has some advantages. Analytical models of a process are generally better for giving insight into the heart of what is going on. Numerical models can reveal the existence of phenomena and effects that would otherwise not be seen.

Theorists in astrophysics endeavor to create theoretical models and figure out the observational consequences of those models. This helps allow observers to look for data that can refute a model or help in choosing between several alternate or conflicting models.

Theorists also try to generate or modify models to take into account new data. In the case of an inconsistency, the general tendency is to try to make minimal modifications to the model to fit the data. In some cases, a large amount of inconsistent data over time may lead to total abandonment of a model.

Topics studied by theoretical astrophysicists include stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity and physical cosmology, including string cosmology and astroparticle physics. Relativistic astrophysics serves as a tool to gauge the properties of large-scale structures for which gravitation plays a significant role in physical phenomena investigated and as the basis for black hole (astro)physics and the study of gravitational waves.

Some widely accepted and studied theories and models in astrophysics, now included in the Lambda-CDM model, are the Big Bang, cosmic inflation, dark matter, dark energy and fundamental theories of physics.

Popularization

The roots of astrophysics can be found in the seventeenth century emergence of a unified physics, in which the same laws applied to the celestial and terrestrial realms. There were scientists who were qualified in both physics and astronomy who laid the firm foundation for the current science of astrophysics. In modern times, students continue to be drawn to astrophysics due to its popularization by the Royal Astronomical Society and notable educators such as prominent professors Lawrence Krauss, Subrahmanyan Chandrasekhar, Stephen Hawking, Hubert Reeves, Carl Sagan and Patrick Moore. The efforts of early, late, and present scientists continue to attract young people to study the history and science of astrophysics. The television sitcom The Big Bang Theory popularized the field of astrophysics with the general public, and featured well-known scientists like Stephen Hawking and Neil deGrasse Tyson.

Additional Information

The branch of astronomy called astrophysics is a new approach to an ancient field. For centuries astronomers studied the movements and interactions of the sun, the moon, planets, stars, comets, and meteors. Advances in technology have made it possible for scientists to study their properties and structure. Astrophysicists collect particles from meteorites and use telescopes on land, in balloons, and in satellites to gather data. They apply chemical and physical laws to explore what celestial objects consist of and how they formed and evolved.

Spectroscopy and photography, adopted for astronomical research in the 19th century, let investigators measure the quantity and quality of light emitted by stars and nebulas (clouds of interstellar gas and dust). That allowed them to study the brightness, temperature, and chemical composition of such objects in space. Investigators soon recognized that the properties of all celestial bodies, including the planets of the solar system, could only be understood in terms of what goes on inside and around them. The trend toward using physics and chemistry to interpret celestial observations gained momentum in the early 1920s, and many astronomers began referring to themselves as astrophysicists. Since the 1960s the field has developed more rapidly.

The major areas of current interest—X-ray astronomy, gamma-ray astronomy, infrared astronomy, and radio astronomy—depend heavily on engineering for the construction of telescopes, space probes, and related equipment. The scope of both observation and theory has expanded greatly because of such technological advances as electronic radar and radio units, high-speed computers, electronic radiation detectors, Earth-orbiting observatories, and long-range planetary probes.

Fiona.jpg


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2453 2025-02-07 16:30:17

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,518

Re: Miscellany

2352) Catering/Catering Technology

Gist

Catering technology can be defined as 'the application of science to the art of catering'. For this purpose catering is taken to mean the feeding of people in large groups and includes restaurants, hotels, work canteens, school meals and hospitals, as well as take-away outlets such as fish and chip shops.

Summary

Before you consider starting a catering business, it is crucial to understand the types of catering and what is catering. Knowing the basic concepts of catering will help you run a successful catering business, be a better caterer and draft your catering business plan.

So, what is catering? First, let’s review all you need to know about catering business basics and the types of catering.

What Is Catering?

Catering is the process or business of preparing food and providing food services for clients at remote locations, such as hotels, restaurants, offices, concerts, and events. Companies that offer food, drinks, and other services to various customers, typically for special occasions, make up the catering sector.

Some restaurant businesses may contract their cooking to catering businesses or even offer catering services to customers. For instance, customers may love a particular dish so much that they want the same food to be served at their event.

Catering is more than just preparing food and cleaning up after the party. Sometimes, catering branches into event planning and management. For example, if you offer corporate catering services, you will be required to work with large crowds and handle the needs of corporate clients.

A catering business may use its chefs to create food or buy food from a vendor or third party to deliver to the client. In addition, you may be asked to plan the food menu for corporate events such as picnics, holiday celebrations, and other functions. So, what is a caterer? Let’s find out.

What Is a Caterer?

A caterer is a person or business that prepares, cooks, and serves food and beverages to clients at remote locations and events. The caterer may be asked to prepare seasonal menu options and provide the equipment such as dishes, spoons, place settings, and wine glasses needed to serve guests at an event.

Starting a catering business is the ideal venture for you if you enjoy interacting with guests and producing a wide range of dishes that are delicious to eat as well as beautiful to look at. A caterer is inventive in novel recipes, culinary presentations, and menus.

In addition, caterers excel at multitasking. For instance, if professional wait staff will be serving each course of dinner to guests, the caterer must be ready to prepare all the dishes for the event at once.

To ensure attendees enjoy their time at events, caterers always offer a delicious, relaxing dinner. Additionally, caterers may deal with particular demands and design menus for unique events directly with clients.

Usually, a catering service sends waiters, waitresses, and busboys to set tables and serve meals during sit-down dining occasions. The caterer may send staff to prepare chafing dishes, bowls, and platters filled with food for buffets and casual gatherings, replace them, and serve food to guests.

4 Types of Catering

It is essential to choose a catering specialty when starting your catering business. With many catering types to choose from, it’s only logical to research your options and pick a niche that will suit your target market and improve your unique selling proposition.

Let’s look at the types of catering:

What Is Event Catering?

Event catering is planning a menu, preparing, delivering, and serving food at social events and parties. Catering is an integral part of any event.

As you know, events revolve around the food and drink menu. Party guests may even say that the success of any event depends on the catering services.

Birthday celebrations, retirement parties, grand openings, housewarming parties, weddings, and baby showers are a few exceptional events that fall under this category. In addition, catering packages for event catering sometimes include things like appetizers, decorations, bartenders, and servers.

Types of Event Catering

* Stationary Platters
* Hors D’oeuvres
* Small Plates and Stations
* Three-Course Plated Dinner
* Buffet
* Outdoor BBQ

What Is Full-Service Catering?

Full-service catering manages every facet of an event, including meal preparation, decorations, and clean-up following the event. Unlike regular event catering, where the caterer just prepares and serves food and drinks, a full-service caterer handles every event detail based on clients' specifications.

Some logistics, such as dinnerware, linens, serving utensils, and dedicated staff to help on-site, are handled by full-service catering. The head caterer oversees every aspect of the event according to what will appeal to each guest.

What Does a Full-service Catering Business Offer?

* Venue setup
* Menu planning
* Dining setup
* Food preparation
* After-party cleanup.

Details

Catering is the business of providing food services at a remote site or a site such as a hotel, hospital, pub, aircraft, cruise ship, park, festival, filming location or film studio.

History of catering

The earliest account of major services being catered in the United States was an event held for William Howe of Philadelphia in 1778. The event served local foods that were a hit with the attendees, who eventually popularized catering as a career. The industry began to be officially recognized around the 1820s, with the catering business forming in Philadelphia and its caterers being disproportionately African-American.

Robert Bogle

The industry began to professionalize under the leadership of Robert Bogle, who is recognized as "the originator of catering." Catering was originally done by servants of wealthy elites. Butlers and house slaves, who were often black, were in a good position to become caterers. Essentially, caterers in the 1860s were "public butlers", as they organized and executed the food aspect of a social gathering. A public butler was a butler working for several households. Bogle took on the role of public butler and took advantage of the food service market in the hospitality field.

Caterers like Bogle were involved with events likely to be catered today, such as weddings and funerals. Bogle is also credited with creating the Guild of Caterers and helping train other black caterers. This is important because catering provided not only jobs to black people but also opportunities to connect with elite members of Philadelphia society. Over time, the clientele of caterers shifted to the middle class, who could not afford lavish gatherings, and increasing competition from white caterers led to a decline in black catering businesses.

Evolution of catering

By the 1840s many restaurant owners began to combine catering services with their shops. Second-generation caterers grew the industry on the East Coast, and it became more widespread. Common usage of the word "caterer" came about in the 1880s, at which point local directories began to use the term to describe the industry. White businessmen took over the industry by the 1900s, with the black catering population disappearing.

In the 1930s, the Soviet Union, creating simpler menus, began developing state public catering establishments as part of its collectivization policies. A rationing system was implemented during World War II, and people became used to public catering. After the Second World War, many businessmen embraced catering as an alternative way of staying in business. By the 1960s, home-made food had been overtaken by eating in public catering establishments.

By the 2000s, personal chef services started gaining popularity as more women entered the workforce. People between 15 and 24 years of age spent as little as 11–17 minutes daily on food preparation and clean-up activities in 2006–2016, according to figures from the American Time Use Survey conducted by the US Bureau of Labor Statistics. There are many types of catering, including event catering, wedding catering and corporate catering.

Event catering

An event caterer serves food at indoor and outdoor events, including corporate and workplace events and parties at home and venues.

Mobile catering

A mobile caterer serves food directly from a vehicle, cart or truck which is designed for the purpose. Mobile catering is common at outdoor events such as concerts, workplaces, and downtown business districts. Mobile catering services have lower maintenance costs than other catering services. Mobile caterers may also be known as food trucks in some areas. Mobile catering is popular throughout New York City, though it can sometimes be unprofitable. Ice cream vans are a familiar example of a catering truck in Canada, the United States and the United Kingdom.

Seat-back catering

Seat-back catering was a service offered by some charter airlines in the United Kingdom (e.g., Court Line, which introduced the idea in the early 1970s, and Dan-Air) that involved embedding two meals in a single seat-back tray. One helping was intended for each leg of a charter flight, but Alan Murray, of Viking Aviation, had earlier revealed that "with the ingenious use of a nail file or coin, one could open the inbound meal and have seconds". The intention of participating airlines was to "save money, reduce congestion in the cabin and give punters the chance to decide when to eat their meal". By requiring less galley space on board, the planes could offer more passenger seats.

According to TravelUpdate's columnist, "The Flight Detective", "Salads and sandwiches were the usual staples," and "a small pellet of dry ice was put into the compartment for the return meal to try to keep it fresh." However, in addition to the fact that passengers on one leg were able to consume the food intended for other passengers on the following leg, there was a "food hygiene" problem, and the concept was discontinued by 1975.

Canapé catering

A canapé caterer serves canapés at events. They have become a popular type of food at events, Christmas parties and weddings. A canapé is a type of hors d'oeuvre: a small, prepared, and often decorative food, consisting of a small piece of bread or pastry. Canapés should be easy to pick up and no bigger than one or two bites. The bite-sized food is usually served before the starter or main course, or on its own with drinks at a drinks party.

Wedding catering

A wedding caterer provides food for a wedding reception and party, traditionally called a wedding breakfast. A wedding caterer can be hired independently or can be part of a package designed by the venue. Catering service providers are often skilled and experienced in preparing and serving high-quality cuisine. They offer a diverse and rich selection of food, creating a great experience for their customers. There are many different types of wedding caterers, each with their own approach to food.

Shipboard catering

Merchant ships – especially ferries, cruise liners, and large cargo ships – often carry Catering Officers. In fact, the term "catering" was in use in the world of the merchant marine long before it became established as a land-bound business.

Catering-hire-Johannesburg-2.jpg


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2454 2025-02-08 00:03:31

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,518

Re: Miscellany

2353) Bun

Gist

A bun is a type of bread roll, typically filled with savory fillings (for example hamburger). A bun may also refer to a sweet cake in certain parts of the world. Though they come in many shapes and sizes, buns are most commonly round, and are generally hand-sized or smaller.

Summary:

Ingredient List

For the food processor buns:

* 3 cups bread or all-purpose flour (may be part whole wheat flour)
* 2 tablespoons granulated sugar
* 1 teaspoon salt
* 1 (¼ ounce) package instant yeast
* 3 tablespoons unsalted butter or margarine, cubed
* 1 cup lukewarm water (90°F)

Instructions

For the food processor buns:

In bowl of food processor fitted with dough blade, add flour, sugar, salt, yeast and butter. Place lid on processor and pulse 10 seconds.

Begin processing, pouring 1 cup warm water through tube. When dough forms a ball, stop adding water. All may not be needed. Process dough an additional 60 seconds to knead.

Remove dough and smooth into a ball; cover with bowl and let rest 15 minutes.

Divide dough into 8 buns and flatten into 3 ½” disks. Place buns two inches apart on greased or parchment-lined baking sheet. Cover; let rise in a warm place until doubled. Near the end of the rise, preheat oven to 400°F.

Bake 12–15 minutes, until golden and the internal temperature registers 190°F–195°F. Remove buns to a rack and cool before slicing.

Nutrition Information Per Serving (1 Bun, 91g): 240 calories, 45 calories from fat, 5g total fat, 3g saturated fat, 0g trans fat, 10mg cholesterol, 300mg sodium, 41g total carbohydrate, 1g dietary fiber, 3g sugars, 7g protein, 97mcg folate, 2mg vitamin C, 2mg iron.
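Since the recipe quotes all temperatures in Fahrenheit, here is a minimal Python sketch for readers who work in Celsius; the helper function is purely illustrative and not part of the recipe itself.

def f_to_c(fahrenheit):
    """Convert a Fahrenheit temperature to Celsius."""
    return (fahrenheit - 32) * 5 / 9

# Water, oven and internal-temperature targets from the recipe above.
for f in (90, 400, 190, 195):
    print(f"{f}°F is about {f_to_c(f):.0f}°C")  # roughly 32°C, 204°C, 88°C, 91°C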

Details

A bun is a type of bread roll, typically filled with savory fillings (for example hamburger). A bun may also refer to a sweet cake in certain parts of the world. Though they come in many shapes and sizes, buns are most commonly round, and are generally hand-sized or smaller.

In the United Kingdom, the usage of the term differs greatly in different regions. In Southern England, a bun is a hand-sized sweet cake, while in Northern England, it is a small round of ordinary bread. In Ireland, a bun refers to a sweet cake, roughly analogous to an American cupcake.

Buns are usually made from a dough of flour, milk, yeast and small amounts of sugar and/or butter. Sweet bun dough is distinguished from bread dough by the addition of sugar, butter and sometimes egg. Common sweet varieties contain small fruit or nuts, topped with icing or caramel, and filled with jam or cream.

Chinese baozi, with savory or sweet fillings, are often referred to as "buns" in English.

Additional Information:

How to Make Hamburger Buns

It couldn't be easier to make these homemade hamburger buns. You'll find the step-by-step recipe below — but here's a brief overview of what you can expect:

* Make the dough.
* Let the dough rise.
* Form the buns.
* Bake the buns, then let them cool completely.
* Slice the buns in half lengthwise.

Begin by proofing the yeast with some of the flour. Once the yeast is foamy, add the remaining flour, an egg, butter, sugar, and salt. Fit a stand mixer with a dough hook and knead the dough, scraping the sides often, until it's soft and sticky.

Transfer the dough to a floured surface and form into a smooth, round ball. Tuck the ends underneath and return the dough to an oil-drizzled stand mixer bowl. Ensure the dough is coated with oil, then cover and allow to rise in a warm place until it has doubled in size.

Transfer the dough back to the floured work surface. Pat into a slightly rounded rectangle, then cut into eight equal squares. Use your hands to shape and pat the squares into discs. Arrange the buns on a floured baking sheet, cover, and let rise until they've doubled in size.

Brush the buns with an egg wash and sprinkle with sesame seeds. Bake in a preheated oven until lightly browned. Remove from the oven, allow the buns to cool, and slice in half lengthwise to serve.

How to Store Hamburger Buns

Store the homemade hamburger buns in an airtight container or wrapped in foil at room temperature for up to five days. Avoid short term refrigeration, as this will dry them out.

Can You Freeze Hamburger Buns?

Yes, you can freeze homemade hamburger buns (though they're best enjoyed fresh). Transfer the cooled buns to a freezer-safe container, then wrap in a layer of foil. Label with the date and freeze for up to two months. Thaw on a paper towel at room temperature. When it's about halfway thawed, flip the bun, and replace the paper towel.

Allrecipes Community Tips and Praise

"This recipe was so simple and fun to make," according to Kris Allfrey. "The hamburger buns were so light and the crumb texture was perfect. I followed the recipe and Chef John's instructions to the letter. I wouldn't change a thing about this recipe. They are excellent for pulled-pork sandwiches."

"I didn't make any changes except for what I put on top of the buns. I put dried onion, parsley, sesame seeds, and poppy seeds," says Ron Doty-Tolaro. "Also, as a rule of thumb, when making any kind of rising bread, lightly push in on the dough with one finger. If the dough is ready it will bounce back out. If the indentation stays in, knead it more."

AR-6761-tasty-buns-DDMFS-4x3-87727702d7944a07897f59ba6d3ab21d.jpg


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2455 2025-02-08 20:04:00

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,518

Re: Miscellany

2354) Hypertension

Gist

Hypertension (high blood pressure) is when the pressure in your blood vessels is too high (140/90 mmHg or higher). It is common but can be serious if not treated. People with high blood pressure may not feel symptoms. The only way to know is to get your blood pressure checked.

Summary

High blood pressure (also called hypertension) can lead to serious problems like heart attacks or strokes. But lifestyle changes and blood pressure medicines can help you stay healthy.

Check if you're at risk of high blood pressure

High blood pressure is very common, especially in older adults. There are usually no symptoms, so you may not realise you have it.

Things that increase your chances of having high blood pressure include:

* your age – you're more likely to get high blood pressure as you get older
* having close relatives with high blood pressure
* your ethnicity – you're at higher risk if you have a Black African, Black Caribbean or South Asian ethnic background
* having an unhealthy diet – especially a diet that's high in salt
* being overweight
* smoking
* drinking too much alcohol
* feeling stressed over a long period

Non-urgent advice: Get your blood pressure checked at a pharmacy or GP surgery if:

* you think you might have high blood pressure or might be at risk of having high blood pressure
* you're aged 40 or over and have not had your blood pressure checked for more than 5 years.

Some pharmacies may charge for a blood pressure check.

Some workplaces also offer blood pressure checks. Check with your employer.

Symptoms of high blood pressure

High blood pressure does not usually cause any symptoms.

Many people have it without realising it.

Rarely, high blood pressure can cause symptoms such as:

* headaches
* blurred vision
* chest pain

But the only way to find out if you have high blood pressure is to get your blood pressure checked.

Details

Hypertension, also known as high blood pressure, is a long-term medical condition in which the blood pressure in the arteries is persistently elevated. High blood pressure usually does not cause symptoms itself. It is, however, a major risk factor for stroke, coronary artery disease, heart failure, atrial fibrillation, peripheral arterial disease, vision loss, chronic kidney disease, and dementia. Hypertension is a major cause of premature death worldwide.

High blood pressure is classified as primary (essential) hypertension or secondary hypertension. About 90–95% of cases are primary, defined as high blood pressure due to nonspecific lifestyle and genetic factors. Lifestyle factors that increase the risk include excess salt in the diet, excess body weight, smoking, physical inactivity and alcohol use. The remaining 5–10% of cases are categorized as secondary hypertension, defined as high blood pressure due to a clearly identifiable cause, such as chronic kidney disease, narrowing of the kidney arteries, an endocrine disorder, or the use of birth control pills.

Blood pressure is classified by two measurements, the systolic (first number) and diastolic (second number) pressures. For most adults, normal blood pressure at rest is within the range of 100–140 millimeters of mercury (mmHg) systolic and 60–90 mmHg diastolic. For most adults, high blood pressure is present if the resting blood pressure is persistently at or above 130/80 or 140/90 mmHg. Different numbers apply to children. Ambulatory blood pressure monitoring over a 24-hour period appears more accurate than office-based blood pressure measurement.
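
As a rough illustration of how these cut-offs can be applied, here is a minimal sketch in Python. It assumes a single resting reading in mmHg, uses the 140/90 cut-off quoted above (130/80 can be passed instead), and flags the 180/120 crisis level described further below; the function name and structure are illustrative, not part of any clinical guideline.

def classify_blood_pressure(systolic, diastolic, threshold=(140, 90)):
    """Classify a single resting blood pressure reading in mmHg."""
    sys_cut, dia_cut = threshold
    if systolic >= 180 or diastolic >= 120:
        # described later in the text as a hypertensive crisis
        return "hypertensive crisis"
    if systolic >= sys_cut or diastolic >= dia_cut:
        return "hypertensive"
    if 100 <= systolic and 60 <= diastolic:
        return "within the normal resting range"
    return "below the normal resting range"

print(classify_blood_pressure(150, 95))   # -> hypertensive
print(classify_blood_pressure(120, 75))   # -> within the normal resting range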

Lifestyle changes and medications can lower blood pressure and decrease the risk of health complications. Lifestyle changes include weight loss, physical exercise, decreased salt intake, reducing alcohol intake, and a healthy diet. If lifestyle changes are not sufficient, blood pressure medications are used. Up to three medications taken concurrently can control blood pressure in 90% of people. The treatment of moderately high arterial blood pressure (defined as >160/100 mmHg) with medications is associated with an improved life expectancy. The effect of treatment of blood pressure between 130/80 mmHg and 160/100 mmHg is less clear, with some reviews finding benefit and others finding unclear benefit. High blood pressure affects 33% of the population globally. About half of all people with high blood pressure do not know that they have it. In 2019, high blood pressure was believed to have been a factor in 19% of all deaths (10.4 million globally).

Signs and symptoms

Hypertension is rarely accompanied by symptoms. Half of all people with hypertension are unaware that they have it. Hypertension is usually identified as part of health screening or when seeking healthcare for an unrelated problem.

Some people with high blood pressure report headaches, as well as lightheadedness, vertigo, tinnitus (buzzing or hissing in the ears), altered vision or fainting episodes. These symptoms, however, might be related to associated anxiety rather than the high blood pressure itself.

Long-standing untreated hypertension can cause organ damage with signs such as changes in the optic fundus seen by ophthalmoscopy. The severity of hypertensive retinopathy correlates roughly with the duration or the severity of the hypertension. Other hypertension-caused organ damage includes chronic kidney disease and thickening of the heart muscle.

Secondary hypertension

Secondary hypertension is hypertension due to an identifiable cause, and may result in certain specific additional signs and symptoms. For example, as well as causing high blood pressure, Cushing's syndrome frequently causes truncal obesity, glucose intolerance, moon face, a hump of fat behind the neck and shoulders (referred to as a buffalo hump), and purple abdominal stretch marks. Hyperthyroidism frequently causes weight loss with increased appetite, fast heart rate, bulging eyes, and tremor. Renal artery stenosis may be associated with a localized abdominal bruit to the left or right of the midline, or in both locations. Coarctation of the aorta frequently causes a decreased blood pressure in the lower extremities relative to the arms, or delayed or absent femoral arterial pulses. Pheochromocytoma may cause abrupt episodes of hypertension accompanied by headache, palpitations, pale appearance, and excessive sweating.

Hypertensive crisis

Severely elevated blood pressure (equal to or greater than a systolic 180 mmHg or diastolic of 120 mmHg) is referred to as a hypertensive crisis. Hypertensive crisis is categorized as either hypertensive urgency or hypertensive emergency, according to the absence or presence of end organ damage, respectively.

In hypertensive urgency, there is no evidence of end organ damage resulting from the elevated blood pressure. In these cases, oral medications are used to lower the BP gradually over 24 to 48 hours.

In hypertensive emergency, there is evidence of direct damage to one or more organs. The most affected organs include the brain, kidney, heart and lungs, producing symptoms which may include confusion, drowsiness, chest pain and breathlessness. In hypertensive emergency, the blood pressure must be reduced more rapidly to stop ongoing organ damage; however, there is a lack of randomized controlled trial evidence for this approach.

Pregnancy

Hypertension occurs in approximately 8–10% of pregnancies. Two blood pressure measurements six hours apart of greater than 140/90 mmHg are diagnostic of hypertension in pregnancy. High blood pressure in pregnancy can be classified as pre-existing hypertension, gestational hypertension, or pre-eclampsia. Women who have chronic hypertension before their pregnancy are at increased risk of complications such as premature birth, low birthweight or stillbirth. Women who have high blood pressure and had complications in their pregnancy have three times the risk of developing cardiovascular disease compared to women with normal blood pressure who had no complications in pregnancy.

Pre-eclampsia is a serious condition of the second half of pregnancy and following delivery characterised by increased blood pressure and the presence of protein in the urine. It occurs in about 5% of pregnancies and is responsible for approximately 16% of all maternal deaths globally. Pre-eclampsia also doubles the risk of death of the baby around the time of birth. Usually there are no symptoms in pre-eclampsia and it is detected by routine screening. When symptoms of pre-eclampsia occur the most common are headache, visual disturbance (often "flashing lights"), vomiting, pain over the stomach, and swelling. Pre-eclampsia can occasionally progress to a life-threatening condition called eclampsia, which is a hypertensive emergency and has several serious complications including vision loss, brain swelling, seizures, kidney failure, pulmonary edema, and disseminated intravascular coagulation (a blood clotting disorder).

In contrast, gestational hypertension is defined as new-onset hypertension during pregnancy without protein in the urine.

There have been significant findings showing that even a single bout of exercise can help reduce the effects of hypertension. Exercising can help reduce hypertension as well as pre-eclampsia and eclampsia.

The acute physiological responses include an increase in the individual's cardiac output (CO), through increased heart rate and stroke volume. This increase in CO helps maintain the amount of blood going to the muscles, improving muscle function later. Exercise can also improve systolic and diastolic blood pressure, making it easier for the heart to pump blood around the body. Through regular bouts of physical activity, blood pressure can be lowered and the incidence of hypertension reduced.

Aerobic exercise has been shown to regulate blood pressure more effectively than resistance training. To see the effects of exercising, it is recommended that a person aim for 5–7 days per week of aerobic exercise at a light to moderate intensity, using roughly 85% of maximum heart rate (estimated as 220 minus age). Aerobic exercise has been shown to decrease systolic blood pressure (SBP) by 5–15 mmHg, versus a decrease of only 3–5 mmHg for resistance training. Aerobic exercises such as jogging, rowing, dancing, or hiking decrease SBP the most. This decrease in SBP can moderate the effects of hypertension and help ensure the baby is not harmed. Resistance training places a greater acute load on the cardiovascular system of untrained individuals, which has led to a reluctance to prescribe it for reducing hypertension.
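
For the heart rate figure quoted above, a tiny arithmetic sketch (the function name and the example age are illustrative):

def target_heart_rate(age, fraction=0.85):
    """Estimated target heart rate in beats per minute."""
    max_hr = 220 - age          # estimated maximum heart rate, as quoted above
    return fraction * max_hr    # roughly 85% of that maximum

print(target_heart_rate(30))    # -> 161.5 (from an estimated maximum of 190 bpm)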

Children

Failure to thrive, seizures, irritability, lack of energy, and difficulty in breathing can be associated with hypertension in newborns and young infants. In older infants and children, hypertension can cause headache, unexplained irritability, fatigue, failure to thrive, blurred vision, nosebleeds, and facial paralysis.

Causes:

Primary hypertension

Primary (also termed essential) hypertension results from a complex interaction of genes and environmental factors. More than 2000 common genetic variants with small effects on blood pressure have been identified in association with high blood pressure, as well as some rare genetic variants with large effects on blood pressure. There is also evidence that DNA methylation at multiple nearby CpG sites may link some sequence variation to blood pressure, possibly via effects on vascular or renal function.

Blood pressure rises with aging in societies with a western diet and lifestyle, and the risk of becoming hypertensive in later life is substantial in most such societies. Several environmental or lifestyle factors influence blood pressure. Reducing dietary salt intake lowers blood pressure, as do weight loss, exercise training, vegetarian diets, increased dietary potassium intake and high dietary calcium supplementation. Increasing alcohol intake is associated with higher blood pressure, but the possible roles of other factors such as caffeine consumption and vitamin D deficiency are less clear. Average blood pressure is higher in the winter than in the summer.

Depression is associated with hypertension, and loneliness is also a risk factor. Periodontal disease is also associated with high blood pressure. Exposure to arsenic (As) through drinking water is associated with elevated blood pressure, as is air pollution. Whether these associations are causal is unknown. Gout and elevated blood uric acid are associated with hypertension, and evidence from genetic (Mendelian randomization) studies and clinical trials indicates this relationship is likely to be causal. Insulin resistance, which is common in obesity and is a component of syndrome X (or metabolic syndrome), can cause hyperuricemia and gout and is also associated with elevated blood pressure.

Events in early life, such as low birth weight, maternal smoking, and lack of breastfeeding may be risk factors for adult essential hypertension, although strength of the relationships is weak and the mechanisms linking these exposures to adult hypertension remain unclear.

Secondary hypertension

Secondary hypertension results from an identifiable cause. Kidney disease is the most common secondary cause of hypertension. Hypertension can also be caused by endocrine conditions, such as Cushing's syndrome, hyperthyroidism, hypothyroidism, acromegaly, Conn's syndrome or hyperaldosteronism, renal artery stenosis (from atherosclerosis or fibromuscular dysplasia), hyperparathyroidism, and pheochromocytoma. Other causes of secondary hypertension include obesity, sleep apnea, pregnancy, coarctation of the aorta, excessive eating of liquorice, excessive drinking of alcohol, certain prescription medicines, herbal remedies, and stimulants such as cocaine and methamphetamine.

A 2018 review found that any amount of alcohol increased blood pressure in males, while consumption of more than one or two drinks increased the risk in females.

Additional Information

Hypertension is a condition that arises when the blood pressure is abnormally high. Hypertension occurs when the body’s smaller blood vessels (the arterioles) narrow, causing the blood to exert excessive pressure against the vessel walls and forcing the heart to work harder to maintain the pressure. Although the heart and blood vessels can tolerate increased blood pressure for months and even years, eventually the heart may enlarge (a condition called hypertrophy) and be weakened to the point of failure. Injury to blood vessels in the kidneys, brain, and eyes also may occur.

Blood pressure is actually a measure of two pressures, the systolic and the diastolic. The systolic pressure (the higher pressure and the first number recorded) is the force that blood exerts on the artery walls as the heart contracts to pump the blood to the peripheral organs and tissues. The diastolic pressure (the lower pressure and the second number recorded) is residual pressure exerted on the arteries as the heart relaxes between beats. A diagnosis of hypertension is made when blood pressure reaches or exceeds 140/90 mmHg (read as “140 over 90 millimeters of mercury”).

Classification

When there is no demonstrable underlying cause of hypertension, the condition is classified as essential hypertension. (Essential hypertension is also called primary or idiopathic hypertension.) This is by far the most common type of high blood pressure, occurring in 90 to 95 percent of patients. Genetic factors appear to play a major role in the occurrence of essential hypertension. Secondary hypertension is associated with an underlying disease, which may be renal, neurologic, or endocrine in origin; examples of such diseases include Bright disease (glomerulonephritis; inflammation of the urine-producing structures in the kidney), atherosclerosis of blood vessels in the brain, and Cushing syndrome (hyperactivity of the adrenal glands). In cases of secondary hypertension, correction of the underlying cause may cure the hypertension. Various external agents also can raise blood pressure. These include cocaine, amphetamines, cold remedies, thyroid supplements, corticosteroids, nonsteroidal anti-inflammatory drugs (NSAIDs), and oral contraceptives.

Malignant hypertension is present when there is a sustained or sudden rise in diastolic blood pressure exceeding 120 mmHg, with accompanying evidence of damage to organs such as the eyes, brain, heart, and kidneys. Malignant hypertension is a medical emergency and requires immediate therapy and hospitalization.

Epidemiology

Elevated arterial pressure is one of the most important public health problems in developed countries. In the United States, for instance, nearly 30 percent of the adult population is hypertensive. High blood pressure is significantly more prevalent and serious among African Americans. Age, race, gender, smoking, alcohol intake, elevated serum cholesterol, salt intake, glucose intolerance, obesity, and stress all may contribute to the degree and prognosis of the disease. In both men and women, the risk of developing high blood pressure increases with age.

Hypertension has been called the “silent killer” because it usually produces no symptoms. It is important, therefore, for anyone with risk factors to have their blood pressure checked regularly and to make appropriate lifestyle changes.

Complications

The most common immediate cause of hypertension-related death is heart disease, but death from stroke or renal (kidney) failure is also frequent. Complications result directly from the increased pressure (cerebral hemorrhage, retinopathy, left ventricular hypertrophy, congestive heart failure, arterial aneurysm, and vascular rupture), from atherosclerosis (increased coronary, cerebral, and renal vascular resistance), and from decreased blood flow and ischemia (myocardial infarction, cerebral thrombosis and infarction, and renal nephrosclerosis). The risk of developing many of these complications is greatly elevated when hypertension is diagnosed in young adulthood.

Treatment

Effective treatment will reduce overall cardiovascular morbidity and mortality. Nondrug therapy consists of: (1) relief of stress, (2) dietary management (restricted intake of salt, calories, cholesterol, and saturated fats; sufficient intake of potassium, magnesium, calcium, and vitamin C), (3) regular aerobic exercise, (4) weight reduction, (5) smoking cessation, and (6) reduced intake of alcohol and caffeine.

Mild to moderate hypertension may be controlled by a single-drug regimen, although more severe cases often require a combination of two or more drugs. Diuretics are a common medication; these agents lower blood pressure primarily by reducing body fluids and thereby reducing peripheral resistance to blood flow. However, they deplete the body’s supply of potassium, so it is recommended that potassium supplements be added or that potassium-sparing diuretics be used. Beta-adrenergic blockers (beta-blockers) block the effects of epinephrine (adrenaline), thus easing the heart’s pumping action and widening blood vessels. Vasodilators act by relaxing smooth muscle in the walls of blood vessels, allowing small arteries to dilate and thereby decreasing total peripheral resistance. Calcium channel blockers promote peripheral vasodilation and reduce vascular resistance. Angiotensin-converting enzyme (ACE) inhibitors inhibit the generation of a potent vasoconstriction agent (angiotensin II), and they also may retard the degradation of a potent vasodilator (bradykinin) and involve the synthesis of vasodilatory prostaglandins. Angiotensin receptor antagonists are similar to ACE inhibitors in utility and tolerability, but instead of blocking the production of angiotensin II, they completely inhibit its binding to the angiotensin II receptor. Statins, best known for their use as cholesterol-lowering agents, have shown promise as antihypertensive drugs because of their ability to lower both diastolic and systolic blood pressure. The mechanism by which statins act to reduce blood pressure is unknown; however, scientists suspect that these drugs activate substances involved in vasodilation.

Other agents that may be used in the treatment of hypertension include the antidiabetic drug semaglutide and the drug aprocitentan. Semaglutide is used specifically in patients who are obese or overweight. The drug acts as a glucagon-like peptide-1 (GLP-1) receptor agonist; GLP-1 interacts with receptors in the brain involved in the regulation of appetite, and thus semaglutide effectively triggers a reduction in appetite and thereby helps relieve symptoms of weight-related complications, such as hypertension. Aprocitentan acts as an inhibitor at endothelin A and endothelin B receptors, preventing binding by endothelin-1, which is a key protein involved in the activation of vasoconstriction and inflammatory processes in blood vessels.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2456 2025-02-09 17:05:10

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,518

Re: Miscellany

2355) Hypotension

Gist

Low blood pressure is a condition in which the force of the blood pushing against the artery walls is too low. It's also called hypotension. Blood pressure is measured in millimeters of mercury (mm Hg). In general, low blood pressure is a reading lower than 90/60 mm Hg.

Low blood pressure occurs when blood pressure is much lower than normal. This means the heart, brain, and other parts of the body may not get enough blood. Normal blood pressure is mostly between 90/60 mmHg and 120/80 mmHg. The medical word for low blood pressure is hypotension.

Summary

Hypotension, also known as low blood pressure, is a cardiovascular condition characterized by abnormally reduced blood pressure. Blood pressure is the force of blood pushing against the walls of the arteries as the heart pumps out blood and is indicated by two numbers, the systolic blood pressure (the top number) and the diastolic blood pressure (the bottom number), which are the maximum and minimum blood pressures within the cardiac cycle, respectively. A systolic blood pressure of less than 90 millimeters of mercury (mmHg) or diastolic of less than 60 mmHg is generally considered to be hypotension. Different numbers apply to children. However, in practice, blood pressure is considered too low only if noticeable symptoms are present.

Symptoms may include dizziness, lightheadedness, confusion, feeling tired, weakness, headache, blurred vision, nausea, neck or back pain, an irregular heartbeat or feeling that the heart is skipping beats or fluttering, sweating, and fainting. Hypotension is the opposite of hypertension, which is high blood pressure. It is best understood as a physiological state rather than a disease. Severely low blood pressure can deprive the brain and other vital organs of oxygen and nutrients, leading to a life-threatening condition called shock. Shock is classified based on the underlying cause, including hypovolemic shock, cardiogenic shock, distributive shock, and obstructive shock.

Hypotension can be caused by strenuous exercise, excessive heat, low blood volume (hypovolemia), hormonal changes, widening of blood vessels, anemia, vitamin B12 deficiency, anaphylaxis, heart problems, or endocrine problems. Some medications can also lead to hypotension. There are also syndromes that can cause hypotension in patients including orthostatic hypotension, vasovagal syncope, and other rarer conditions.

For many people, excessively low blood pressure can cause dizziness and fainting or indicate serious heart, endocrine or neurological disorders.

For some people who exercise and are in top physical condition, low blood pressure could be normal. A single session of exercise can induce hypotension and water-based exercise can induce a hypotensive response.

Treatment depends on what causes low blood pressure. Treatment of hypotension may include the use of intravenous fluids or vasopressors. When using vasopressors, trying to achieve a mean arterial pressure (MAP) of greater than 70 mmHg does not appear to result in better outcomes than trying to achieve an MAP of greater than 65 mmHg in adults.
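
The MAP targets above can be related to an ordinary cuff reading with the standard clinical approximation MAP ≈ diastolic + (systolic − diastolic)/3. That formula is not given in the text above, so the sketch below is only an illustration:

def mean_arterial_pressure(systolic, diastolic):
    """Approximate mean arterial pressure (mmHg) from a cuff reading,
    using the standard rule of thumb MAP = DBP + (SBP - DBP) / 3."""
    return diastolic + (systolic - diastolic) / 3.0

print(round(mean_arterial_pressure(100, 55)))  # -> 70, at the higher target
print(round(mean_arterial_pressure(90, 50)))   # -> 63, just below the 65 mmHg target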

Details

Low blood pressure is a reading below 90/60 mm Hg. Many issues can cause low blood pressure. Treatment varies depending on what’s causing it. Symptoms of low blood pressure include dizziness and fainting, but many people don’t have symptoms. The cause also affects your prognosis.

Symptoms of low blood pressure include feeling tired or dizzy.

What is low blood pressure?

Hypotension, or low blood pressure, is when your blood pressure is much lower than expected. It can happen either as a condition on its own or as a symptom of a wide range of conditions. It may not cause symptoms. But when it does, you may need medical attention.

Types of low blood pressure

Hypotension has two definitions:

* Absolute hypotension: Your resting blood pressure is below 90/60 millimeters of mercury (mm Hg).
* Orthostatic hypotension: Your blood pressure stays low for longer than three minutes after you stand up from a sitting position. (It’s normal for your blood pressure to drop briefly when you change positions, but not for that long.) The drop must be 20 mm Hg or more for your systolic (top) pressure and 10 mm Hg or more for your diastolic (bottom) pressure. Another name for this is postural hypotension because it happens with changes in posture.

Measuring blood pressure involves two numbers:

* Systolic (top number): This is the pressure on your arteries each time your heart beats.
* Diastolic (bottom number): This is how much pressure your arteries are under between heartbeats.

What is considered low blood pressure?


Low blood pressure is below 90/60 mm Hg. Normal blood pressure is above that, up to 120/80 mm Hg.
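
Putting the definitions above together, here is a minimal sketch in Python. Following the summary earlier, either number below the 90/60 cut-off counts as low, and the orthostatic check uses the 20/10 mm Hg drop still present about three minutes after standing; the function and parameter names are illustrative, and real diagnosis also depends on symptoms and repeated measurements.

def is_absolute_hypotension(systolic, diastolic):
    """Resting blood pressure below the 90/60 mm Hg cut-off."""
    return systolic < 90 or diastolic < 60

def is_orthostatic_hypotension(seated, standing_after_3_min):
    """Drop of at least 20 mm Hg systolic or 10 mm Hg diastolic that is
    still present about three minutes after standing up. Each argument
    is a (systolic, diastolic) reading in mm Hg."""
    sys_drop = seated[0] - standing_after_3_min[0]
    dia_drop = seated[1] - standing_after_3_min[1]
    return sys_drop >= 20 or dia_drop >= 10

print(is_absolute_hypotension(85, 55))                    # -> True
print(is_orthostatic_hypotension((120, 80), (95, 72)))    # -> True (drop of 25/8)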

How common is low blood pressure?

Because low blood pressure is common without any symptoms, it’s impossible to know how many people it affects. However, orthostatic hypotension seems to be more and more common as you get older. An estimated 5% of people have it at age 50, while that figure climbs to more than 30% in people over 70.

Who does low blood pressure affect?

Hypotension can affect people of any age and background, depending on why it happens. However, it’s more likely to cause symptoms in people over 50 (especially orthostatic hypotension). It can also happen (with no symptoms) to people who are very physically active, which is more common in younger people.

Symptoms and Causes:

What are the symptoms of low blood pressure?

Low blood pressure symptoms include:

* Dizziness or feeling lightheaded.
* Fainting or passing out (syncope).
* Nausea or vomiting.
* Distorted or blurred vision.
* Fast, shallow breathing.
* Fatigue or weakness.
* Feeling tired, sluggish or lethargic.
* Confusion or trouble concentrating.
* Agitation or other unusual changes in behavior (a person not acting like themselves).

For people with symptoms, the effects depend on why hypotension is happening, how fast it develops and what caused it. Slow decreases in blood pressure happen normally, so hypotension becomes more common as people get older. Fast decreases in blood pressure can mean certain parts of your body aren’t getting enough blood flow. That can have effects that are unpleasant, disruptive or even dangerous.

Usually, your body can automatically control your blood pressure and keep it from dropping too much. If it starts to drop, your body tries to make up for that, either by speeding up your heart rate or constricting blood vessels to make them narrower. Symptoms of hypotension happen when your body can’t offset the drop in blood pressure.

For many people, hypotension doesn’t cause any symptoms. Many people don’t even know their blood pressure is low unless they measure their blood pressure.

What are the possible signs of low blood pressure?

Your healthcare provider may observe these signs of low blood pressure:

* A heart rate that’s too slow or too fast.
* A skin color that looks lighter than it usually does.
* Cool kneecaps.
* Low cardiac output (how much blood your heart pumps).
* Low urine (pee) output.

What causes low blood pressure?

Hypotension can happen for a wide range of reasons. Causes of low blood pressure include:

* Orthostatic hypotension: This happens when you stand up too quickly and your body can’t compensate with more blood flow to your brain.
* Central nervous system diseases: Conditions like Parkinson’s disease can affect how your nervous system controls your blood pressure. People with these conditions may feel the effects of low blood pressure after eating because their digestive systems use more blood as they digest food.
* Low blood volume: Blood loss from severe injuries can cause low blood pressure. Dehydration can also contribute to low blood volume.
* Life-threatening conditions: These conditions include irregular heart rhythms (arrhythmias), pulmonary embolism (PE), heart attacks and collapsed lung. Life-threatening allergic reactions (anaphylaxis) or immune reactions to severe infections (sepsis) can also cause hypotension.
* Heart and lung conditions: You can get hypotension when your heart beats too quickly or too slowly, or if your lungs aren’t working as they should. Advanced heart failure (weak heart muscle) is another cause.
* Prescription medications: Hypotension can happen with medications that treat high blood pressure, heart failure, erectile dysfunction, neurological problems, depression and more. Don’t stop taking any prescribed medicine unless your provider tells you to stop.
* Alcohol or recreational drugs: Recreational drugs can lower your blood pressure, as can alcohol (for a short time). Certain herbal supplements, vitamins or home remedies can also lower your blood pressure. This is why you should always include these when you tell your healthcare provider what medications you’re taking.
* Pregnancy: Orthostatic hypotension is possible in the first and second trimesters of pregnancy. Bleeding or other complications of pregnancy can also cause low blood pressure.
* Extreme temperatures: Being too hot or too cold can affect hypotension and make its effects worse.

What are the complications of low blood pressure?

Complications that can happen because of hypotension include:

* Falls and fall-related injuries: These are the biggest risks with hypotension because it can cause dizziness and fainting. Falls can lead to broken bones, concussions and other serious or even life-threatening injuries. If you have hypotension, preventing falls should be one of your biggest priorities.
* Shock: When your blood pressure is low, that can affect your organs by reducing the amount of blood they get. That can cause organ damage or even shock (where your body starts to shut down because of limited blood flow and oxygen).
* Heart problems or stroke: Low blood pressure can cause your heart to try to compensate by pumping faster or harder. Over time, that can cause permanent heart damage and even heart failure. It can also cause problems like deep vein thrombosis (DVT) and stroke because blood isn’t flowing like it should, causing clots to form.

Diagnosis and Tests:

How is low blood pressure diagnosed?

Hypotension itself is easy to diagnose. Taking your blood pressure is all you need to do. But figuring out why you have hypotension is another story. If you have symptoms, a healthcare provider will likely use a variety of tests to figure out why it’s happening and if there’s any danger to you because of it.

What tests will be done to diagnose low blood pressure?

Your provider may recommend the following tests:

Lab testing

Tests on your blood and pee (urine) can look for any potential problems, like:

* Diabetes.
* Vitamin deficiencies.
* Thyroid or hormone problems.
* Low iron levels (anemia).
* Pregnancy (for anyone who can become pregnant).

Imaging

If providers suspect a heart or lung problem is behind your hypotension, they’ll likely use imaging tests to see if they’re right. These tests include:

* X-rays.
* Computed tomography (CT) scans.
* Magnetic resonance imaging (MRI).
* Echocardiogram or similar ultrasound-based tests.

Diagnostic testing

These tests look for specific problems with your heart or other body systems.

* Electrocardiogram (ECG or EKG).
* Exercise stress testing.
* Tilt table test (can help in diagnosing orthostatic hypotension).

Additional Information

Hypotension is a condition in which the blood pressure is abnormally low, either because of reduced blood volume or because of increased blood-vessel capacity. Though not in itself an indication of ill health, it often accompanies disease.

Extensive bleeding is an obvious cause of reduced blood volume that leads to hypotension. There are other possible causes. A person who has suffered an extensive burn loses blood plasma—blood minus the red and white blood cells and the platelets. Blood volume is reduced in a number of conditions involving loss of salt and water from the tissues—as in excessive sweating and diarrhea—and its replacement with water from the blood. Loss of water from the blood to the tissues may result from exposure to cold temperatures. Also, a person who remains standing for as long as one-half hour may temporarily lose as much as 15 percent of the blood water into the tissues of the legs.

Orthostatic hypotension—low blood pressure upon standing up—seems to stem from a failure in the autonomic nervous system. Normally, when a person stands up, there is a reflex constriction of the small arteries and veins to offset the effects of gravity. Hypotension from an increase in the capacity of the blood vessels is a factor in fainting (see syncope). Hypotension is also a factor in poliomyelitis, in shock, and in overdose of depressant drugs, such as barbiturates.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2457 2025-02-10 17:51:42

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,518

Re: Miscellany

2356) Mathematician

Gist

A mathematician is someone who uses an extensive knowledge of mathematics in their work, typically to solve mathematical problems. Mathematicians are concerned with numbers, data, quantity, structure, space, models, and change.

Summary

Mathematics is the science of structure, order, and relation that has evolved from elemental practices of counting, measuring, and describing the shapes of objects. It deals with logical reasoning and quantitative calculation, and its development has involved an increasing degree of idealization and abstraction of its subject matter. Since the 17th century, mathematics has been an indispensable adjunct to the physical sciences and technology, and in more recent times it has assumed a similar role in the quantitative aspects of the life sciences.

In many cultures—under the stimulus of the needs of practical pursuits, such as commerce and agriculture—mathematics has developed far beyond basic counting. This growth has been greatest in societies complex enough to sustain these activities and to provide leisure for contemplation and the opportunity to build on the achievements of earlier mathematicians.

All mathematical systems (for example, Euclidean geometry) are combinations of sets of axioms and of theorems that can be logically deduced from the axioms. Inquiries into the logical and philosophical basis of mathematics reduce to questions of whether the axioms of a given system ensure its completeness and its consistency. For full treatment of this aspect, see mathematics, foundations of.

As a consequence of the exponential growth of science, most mathematics has developed since the 15th century ce, and it is a historical fact that, from the 15th century to the late 20th century, new developments in mathematics were largely concentrated in Europe and North America. For these reasons, the bulk of this article is devoted to European developments since 1500.

This does not mean, however, that developments elsewhere have been unimportant. Indeed, to understand the history of mathematics in Europe, it is necessary to know its history at least in ancient Mesopotamia and Egypt, in ancient Greece, and in Islamic civilization from the 9th to the 15th century. The way in which these civilizations influenced one another and the important direct contributions Greece and Islam made to later developments are discussed in the first parts of this article.

India’s contributions to the development of contemporary mathematics were made through the considerable influence of Indian achievements on Islamic mathematics during its formative years. A separate article, South Asian mathematics, focuses on the early history of mathematics in the Indian subcontinent and the development there of the modern decimal place-value numeral system. The article East Asian mathematics covers the mostly independent development of mathematics in China, Japan, Korea, and Vietnam.

The substantive branches of mathematics are treated in several articles. See algebra; analysis; arithmetic; combinatorics; game theory; geometry; number theory; numerical analysis; optimization; probability theory; set theory; statistics; trigonometry.

Ancient mathematical sources

It is important to be aware of the character of the sources for the study of the history of mathematics. The history of Mesopotamian and Egyptian mathematics is based on the extant original documents written by scribes. Although in the case of Egypt these documents are few, they are all of a type and leave little doubt that Egyptian mathematics was, on the whole, elementary and profoundly practical in its orientation. For Mesopotamian mathematics, on the other hand, there are a large number of clay tablets, which reveal mathematical achievements of a much higher order than those of the Egyptians. The tablets indicate that the Mesopotamians had a great deal of remarkable mathematical knowledge, although they offer no evidence that this knowledge was organized into a deductive system. Future research may reveal more about the early development of mathematics in Mesopotamia or about its influence on Greek mathematics, but it seems likely that this picture of Mesopotamian mathematics will stand.

From the period before Alexander the Great, no Greek mathematical documents have been preserved except for fragmentary paraphrases, and, even for the subsequent period, it is well to remember that the oldest copies of Euclid’s Elements are in Byzantine manuscripts dating from the 10th century ce. This stands in complete contrast to the situation described above for Egyptian and Babylonian documents. Although, in general outline, the present account of Greek mathematics is secure, in such important matters as the origin of the axiomatic method, the pre-Euclidean theory of ratios, and the discovery of the conic sections, historians have given competing accounts based on fragmentary texts, quotations of early writings culled from nonmathematical sources, and a considerable amount of conjecture.

Many important treatises from the early period of Islamic mathematics have not survived or have survived only in Latin translations, so that there are still many unanswered questions about the relationship between early Islamic mathematics and the mathematics of Greece and India. In addition, the amount of surviving material from later centuries is so large in comparison with that which has been studied that it is not yet possible to offer any sure judgment of what later Islamic mathematics did not contain, and therefore it is not yet possible to evaluate with any assurance what was original in European mathematics from the 11th to the 15th century.

In modern times the invention of printing has largely solved the problem of obtaining secure texts and has allowed historians of mathematics to concentrate their editorial efforts on the correspondence or the unpublished works of mathematicians. However, the exponential growth of mathematics means that, for the period from the 19th century on, historians are able to treat only the major figures in any detail. In addition, there is, as the period gets nearer the present, the problem of perspective. Mathematics, like any other human activity, has its fashions, and the nearer one is to a given period, the more likely these fashions will look like the wave of the future. For this reason, the present article makes no attempt to assess the most recent developments in the subject.

Details

A mathematician is someone who uses an extensive knowledge of mathematics in their work, typically to solve mathematical problems. Mathematicians are concerned with numbers, data, quantity, structure, space, models, and change.

History

One of the earliest known mathematicians was Thales of Miletus (c. 624 – c. 546 BC); he has been hailed as the first true mathematician and the first known individual to whom a mathematical discovery has been attributed. He is credited with the first use of deductive reasoning applied to geometry, by deriving four corollaries to Thales's theorem.

The number of known mathematicians grew when Pythagoras of Samos (c. 582 – c. 507 BC) established the Pythagorean school, whose doctrine it was that mathematics ruled the universe and whose motto was "All is number". It was the Pythagoreans who coined the term "mathematics", and with whom the study of mathematics for its own sake begins.

The first woman mathematician recorded by history was Hypatia of Alexandria (c. AD 350 – 415). She succeeded her father as librarian at the Great Library and wrote many works on applied mathematics. Because of a political dispute, the Christian community in Alexandria punished her, presuming she was involved, by stripping her naked and scraping off her skin with clamshells (some say roofing tiles).

Science and mathematics in the Islamic world during the Middle Ages followed various models, and modes of funding varied, based primarily on the scholars involved. It was extensive patronage and strong intellectual policies implemented by specific rulers that allowed scientific knowledge to develop in many areas. Funding for the translation of scientific texts in other languages was ongoing throughout the reigns of certain caliphs, and it turned out that certain scholars became experts in the works they translated and in turn received further support for continuing to develop certain sciences. As these sciences received wider attention from the elite, more scholars were invited and funded to study particular sciences. An example of a translator and mathematician who benefited from this type of support was Al-Khawarizmi. A notable feature of many scholars working under Muslim rule in medieval times is that they were often polymaths. Examples include the work on optics, maths and astronomy of Ibn al-Haytham.

The Renaissance brought an increased emphasis on mathematics and science to Europe. During this period of transition from a mainly feudal and ecclesiastical culture to a predominantly secular one, many notable mathematicians had other occupations: Luca Pacioli (founder of accounting); Niccolò Fontana Tartaglia (notable engineer and bookkeeper); Gerolamo Cardano (earliest founder of probability and binomial expansion); Robert Recorde (physician) and François Viète (lawyer).

As time passed, many mathematicians gravitated towards universities. An emphasis on free thinking and experimentation had begun in Britain's oldest universities in the seventeenth century, at Oxford with the scientists Robert Hooke and Robert Boyle, and at Cambridge, where Isaac Newton was Lucasian Professor of Mathematics. Moving into the 19th century, the objective of universities all across Europe evolved from teaching the "regurgitation of knowledge" to "encouraging productive thinking." In 1810, Alexander von Humboldt convinced the king of Prussia, Frederick William III, to build a university in Berlin based on Friedrich Schleiermacher's liberal ideas; the goal was to demonstrate the process of the discovery of knowledge and to teach students to "take account of fundamental laws of science in all their thinking." Thus, seminars and laboratories started to evolve.

British universities of this period adopted some approaches familiar to the Italian and German universities, but as they already enjoyed substantial freedoms and autonomy the changes there had begun with the Age of Enlightenment, the same influences that inspired Humboldt. The Universities of Oxford and Cambridge emphasized the importance of research, arguably more authentically implementing Humboldt's idea of a university than even German universities, which were subject to state authority. Overall, science (including mathematics) became the focus of universities in the 19th and 20th centuries. Students could conduct research in seminars or laboratories and began to produce doctoral theses with more scientific content. According to Humboldt, the mission of the University of Berlin was to pursue scientific knowledge. The German university system fostered professional, bureaucratically regulated scientific research performed in well-equipped laboratories, instead of the kind of research done by private and individual scholars in Great Britain and France. In fact, Rüegg asserts that the German system is responsible for the development of the modern research university because it focused on the idea of "freedom of scientific research, teaching and study."

Required education

Mathematicians usually cover a breadth of topics within mathematics in their undergraduate education, and then proceed to specialize in topics of their own choice at the graduate level. In some universities, a qualifying exam serves to test both the breadth and depth of a student's understanding of mathematics; the students who pass are permitted to work on a doctoral dissertation.

Activities:

Applied mathematics

Mathematicians involved with solving problems with applications in real life are called applied mathematicians. Applied mathematicians are mathematical scientists who, with their specialized knowledge and professional methodology, approach many of the imposing problems presented in related scientific fields. With professional focus on a wide variety of problems, theoretical systems, and localized constructs, applied mathematicians work regularly in the study and formulation of mathematical models. Mathematicians and applied mathematicians are considered to be two of the STEM (science, technology, engineering, and mathematics) careers.

The discipline of applied mathematics concerns itself with mathematical methods that are typically used in science, engineering, business, and industry; thus, "applied mathematics" is a mathematical science with specialized knowledge. The term "applied mathematics" also describes the professional specialty in which mathematicians work on problems, often concrete but sometimes abstract. As professionals focused on problem solving, applied mathematicians look into the formulation, study, and use of mathematical models in science, engineering, business, and other areas of mathematical practice.

Pure mathematics

Pure mathematics is mathematics that studies entirely abstract concepts. From the eighteenth century onwards, this was a recognized category of mathematical activity, sometimes characterized as speculative mathematics, and at variance with the trend towards meeting the needs of navigation, astronomy, physics, economics, engineering, and other applications.

Another insightful view put forth is that pure mathematics is not necessarily applied mathematics: it is possible to study abstract entities with respect to their intrinsic nature, and not be concerned with how they manifest in the real world. Even though the pure and applied viewpoints are distinct philosophical positions, in practice there is much overlap in the activity of pure and applied mathematicians.

To develop accurate models for describing the real world, many applied mathematicians draw on tools and techniques that are often considered to be "pure" mathematics. On the other hand, many pure mathematicians draw on natural and social phenomena as inspiration for their abstract research.

Mathematics teaching

Many professional mathematicians also engage in the teaching of mathematics. Duties may include:

* teaching university mathematics courses;
* supervising undergraduate and graduate research; and
* serving on academic committees.

Consulting

Many careers in mathematics outside of universities involve consulting. For instance, actuaries assemble and analyze data to estimate the probability and likely cost of the occurrence of an event such as death, sickness, injury, disability, or loss of property. Actuaries also address financial questions, including those involving the level of pension contributions required to produce a certain retirement income and the way in which a company should invest resources to maximize its return on investments in light of potential risk. Using their broad knowledge, actuaries help design and price insurance policies, pension plans, and other financial strategies in a manner which will help ensure that the plans are maintained on a sound financial basis.

As another example, mathematical finance derives and extends mathematical or numerical models without necessarily establishing a link to financial theory, taking observed market prices as input. Mathematical consistency is required, not compatibility with economic theory. Thus, for example, while a financial economist might study the structural reasons why a company may have a certain share price, a financial mathematician may take the share price as a given and attempt to use stochastic calculus to obtain the corresponding value of derivatives of the stock.
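
As a hedged illustration of the kind of model meant here, the sketch below prices a European call option with the Black-Scholes formula, a standard textbook result of stochastic calculus; the choice of model and all parameter values are assumptions made purely for illustration, not something drawn from the text above.

from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(spot, strike, rate, sigma, maturity):
    """Price of a European call option under the Black-Scholes model.
    spot: observed market price of the stock (taken as a given),
    strike: strike price, rate: risk-free rate, sigma: volatility,
    maturity: time to expiry in years."""
    d1 = (log(spot / strike) + (rate + 0.5 * sigma ** 2) * maturity) / (sigma * sqrt(maturity))
    d2 = d1 - sigma * sqrt(maturity)
    return spot * norm_cdf(d1) - strike * exp(-rate * maturity) * norm_cdf(d2)

# Illustrative parameters only: spot 100, strike 105, 5% rate,
# 20% volatility, one year to expiry (price comes out near 8).
print(round(black_scholes_call(100, 105, 0.05, 0.20, 1.0), 2))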

Occupations

In 1938 in the United States, mathematicians were desired as teachers, calculating machine operators, mechanical engineers, accounting auditor bookkeepers, and actuary statisticians.
According to the Dictionary of Occupational Titles, occupations in mathematics include the following.

* Mathematician
* Operations-Research Analyst
* Mathematical Statistician
* Mathematical Technician
* Actuary
* Applied Statistician
* Weight Analyst

Prizes in mathematics

There is no Nobel Prize in mathematics, though sometimes mathematicians have won the Nobel Prize in a different field, such as economics or physics. Prominent prizes in mathematics include the Abel Prize, the Chern Medal, the Fields Medal, the Gauss Prize, the Nemmers Prize, the Balzan Prize, the Crafoord Prize, the Shaw Prize, the Steele Prize, the Wolf Prize, the Schock Prize, and the Nevanlinna Prize.

The American Mathematical Society, Association for Women in Mathematics, and other mathematical societies offer several prizes aimed at increasing the representation of women and minorities in the future of mathematics.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2458 2025-02-11 00:06:13

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,518

Re: Miscellany

2357) Mortar

Gist

Mortar is a combination of sand, a binding agent like cement or lime and water, used in masonry buildings to bridge the space between building blocks. It is applied in the form of a paste which then hardens and binds the masonry units such as stones, bricks, or concrete used in the construction.

Common mortar specifications include 1:3, 1:2:9 or 1:1:6 mixes. The first one or two digits refer to the binder content (lime, cement or both) and the last digit always refers to the filler, which is usually sand. So a 1:3 mix could mean one part by volume of lime or cement to three parts by volume of sand.
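
As a small arithmetic sketch of reading such a specification, the snippet below splits a batch volume across the ratio (treating the constituent volumes as simply additive, which is a simplification); the function name and the example batch size are illustrative.

def split_mortar_batch(ratio, total_volume):
    """Split a batch volume into constituent volumes for a mix ratio.
    ratio: e.g. (1, 3), (1, 2, 9) or (1, 1, 6), binder(s) first, sand last.
    total_volume: total batch volume in any consistent unit."""
    parts = sum(ratio)
    return [r * total_volume / parts for r in ratio]

# Example: a 1:1:6 (cement : lime : sand) mix for a 40-litre batch.
cement, lime, sand = split_mortar_batch((1, 1, 6), 40)
print(cement, lime, sand)   # -> 5.0 5.0 30.0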

Summary

Mortar, in technology, is material used in building construction to bond brick, stone, tile, or concrete blocks into a structure. Mortar consists of inert siliceous (sandy) material mixed with cement and water in such proportions that the resulting substance will be sufficiently plastic to enable ready application with the mason’s trowel and to flow slightly but not collapse under the weight of the masonry units. Slaked lime is often added to promote smoothness, and sometimes colouring agents are also added. Cement is the most costly ingredient and is held to the minimum consistent with desired strength and watertightness.

Mortar hardens into a stonelike mass and, properly applied, distributes the load of the structure uniformly over the bonding surfaces and provides a weathertight joint.

Details

Mortar is a workable paste which hardens to bind building blocks such as stones, bricks, and concrete masonry units, to fill and seal the irregular gaps between them, spread their weight evenly, and sometimes to add decorative colours or patterns to masonry walls. In its broadest sense, mortar includes pitch, asphalt, and soft clay, such as those used between bricks, as well as cement mortar. The word "mortar" comes from the Old French word mortier, "builder's mortar, plaster; bowl for mixing."

Cement mortar becomes hard when it cures, resulting in a rigid aggregate structure; however, the mortar functions as a weaker component than the building blocks and serves as the sacrificial element in the masonry, because mortar is easier and less expensive to repair than the building blocks. Bricklayers typically make mortars using a mixture of sand, a binder, and water. The most common binder since the early 20th century is Portland cement, but the ancient binder lime (producing lime mortar) is still used in some specialty new construction. Lime, lime mortar, and gypsum in the form of plaster of Paris are used particularly in the repair and repointing of historic buildings and structures, so that the repair materials will be similar in performance and appearance to the original materials. Several types of cement mortars and additives exist.

Ancient mortar

The first mortars were made of mud and clay, as demonstrated in the 10th-millennium-BCE buildings of Jericho and the 8th-millennium-BCE buildings of Ganj Dareh.

According to Roman Ghirshman, the first evidence of humans using a form of mortar was at the Mehrgarh of Baluchistan in what is today Pakistan, built of sun-dried bricks in 6500 BCE.

Gypsum mortar, also called plaster of Paris, was used in the construction of many ancient structures. It is made from gypsum, which requires a lower firing temperature than lime. It is therefore easier to make than lime mortar and sets much faster, which may be a reason it was used as the typical mortar in ancient brick arch and vault construction. Gypsum mortar is not as durable as other mortars in damp conditions.

In the Indian subcontinent, multiple cement types have been observed in the sites of the Indus Valley civilization, with gypsum appearing at sites such as the Mohenjo-daro city-settlement, which dates to earlier than 2600 BCE.

Gypsum cement that was "light grey and contained sand, clay, traces of calcium carbonate, and a high percentage of lime" was used in the construction of wells, drains, and on the exteriors of "important looking buildings." Bitumen mortar was also used at a lower frequency, including in the Great Bath at Mohenjo-daro.

In early Egyptian pyramids, which were constructed during the Old Kingdom (~2600–2500 BCE), the limestone blocks were bound by a mortar of mud and clay, or clay and sand. In later Egyptian pyramids, the mortar was made of gypsum, or lime. Gypsum mortar was essentially a mixture of plaster and sand and was quite soft.

Babylonian constructions of the 2nd millennium BCE used lime or pitch for mortar.

Historically, building with concrete and mortar next appeared in Greece. The excavation of the underground aqueduct of Megara revealed that a reservoir was coated with a pozzolanic mortar 12 mm thick. This aqueduct dates back to c. 500 BCE. Pozzolanic mortar is a lime based mortar, but is made with an additive of volcanic ash that allows it to be hardened underwater; thus it is known as hydraulic cement. The Greeks obtained the volcanic ash from the Greek islands Thira and Nisiros, or from the then Greek colony of Dicaearchia (Pozzuoli) near Naples, Italy. The Romans later improved the use and methods of making what became known as pozzolanic mortar and cement. Even later, the Romans used a mortar without pozzolana using crushed terra cotta, introducing aluminum oxide and silicon dioxide into the mix. This mortar was not as strong as pozzolanic mortar, but, because it was denser, it better resisted penetration by water.

Hydraulic mortar was not available in ancient China, possibly due to a lack of volcanic ash. Around 500 CE, sticky rice soup was mixed with slaked lime to make an inorganic-organic composite sticky rice mortar that had more strength and water resistance than lime mortar.

It is not understood how the art of making hydraulic mortar and cement, which was perfected and in such widespread use by both the Greeks and Romans, was then lost for almost two millennia. During the Middle Ages when the Gothic cathedrals were being built, the only active ingredient in the mortar was lime. Since cured lime mortar can be degraded by contact with water, many structures suffered over the centuries from wind-blown rain.

Ordinary Portland cement mortar

Ordinary Portland cement mortar, commonly known as OPC mortar or just cement mortar, is created by mixing powdered ordinary Portland cement, fine aggregate and water.

It was invented in 1794 by Joseph Aspdin and patented on 18 December 1824, largely as a result of efforts to develop stronger mortars. It was made popular during the late nineteenth century, and by 1930 had become more popular than lime mortar as a construction material. The advantages of Portland cement are that it sets hard and quickly, allowing a faster pace of construction, and that fewer skilled workers are required to build a structure with it.

As a general rule, however, Portland cement should not be used for the repair or repointing of older buildings built in lime mortar, which require the flexibility, softness and breathability of lime if they are to function correctly.

In the United States and other countries, five standard types of mortar (available as dry pre-mixed products) are generally used for both new construction and repair. The strength of each type depends on its mix ratio, which is specified under the ASTM standards. These premixed mortar products are designated by one of five letters: M, S, N, O, and K. Type M mortar is the strongest, and Type K the weakest.

These type letters are taken from the alternate letters of the words "MaSoN wOrK".
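
As a rough illustration of how the type letters map to strength, the minimum 28-day compressive strengths commonly cited for ASTM C270 mortar types can be kept in a small lookup table. The figures below are nominal specification values as commonly quoted, not taken from this post, and should be checked against the current edition of the standard; the dictionary and helper function names are purely illustrative. A minimal sketch in Python:

# Nominal ASTM C270 minimum 28-day compressive strengths (psi).
# Commonly quoted values; verify against the current standard before real use.
MORTAR_MIN_STRENGTH_PSI = {
    "M": 2500,  # strongest
    "S": 1800,
    "N": 750,
    "O": 350,
    "K": 75,    # weakest
}

def min_strength_mpa(mortar_type):
    """Return the nominal minimum strength in MPa (1 psi = 0.00689476 MPa)."""
    return MORTAR_MIN_STRENGTH_PSI[mortar_type.upper()] * 0.00689476

for t in "MSNOK":
    print(f"Type {t}: {MORTAR_MIN_STRENGTH_PSI[t]} psi (~{min_strength_mpa(t):.1f} MPa)")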

Polymer cement mortar

Polymer cement mortars (PCM) are materials made by partially replacing the cement hydrate binders of conventional cement mortar with polymers. The polymeric admixtures include latexes or emulsions, redispersible polymer powders, water-soluble polymers, liquid thermoset resins and monomers. Although these admixtures increase the cost of the mortar, they enhance its properties. Polymer mortar has low permeability, which can trap moisture and be detrimental when used to repair a traditional brick, block or stone wall; it is mainly designed for repairing concrete structures. The use of recovered plastics in mortars is being researched and is gaining ground, and depolymerizing PET for use as a polymeric binder to enhance mortars is under active study.

Lime mortar

Lime mortar is made by mixing slaked lime, sand and water; it hardens slowly as the lime reacts with carbon dioxide from the air. The setting speed can be increased by using impure limestone in the kiln, to form a hydraulic lime that will set on contact with water. Such a lime must be stored as a dry powder. Alternatively, a pozzolanic material such as calcined clay or brick dust may be added to the mortar mix; the pozzolan makes the mortar set reasonably quickly by reacting with the water.

It would be problematic to use Portland cement mortars to repair older buildings originally constructed using lime mortar. Lime mortar is softer than cement mortar, allowing brickwork a certain degree of flexibility to adapt to shifting ground or other changing conditions. Cement mortar is harder and allows little flexibility. The contrast can cause brickwork to crack where the two mortars are present in a single wall.

Lime mortar is considered breathable in that it will allow moisture to freely move through and evaporate from the surface. In old buildings with walls that shift over time, cracks can be found which allow rain water into the structure. The lime mortar allows this moisture to escape through evaporation and keeps the wall dry. Repointing or rendering an old wall with cement mortar stops the evaporation and can cause problems associated with moisture behind the cement.

Pozzolanic mortar

Pozzolana is a fine, sandy volcanic ash. It was originally discovered and dug at Pozzuoli, nearby Mount Vesuvius in Italy, and was subsequently mined at other sites, too. The Romans learned that pozzolana added to lime mortar allowed the lime to set relatively quickly and even under water. Vitruvius, the Roman architect, spoke of four types of pozzolana. It is found in all the volcanic areas of Italy in various colours: black, white, grey and red. Pozzolana has since become a generic term for any siliceous and/or aluminous additive to slaked lime to create hydraulic cement.

Finely ground and mixed with lime it is a hydraulic cement, like Portland cement, and makes a strong mortar that will also set under water.

Because the materials involved in the creation of pozzolana are found in abundance within certain territories, notably the volcanic regions of Central and Southern Europe, its use has been more common there. It has, as such, been commonly associated with a variety of large structures constructed by the Roman Empire.

Radiocarbon dating

As lime mortar hardens, it absorbs carbon dioxide from the atmosphere of the time; the carbon thus encased in the mortar provides a sample for analysis. Various factors affect the sample and raise the margin of error for the analysis. Radiocarbon dating of mortar began as early as the 1960s, soon after the method was established (Delibrias and Labeyrie 1964; Stuiver and Smith 1965; Folk and Valastro 1976). Early data sets were provided by van Strydonck et al. (1983), Heinemeier et al. (1997) and Ringbom and Remmer (1995). Methodological aspects were further developed by different groups (an international team headed by Åbo Akademi University, and teams from the CIRCE, CIRCe, ETHZ, Poznań, RICH and Milano-Bicocca laboratories). To evaluate the different anthropogenic carbon extraction methods for radiocarbon dating, and to compare the different dating methods, i.e. radiocarbon and OSL, the first intercomparison study (MODIS) was set up and published in 2017.
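
To give a sense of the arithmetic behind mortar dating, the conventional radiocarbon age is computed from the measured fraction of modern carbon F using the Libby mean-life of 8033 years: t = -8033 × ln(F). The sample value below is made up purely for illustration, and real mortar dates also require calibration and correction for dead carbon from incompletely burnt limestone. A minimal sketch in Python:

import math

LIBBY_MEAN_LIFE = 8033.0  # years, defined constant for conventional 14C ages

def radiocarbon_age(fraction_modern):
    """Conventional radiocarbon age in years BP: t = -8033 * ln(F)."""
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

# Hypothetical measurement: a mortar CO2 sample with F = 0.80
print(round(radiocarbon_age(0.80)))  # about 1793 years BP (uncalibrated)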



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2459 2025-02-12 00:05:06

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,518

Re: Miscellany

2358) Oceanography/Oceanology

Gist

Oceanography is the study of all aspects of the ocean. Oceanography covers a wide range of topics, from marine life and ecosystems to currents and waves, the movement of sediments, and seafloor geology.

Oceanology is an area of Earth science that deals with the oceans. Oceanology, also called oceanography, is a vast subject covering a range of topics in the subfields of physical, chemical, biological and geological oceanography.

Summary

Oceanography is the study of the physical, chemical, and biological features of the ocean, including the ocean’s ancient history, its current condition, and its future. In a time when the ocean is threatened by climate change and pollution, coastlines are eroding, and entire species of marine life are at risk of extinction, the role of oceanographers may be more important now than it has ever been.

Indeed, one of the most critical branches of oceanography today is known as biological oceanography. It is the study of the ocean’s plants and animals and their interactions with the marine environment. But oceanography is not just about study and research. It is also about using that information to help leaders make smart choices about policies that affect ocean health. Lessons learned through oceanography affect the ways humans use the sea for transportation, food, energy, water, and much more.

For example, fishermen with the Northwest Atlantic Marine Alliance (NAMA) are working with oceanographers to better understand how pollutants are reducing fish populations and posing health risks to consumers of the fish. Together, NAMA and ocean scientists hope to use their research to show why tighter pollution controls are needed.

Oceanographers from around the world are exploring a range of subjects as wide as the ocean itself. For example, teams of oceanographers are investigating how melting sea ice is changing the feeding and migration patterns of whales that populate the ocean’s coldest regions. National Geographic Explorer Gabrielle Corradino, a North Carolina State University 2017 Global Change Fellow, is also interested in marine ecosystems, though in a much warmer environment. Corradino is studying how the changing ocean is affecting populations of microscopic phytoplankton and the fish that feed off of them. Her field work included five weeks in the Gulf of Mexico filtering seawater to capture phytoplankton and protozoa—the tiniest, but most important, parts of the sea’s food chain.

Of course, oceanography covers more than the living organisms in the sea. A branch of oceanography called geological oceanography focuses on the formation of the seafloor and how it changes over time. Geological oceanographers are starting to use special GPS technology to map the seafloor and other underwater features. This research can provide critical information, such as seismic activity, that could lead to more accurate earthquake and tsunami prediction.

In addition to biological and geological oceanography, there are two other main branches of sea science. One is physical oceanography, the study of the relationships between the seafloor, the coastline, and the atmosphere. The other is chemical oceanography, the study of the chemical composition of seawater and how it is affected by weather, human activities, and other factors.

About 70 percent of Earth’s surface is covered by water. Nearly 97 percent of that water is the saltwater swirling in the world’s ocean. Given the size of the ocean and the rapid advancements in technology, there is seemingly no end to what can and will be uncovered in the science of oceanography.

Details

Oceanography, also known as oceanology, sea science, ocean science, and marine science, is the scientific study of the ocean, including its physics, chemistry, biology, and geology.

It is an Earth science, which covers a wide range of topics, including ocean currents, waves, and geophysical fluid dynamics; fluxes of various chemical substances and physical properties within the ocean and across its boundaries; ecosystem dynamics; and plate tectonics and seabed geology.

Oceanographers draw upon a wide range of disciplines to deepen their understanding of the world’s oceans, incorporating insights from astronomy, biology, chemistry, geography, geology, hydrology, meteorology and physics.

History:

Early history

Humans first acquired knowledge of the waves and currents of the seas and oceans in prehistoric times. Observations on tides were recorded by Aristotle (384–322 BCE) and later by Strabo. Early exploration of the oceans was primarily for cartography and was mainly limited to the surface and to the animals that fishermen brought up in nets, though depth soundings by lead line were taken.

The Portuguese campaign of Atlantic navigation is the earliest example of a systematic scientific large project, sustained over many decades, studying the currents and winds of the Atlantic.

The work of Pedro Nunes (1502–1578) is remembered in the navigation context for the determination of the loxodromic curve: the course of constant bearing between two points on the surface of a sphere, as represented on a two-dimensional map. When he published his "Treatise of the Sphere" (1537), mostly a commentated translation of earlier work by others, he included a treatise on geometrical and astronomic methods of navigation. There he states clearly that Portuguese navigations were not an adventurous endeavour:

"nam se fezeram indo a acertar: mas partiam os nossos mareantes muy ensinados e prouidos de estromentos e regras de astrologia e geometria que sam as cousas que os cosmographos ham dadar apercebidas (...) e leuaua cartas muy particularmente rumadas e na ja as de que os antigos vsauam" (were not done by chance: but our seafarers departed well taught and provided with instruments and rules of astrology (astronomy) and geometry which were matters the cosmographers would provide (...) and they took charts with exact routes and no longer those used by the ancient).

His credibility rests on being personally involved in the instruction of pilots and senior seafarers from 1527 onwards by Royal appointment, along with his recognized competence as mathematician and astronomer. The main problem in navigating back from the south of the Canary Islands (or south of Boujdour) by sail alone, is due to the change in the regime of winds and currents: the North Atlantic gyre and the Equatorial counter current  will push south along the northwest bulge of Africa, while the uncertain winds where the Northeast trades meet the Southeast trades (the doldrums)  leave a sailing ship to the mercy of the currents. Together, prevalent current and wind make northwards progress very difficult or impossible. It was to overcome this problem and clear the passage to India around Africa as a viable maritime trade route, that a systematic plan of exploration was devised by the Portuguese. The return route from regions south of the Canaries became the 'volta do largo' or 'volta do mar'. The 'rediscovery' of the Azores islands in 1427 is merely a reflection of the heightened strategic importance of the islands, now sitting on the return route from the western coast of Africa (sequentially called 'volta de Guiné' and 'volta da Mina'); and the references to the Sargasso Sea (also called at the time 'Mar da Baga'), to the west of the Azores, in 1436, reveals the western extent of the return route. This is necessary, under sail, to make use of the southeasterly and northeasterly winds away from the western coast of Africa, up to the northern latitudes where the westerly winds will bring the seafarers towards the western coasts of Europe.

The secrecy involving the Portuguese navigations, with the death penalty for leaking maps and routes, concentrated all sensitive records in the Royal Archives, which were completely destroyed by the Lisbon earthquake of 1755. However, the systematic nature of the Portuguese campaign, mapping the currents and winds of the Atlantic, is demonstrated by their understanding of seasonal variations, with expeditions setting sail at different times of the year taking different routes to take account of prevailing seasonal winds. This happened from as early as the late 15th century and early 16th: Bartolomeu Dias followed the African coast on his way south in August 1487, while Vasco da Gama took an open-sea route from the latitude of Sierra Leone, spending three months in the open sea of the South Atlantic to profit from the southwards deflection of the southwesterlies on the Brazilian side (and the Brazil current going southward; Gama departed in July 1497); and Pedro Álvares Cabral (departing March 1500) took an even larger arc to the west, from the latitude of Cape Verde, thus avoiding the summer monsoon (which would have blocked the route taken by Gama at the time he set sail). Furthermore, there were systematic expeditions pushing into the western North Atlantic (Teive, 1454; Vogado, 1462; Teles, 1474; Ulmo, 1486).

The documents relating to the supplying of ships, and the ordering of sun declination tables for the southern Atlantic as early as 1493–1496, all suggest a well-planned and systematic activity during the decade-long period between Bartolomeu Dias finding the southern tip of Africa and Gama's departure; additionally, there are indications of further voyages by Bartolomeu Dias in the area. The most significant consequence of this systematic knowledge was the negotiation of the Treaty of Tordesillas in 1494, which moved the line of demarcation 270 leagues to the west (from 100 to 370 leagues west of the Azores), bringing what is now Brazil into the Portuguese area of domination. The knowledge gathered from open-sea exploration allowed for the well-documented extended periods of sailing without sight of land, not by accident but as pre-determined, planned routes; for example, the 30 days Bartolomeu Dias sailed out of sight of land before reaching Mossel Bay, the three months Gama spent in the South Atlantic to use the Brazil current (southward), or the 29 days Cabral took from Cape Verde to his landfall at Monte Pascoal, Brazil.

The Danish expedition to Arabia 1761–67 can be said to be the world's first oceanographic expedition, as the ship Grønland had on board a group of scientists, including naturalist Peter Forsskål, who was assigned an explicit task by the king, Frederik V, to study and describe the marine life in the open sea, including finding the cause of mareel, or milky seas. For this purpose, the expedition was equipped with nets and scrapers, specifically designed to collect samples from the open waters and the bottom at great depth.

Although Juan Ponce de León in 1513 first identified the Gulf Stream, and the current was well known to mariners, Benjamin Franklin made the first scientific study of it and gave it its name. Franklin measured water temperatures during several Atlantic crossings and correctly explained the Gulf Stream's cause. Franklin and Timothy Folger printed the first map of the Gulf Stream in 1769–1770.

Information on the currents of the Pacific Ocean was gathered by explorers of the late 18th century, including James Cook and Louis Antoine de Bougainville. James Rennell wrote the first scientific textbooks on oceanography, detailing the current flows of the Atlantic and Indian oceans. During a voyage around the Cape of Good Hope in 1777, he mapped "the banks and currents at the Lagullas". He was also the first to understand the nature of the intermittent current near the Isles of Scilly (now known as Rennell's Current). The tides and currents of the ocean are distinct phenomena. Tides are the rise and fall of sea levels created by the gravitational forces of the Moon and, to a much lesser extent, the Sun, together with the motion of the Earth and Moon about each other. An ocean current is a continuous, directed movement of seawater generated by a number of forces acting upon the water, including wind, the Coriolis effect, breaking waves, cabbeling, and temperature and salinity differences.

Sir James Clark Ross took the first modern sounding in the deep sea in 1840, and Charles Darwin published a paper on reefs and the formation of atolls as a result of the second voyage of HMS Beagle in 1831–1836. Robert FitzRoy published a four-volume report of Beagle's three voyages. In 1841–1842 Edward Forbes undertook dredging in the Aegean Sea that founded marine ecology.

The first superintendent of the United States Naval Observatory (1842–1861), Matthew Fontaine Maury devoted his time to the study of marine meteorology, navigation, and charting prevailing winds and currents. His 1855 textbook Physical Geography of the Sea was one of the first comprehensive oceanography studies. Many nations sent oceanographic observations to Maury at the Naval Observatory, where he and his colleagues evaluated the information and distributed the results worldwide.

Modern oceanography

Knowledge of the oceans remained confined to the topmost few fathoms of the water and a small amount of the bottom, mainly in shallow areas. Almost nothing was known of the ocean depths. The British Royal Navy's efforts to chart all of the world's coastlines in the mid-19th century reinforced the vague idea that most of the ocean was very deep, although little more was known. As exploration ignited both popular and scientific interest in the polar regions and Africa, so too did the mysteries of the unexplored oceans.

HMS Challenger undertook the first global marine research expedition in 1872.

The seminal event in the founding of the modern science of oceanography was the 1872–1876 Challenger expedition. As the first true oceanographic cruise, this expedition laid the groundwork for an entire academic and research discipline. In response to a recommendation from the Royal Society, the British Government announced in 1871 an expedition to explore the world's oceans and conduct appropriate scientific investigation. Charles Wyville Thomson and Sir John Murray launched the Challenger expedition. Challenger, leased from the Royal Navy, was modified for scientific work and equipped with separate laboratories for natural history and chemistry. Under the scientific supervision of Thomson, Challenger travelled nearly 70,000 nautical miles (130,000 km) surveying and exploring. On her journey circumnavigating the globe, 492 deep sea soundings, 133 bottom dredges, 151 open water trawls and 263 serial water temperature observations were taken. Around 4,700 new species of marine life were discovered. The result was the Report of the Scientific Results of the Exploring Voyage of H.M.S. Challenger during the years 1873–76. Murray, who supervised the publication, described the report as "the greatest advance in the knowledge of our planet since the celebrated discoveries of the fifteenth and sixteenth centuries". He went on to found the academic discipline of oceanography at the University of Edinburgh, which remained the centre for oceanographic research well into the 20th century. Murray was the first to study marine trenches and in particular the Mid-Atlantic Ridge, and to map the sedimentary deposits in the oceans. He tried to map out the world's ocean currents based on salinity and temperature observations, and was the first to correctly understand the nature of coral reef development.

In the late 19th century, other Western nations also sent out scientific expeditions (as did private individuals and institutions). The first purpose-built oceanographic ship, the Albatross, was built in 1882. In 1893, Fridtjof Nansen allowed his ship, Fram, to be frozen in the Arctic ice. This enabled him to obtain oceanographic, meteorological and astronomical data at a stationary spot over an extended period.

In 1881 the geographer John Francon Williams published a seminal book, Geography of the Oceans. Between 1907 and 1911 Otto Krümmel published the Handbuch der Ozeanographie, which became influential in awakening public interest in oceanography. The four-month 1910 North Atlantic expedition headed by John Murray and Johan Hjort was the most ambitious research oceanographic and marine zoological project ever mounted until then, and led to the classic 1912 book The Depths of the Ocean.

The first acoustic measurement of sea depth was made in 1914. Between 1925 and 1927 the "Meteor" expedition gathered 70,000 ocean depth measurements using an echo sounder, surveying the Mid-Atlantic Ridge.

In 1934, Easter Ellen Cupp, the first woman in the United States to have earned a PhD in oceanography (at Scripps), completed a major work on diatoms that remained the standard taxonomy in the field until well after her death in 1999. In 1940, Cupp was let go from her position at Scripps. Sverdrup specifically commended Cupp as a conscientious and industrious worker and commented that his decision was no reflection on her ability as a scientist. Sverdrup used the instructor billet vacated by Cupp to employ Marston Sargent, a biologist studying marine algae, which was not a new research program at Scripps. Financial pressures did not prevent Sverdrup from retaining the services of two other young post-doctoral students, Walter Munk and Roger Revelle. Cupp's partner, Dorothy Rosenbury, found her a position teaching high school, where she remained for the rest of her career. (Russell, 2000)

Sverdrup, Johnson and Fleming published The Oceans in 1942, which was a major landmark. The Sea (in three volumes, covering physical oceanography, seawater and geology) edited by M.N. Hill was published in 1962, while Rhodes Fairbridge's Encyclopedia of Oceanography was published in 1966.

The Great Global Rift, running along the Mid Atlantic Ridge, was discovered by Maurice Ewing and Bruce Heezen in 1953 and mapped by Heezen and Marie Tharp using bathymetric data; in 1954 a mountain range under the Arctic Ocean was found by the Arctic Institute of the USSR. The theory of seafloor spreading was developed in 1960 by Harry Hammond Hess. The Ocean Drilling Program started in 1966. Deep-sea vents were discovered in 1977 by Jack Corliss and Robert Ballard in the submersible DSV Alvin.

In the 1950s, Auguste Piccard invented the bathyscaphe and used the bathyscaphe Trieste to investigate the ocean's depths. The United States nuclear submarine Nautilus made the first journey under the ice to the North Pole in 1958. In 1962 the FLIP (Floating Instrument Platform), a 355-foot (108 m) spar buoy, was first deployed.

In 1968, Tanya Atwater led the first all-woman oceanographic expedition. Until that time, gender policies restricted women oceanographers from participating in voyages to a significant extent.

From the 1970s, there has been much emphasis on the application of large-scale computers to oceanography, allowing numerical predictions of ocean conditions as part of overall environmental change prediction. Early techniques included analog computers (such as the Ishiguro Storm Surge Computer), generally now replaced by numerical methods (e.g. SLOSH). An oceanographic buoy array was established in the Pacific to allow prediction of El Niño events.

1990 saw the start of the World Ocean Circulation Experiment (WOCE) which continued until 2002. Geosat seafloor mapping data became available in 1995.

Study of the oceans is critical to understanding shifts in Earth's energy balance along with related global and regional changes in climate, the biosphere and biogeochemistry. The atmosphere and ocean are linked because of evaporation and precipitation as well as thermal flux (and solar insolation). Recent studies have advanced knowledge on ocean acidification, ocean heat content, ocean currents, sea level rise, the oceanic carbon cycle, the water cycle, Arctic sea ice decline, coral bleaching, marine heatwaves, extreme weather, coastal erosion and many other phenomena in regards to ongoing climate change and climate feedbacks.

In general, understanding the world ocean through further scientific study enables better stewardship and sustainable utilization of Earth's resources. The Intergovernmental Oceanographic Commission reports that 1.7% of the total national research expenditure of its members is focused on ocean science.

Branches:

The study of oceanography is divided into these five branches:

Biological oceanography

Biological oceanography investigates the ecology and biology of marine organisms in the context of the physical, chemical and geological characteristics of their ocean environment.

Chemical oceanography

Chemical oceanography is the study of the chemistry of the ocean. Whereas chemical oceanography is primarily occupied with the study and understanding of seawater properties and its changes, ocean chemistry focuses primarily on the geochemical cycles. The following is a central topic investigated by chemical oceanography.

Ocean acidification

Ocean acidification describes the decrease in ocean pH that is caused by anthropogenic carbon dioxide (CO2) emissions into the atmosphere. Seawater is slightly alkaline and had a preindustrial pH of about 8.2. More recently, anthropogenic activities have steadily increased the carbon dioxide content of the atmosphere; about 30–40% of the added CO2 is absorbed by the oceans, forming carbonic acid and lowering the pH  through ocean acidification. The pH is expected to reach 7.7 by the year 2100.
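
Because pH is a logarithmic scale, the quoted decline from about 8.2 to a projected 7.7 corresponds to roughly a threefold increase in hydrogen-ion concentration. A quick check of that arithmetic in Python:

# pH is -log10 of the hydrogen-ion activity, so a 0.5-unit drop in pH
# multiplies the H+ concentration by 10**0.5, about 3.2.
pre_industrial_ph = 8.2
projected_ph_2100 = 7.7
factor = 10 ** (pre_industrial_ph - projected_ph_2100)
print(f"H+ concentration increases by a factor of about {factor:.2f}")  # ~3.16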

An important element for the skeletons of marine animals is calcium, but calcium carbonate becomes more soluble with pressure, so carbonate shells and skeletons dissolve below the carbonate compensation depth. Calcium carbonate becomes more soluble at lower pH, so ocean acidification is likely to affect marine organisms with calcareous shells, such as oysters, clams, sea urchins and corals, and the carbonate compensation depth will rise closer to the sea surface. Affected planktonic organisms will include pteropods, coccolithophorids and foraminifera, all important in the food chain. In tropical regions, corals are likely to be severely affected as they become less able to build their calcium carbonate skeletons, in turn adversely impacting other reef dwellers.

The current rate of ocean chemistry change seems to be unprecedented in Earth's geological history, making it unclear how well marine ecosystems will adapt to the shifting conditions of the near future. Of particular concern is the manner in which the combination of acidification with the expected additional stressors of higher ocean temperatures and lower oxygen levels will impact the seas.

Geological oceanography

Geological oceanography is the study of the geology of the ocean floor including plate tectonics and paleoceanography.

Physical oceanography

Physical oceanography studies the ocean's physical attributes including temperature-salinity structure, mixing, surface waves, internal waves, surface tides, internal tides, and currents. The following are central topics investigated by physical oceanography.

Ocean currents

Since the early ocean expeditions in oceanography, a major interest was the study of ocean currents and temperature measurements. The tides, the Coriolis effect, changes in direction and strength of wind, salinity, and temperature are the main factors determining ocean currents. The thermohaline circulation (THC) (thermo- referring to temperature and -haline referring to salt content) connects the ocean basins and is primarily dependent on the density of sea water. It is becoming more common to refer to this system as the 'meridional overturning circulation' because it more accurately accounts for other driving factors beyond temperature and salinity.

Examples of sustained currents are the Gulf Stream and the Kuroshio Current which are wind-driven western boundary currents.
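
Since the circulation described above is driven largely by seawater density, which depends on temperature and salinity, oceanographers often reason about it with a simplified linear equation of state. The coefficients below are typical textbook values, not figures from this post, and the function is only an illustrative sketch:

# Simplified linear equation of state for seawater density (illustrative only):
#   rho ≈ rho0 * (1 - alpha*(T - T0) + beta*(S - S0))
RHO0 = 1027.0        # kg/m^3, reference density
T0, S0 = 10.0, 35.0  # reference temperature (deg C) and salinity (psu)
ALPHA = 2.0e-4       # thermal expansion coefficient, 1/deg C (typical value)
BETA = 7.6e-4        # haline contraction coefficient, 1/psu (typical value)

def density(temp_c, salinity_psu):
    return RHO0 * (1 - ALPHA * (temp_c - T0) + BETA * (salinity_psu - S0))

# Colder and saltier water is denser, which is what makes surface water
# sink at high latitudes and helps drive the overturning circulation.
print(density(2.0, 35.0) > density(20.0, 35.0))   # True: colder water is denser
print(density(10.0, 36.0) > density(10.0, 34.0))  # True: saltier water is denser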

Ocean heat content

Oceanic heat content (OHC) refers to the extra heat stored in the ocean from changes in Earth's energy balance. The increase in ocean heat plays an important role in sea level rise because of thermal expansion. Ocean warming accounts for 90% of the energy accumulation associated with global warming since 1971.
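
Ocean heat content is essentially the depth integral of density times specific heat times the temperature change. A minimal sketch of that calculation over a discretized water column, with made-up temperature anomalies and typical values for density and specific heat (neither comes from this post):

# Ocean heat content anomaly per unit area (J/m^2), summed over layers:
#   OHC ≈ sum over layers of rho * c_p * dT_i * dz_i
RHO = 1025.0  # kg/m^3, typical seawater density
CP = 3990.0   # J/(kg K), typical specific heat of seawater

def ocean_heat_content(temp_anomalies_k, layer_thicknesses_m):
    """Heat content anomaly per unit area for a layered water column."""
    return sum(RHO * CP * dt * dz
               for dt, dz in zip(temp_anomalies_k, layer_thicknesses_m))

# Hypothetical profile: 0.5 K of warming over the top 100 m and 0.1 K
# over the next 600 m gives roughly 4.5e8 J/m^2 of added heat.
print(f"{ocean_heat_content([0.5, 0.1], [100.0, 600.0]):.2e}")  # ~4.50e+08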

Paleoceanography

Paleoceanography is the study of the history of the oceans in the geologic past with regard to circulation, chemistry, biology, geology and patterns of sedimentation and biological productivity. Paleoceanographic studies using environmental models and different proxies enable the scientific community to assess the role of oceanic processes in the global climate by reconstructing past climate at various intervals. Paleoceanographic research is also intimately tied to palaeoclimatology.

Oceanographic institutions

The earliest international organizations of oceanography were founded at the turn of the 20th century, starting with the International Council for the Exploration of the Sea created in 1902, followed in 1919 by the Mediterranean Science Commission. Marine research institutes were already in existence, starting with the Stazione Zoologica Anton Dohrn in Naples, Italy (1872), the Biological Station of Roscoff, France (1876), the Arago Laboratory in Banyuls-sur-mer, France (1882), the Laboratory of the Marine Biological Association in Plymouth, UK (1884), the Norwegian Institute for Marine Research in Bergen, Norway (1900), and the Laboratory für internationale Meeresforschung, Kiel, Germany (1902). On the other side of the Atlantic, the Scripps Institution of Oceanography was founded in 1903, followed by the Woods Hole Oceanographic Institution in 1930, the Virginia Institute of Marine Science in 1938, the Lamont–Doherty Earth Observatory at Columbia University in 1949, and later the School of Oceanography at the University of Washington. In Australia, the Australian Institute of Marine Science (AIMS), established in 1972, soon became a key player in marine tropical research.

In 1921 the International Hydrographic Bureau, called since 1970 the International Hydrographic Organization, was established to develop hydrographic and nautical charting standards.

Additional Information

Oceanography is a scientific discipline concerned with all aspects of the world’s oceans and seas, including their physical and chemical properties, their origin and geologic framework, and the life forms that inhabit the marine environment.

Traditionally, oceanography has been divided into four separate but related branches: physical oceanography, chemical oceanography, marine geology, and marine ecology. Physical oceanography deals with the properties of seawater (temperature, density, pressure, and so on), its movement (waves, currents, and tides), and the interactions between the ocean waters and the atmosphere. Chemical oceanography has to do with the composition of seawater and the biogeochemical cycles that affect it. Marine geology focuses on the structure, features, and evolution of the ocean basins. Marine ecology, also called biological oceanography, involves the study of the plants and animals of the sea, including life cycles and food production.

Oceanography is the sum of these several branches. Oceanographic research entails the sampling of seawater and marine life for close study, the remote sensing of oceanic processes with aircraft and Earth-orbiting satellites, and the exploration of the seafloor by means of deep-sea drilling and seismic profiling of the terrestrial crust below the ocean bottom. Greater knowledge of the world’s oceans enables scientists to more accurately predict, for example, long-term weather and climatic changes and also leads to more efficient exploitation of the Earth’s resources. Oceanography also is vital to understanding the effect of pollutants on ocean waters and to the preservation of the quality of the oceans’ waters in the face of increasing human demands made on them.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2460 2025-02-13 00:01:28

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,518

Re: Miscellany

2359) Medical Transcriptionist/Medical Transcription

Gist

Medical transcriptionists, sometimes referred to as healthcare documentation specialists, use electronic devices to convert voice recordings from physicians and other healthcare workers into formal reports. Transcriptionists also may edit medical records for accuracy and return documents for review and approval.

You can earn as much as you are capable of and want to earn, based upon quality (accuracy) and quantity (line count). You can enjoy working from your comfort zone, i.e., from home. By choosing a career in home-based medical transcription, your dream of working from home could be realized.

Summary

A medical transcript is a permanent, legal document that formally states the result of a medical investigation. It facilitates communication and supports insurance claims.

A medical transcriptionist is a professional who converts voice recordings from healthcare appointments into written reports. You see these reports in your electronic health records, and they help your provider give you the best possible care. Medical transcriptionists receive training in anatomy, physiology, medical terms and grammar.

What is a medical transcriptionist?

A medical transcriptionist is a healthcare professional who converts voice recordings into written reports. Primary care physicians and other healthcare providers create voice recordings to quickly save appointment notes. Instead of listening to these recordings, you’ll see reports in your electronic medical records or health portal. Medical transcriptionists likely play a role in creating these records so you and your provider can access them later.

You see your healthcare providers in person and have conversations with them. But you’ll probably never meet your medical transcriptionists. They’re like the stage crew behind the curtains that makes sure the show goes on smoothly and safely.

Transcriptionists must know medical concepts and terms to ensure your medical notes are accurate and clear. It’s a high-stakes job. Mistakes in medical reports that seem small could have a serious impact on your health. This sense of responsibility, as well as the fast-paced nature of the job, may cause stress for medical transcriptionists.

Advances in technology are changing the field. For example, transcription software can convert voice recordings to written transcripts with increasing accuracy. As a result, employers (like hospitals) may not need to hire as many new transcriptionists. But it’s still an important role that’s worth knowing about, whether you receive care in a medical setting or are interested in working in the field. Medical transcriptionists are part of the larger team that supports your health and helps you get the care you need.

Another name for a medical transcriptionist is a healthcare documentation specialist.

Where do medical transcriptionists work?

Medical transcriptionists work in many different settings, including:

* Administrative offices (for companies that offer transcription services).
* Healthcare providers’ offices.
* Hospitals.
* Medical and diagnostic labs.

Some medical transcriptionists work from home. Most jobs are full-time.

What does a medical transcriptionist do?

Medical transcriptionists use technologies to convert healthcare providers’ audio recordings into written reports. For example, they might use speech recognition software to convert a recording into a written report draft. But using technology is just one part of their job. They must also draw upon their knowledge and critical thinking skills to:

* Listen carefully to each recording and compare it against the report draft, making sure the report is accurate.
* Correct errors in reports.
* Find inconsistent or missing information that could put your health at risk.
* Change jargon and abbreviations that providers might use into full words or phrases.
* Submit reports to providers for approval.
* Put reports into your electronic health records.
* Follow legal guidelines to keep your information safe and private. Medical transcriptionists treat the information they transcribe as confidential.

Details

Medical transcription, also known as MT, is an allied health profession dealing with the process of transcribing voice-recorded medical reports that are dictated by physicians, nurses and other healthcare practitioners. Medical reports can be voice files, notes taken during a lecture, or other spoken material. These are dictated over the phone or uploaded digitally via the Internet or through smart phone apps.

History

Medical transcription as it is currently known has existed since the beginning of the 20th century when standardization of medical records and data became critical to research. At that time, medical stenographers recorded medical information, taking doctors' dictation in shorthand. With the creation of audio recording devices, it became possible for physicians and their transcribers to work asynchronously.

Over the years, transcription equipment has changed from manual typewriters, to electric typewriters, to word processors, and finally, to computers. Storage methods have also changed: from plastic disks and magnetic belts to cassettes, endless loops, and digital recordings. Today, speech recognition (SR), also known as continuous speech recognition (CSR), is increasingly used, with medical transcriptionists and, in some cases, "editors" providing supplemental editorial services. Natural-language processing takes "automatic" transcription a step further, providing an interpretive function that speech recognition alone does not provide.

In the past, these medical reports consisted of very abbreviated handwritten notes that were added in the patient's file for interpretation by the primary physician responsible for the treatment. Ultimately, these handwritten notes and typed reports were consolidated into a single patient file and physically stored along with thousands of other patient records in the medical records department. Whenever the need arose to review the records of a specific patient, the patient's file would be retrieved from the filing cabinet and delivered to the requesting physician. To enhance this manual process, many medical record documents were produced in duplicate or triplicate by means of carbon copy.

In recent years, medical records have changed considerably. Although many physicians and hospitals still maintain paper records, the majority are stored as electronic records. This digital format allows for immediate remote access by any physician who is authorized to review the patient information. Reports are stored electronically and printed selectively as the need arises. Many healthcare providers today work using handheld PCs or personal digital assistants (PDAs) and are now utilizing software on them to record dictation.

Overview

Medical transcription is part of the healthcare industry that renders and edits doctor dictated reports, procedures, and notes in an electronic format in order to create files representing the treatment history of patients. Health practitioners dictate what they have done after performing procedures on patients, and MTs transcribe the oral dictation, edit reports that have gone through speech recognition software, or both.

Pertinent, up-to-date and confidential patient information is converted to a written text document by a medical transcriptionist (MT). This text may be printed and placed in the patient's record, retained only in its electronic format, or placed in the patient's record and also retained in its electronic format. Medical transcription can be performed by MTs who are employees in a hospital or who work at home as telecommuting employees for the hospital; by MTs working as telecommuting employees or independent contractors for an outsourced service that performs the work offsite under contract to a hospital, clinic, physician group or other healthcare provider; or by MTs working directly for the providers of service (doctors or their group practices) either onsite or telecommuting as employees or contractors. Hospital facilities often prefer electronic storage of medical records due to the sheer volume of hospital patients and the accompanying paperwork. The electronic storage in their database gives immediate access to subsequent departments or providers regarding the patient's care to date, notation of previous or present medications, notification of allergies, and establishes a history on the patient to facilitate healthcare delivery regardless of geographical distance or location.

The term transcript, or "report" is used to refer to a healthcare professional's specific encounter with a patient. This report is also referred to by many as a "medical record". Each specific transcribed record or report, with its own specific date of service, is then merged and becomes part of the larger patient record commonly known as the patient's medical history. This record is often called the patient's "chart" in a hospital setting.

Medical transcription encompasses the work of the medical transcriptionist: performing document typing and formatting functions according to an established criterion or format, and transcribing the spoken word of the patient's care information into a written, easily readable form. A proper transcription requires correct spelling of all terms and words, and correcting medical terminology or dictation errors. Medical transcriptionists also edit the transcribed documents and print or return the completed documents in a timely fashion. All transcription reports must comply with medico-legal concerns, policies and procedures, and laws under patient confidentiality.

In transcribing directly for a doctor or a group of physicians, there are specific formats and report types used, dependent on that doctor's speciality of practice, although history and physical exams or consults are mainly utilized. In most of the off-hospital sites, independent medical practices perform consultations as a second opinion, pre-surgical exams, and as IMEs (Independent Medical Examinations) for liability insurance or disability claims. Some private practice family doctors choose not to utilize a medical transcriptionist, preferring to keep their patient's records in a handwritten format, although this is not true of all family practitioners.

Currently, a growing number of medical providers send their dictation by digital voice files, utilizing a method of transcription called speech or voice recognition. Speech recognition is still a nascent technology that loses much in translation. For dictators to utilize the software, they must first train the program to recognize their spoken words. Dictation is read into the database and the program continuously "learns" the spoken words and phrases.

Poor speech habits and other problems such as heavy accents and mumbling complicate the process for both the MT and the recognition software. An MT can "flag" such a report as unintelligible, but the recognition software will transcribe the unintelligible word(s) from the existing database of "learned" language. The result is often a "word salad" or missing text. Thresholds can be set to reject a bad report and return it for standard dictation, but these settings are arbitrary. Below a set percentage rate, the word salad passes for actual dictation. The MT simultaneously listens, reads, and "edits" the correct version. Every word must be confirmed in this process. The downside of the technology is that the time spent in this process can cancel out the benefits. The quality of recognition can range from excellent to poor, with whole words and sentences missing from the report. Not infrequently, negative contractions and the word "not" are dropped altogether. These flaws trigger concerns that the present technology could have adverse effects on patient care. Control over quality can also be reduced when providers choose a server-based program from a vendor Application Service Provider (ASP).
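
The threshold mechanism described above can be pictured as a simple rule applied to the recognizer's per-word confidence scores. This is a hypothetical sketch of the idea, not any vendor's actual algorithm, and the function name and threshold value are invented for illustration:

# Hypothetical triage rule for speech-recognition drafts: reports whose
# average word confidence falls below a chosen threshold are rejected
# and returned for standard dictation/transcription.
def triage_draft(word_confidences, threshold=0.85):
    if not word_confidences:
        return "return_for_standard_dictation"
    average = sum(word_confidences) / len(word_confidences)
    return "send_to_mt_editor" if average >= threshold else "return_for_standard_dictation"

print(triage_draft([0.97, 0.92, 0.88, 0.95]))  # send_to_mt_editor
print(triage_draft([0.55, 0.40, 0.62, 0.70]))  # return_for_standard_dictation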

Downward adjustments in MT pay rates for voice recognition are controversial. Understandably, a client will seek optimum savings to offset any net costs. Yet vendors that overstate the gains in productivity do harm to MTs paid by the line. Despite the new editing skills required of MTs, significant reductions in compensation for voice recognition have been reported. Reputable industry sources put the field average for increased productivity in the range of 30–50%; yet this is still dependent on several other factors involved in the methodology.
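
To see why these pay adjustments are contentious, here is a purely illustrative calculation with invented numbers (none of them come from this post): if editing speech-recognition drafts makes an MT 40% more productive but the per-line rate is cut by 30%, hourly earnings actually fall slightly.

# Purely illustrative figures, not industry data.
standard_lines_per_hour = 150
standard_rate_per_line = 0.10   # dollars per line
sr_productivity_gain = 0.40     # 40% more lines per hour when editing SR drafts
sr_rate_cut = 0.30              # 30% lower per-line rate for SR editing

standard_hourly = standard_lines_per_hour * standard_rate_per_line
sr_hourly = (standard_lines_per_hour * (1 + sr_productivity_gain)
             * standard_rate_per_line * (1 - sr_rate_cut))
print(f"{standard_hourly:.2f} vs {sr_hourly:.2f}")  # 15.00 vs 14.70 dollars/hour: about a 2% pay cut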

Operationally, speech recognition technology (SRT) is an interdependent, collaborative effort. It is a mistake to treat it as compatible with the same organizational paradigm as standard dictation, a largely "stand-alone" system. The new software supplants an MT's former ability to realize immediate time-savings from programming tools such as macros and other word/format expanders. Requests for client/vendor format corrections delay those savings. If remote MTs cancel each other out with disparate style choices, they and the recognition engine may be trapped in a seesaw battle over control. Voice recognition managers should take care to ensure that the impositions on MT autonomy are not so onerous as to outweigh its benefits.

Medical transcription is still the primary mechanism for a physician to clearly communicate with other healthcare providers who access the patient record, to advise them on the state of the patient's health and past/current treatment, and to assure continuity of care. More recently, following Federal and State Disability Act changes, a written report (IME) became a requirement for documentation of a medical bill or an application for Workers' Compensation (or continuation thereof) insurance benefits based on requirements of Federal and State agencies.

As a profession

An individual who performs medical transcription is known as a medical transcriptionist (MT) or a Medical Language Specialist (MLS). The equipment used is also called a medical transcriber, e.g., a cassette player with foot controls operated by the MT for report playback and transcription.

Education and training can be obtained through certificate or diploma programs, distance learning, or on-the-job training offered in some hospitals, although there are countries currently employing transcriptionists that require 18 months to 2 years of specialized MT training. Working in medical transcription leads to a mastery in medical terminology and editing, ability to listen and type simultaneously, utilization of playback controls on the transcriber (machine), and use of foot pedal to play and adjust dictations – all while maintaining a steady rhythm of execution. Medical transcription training normally includes coursework in medical terminology, anatomy, editing and proofreading, grammar and punctuation, typing, medical record types and formats, and healthcare documentation.

While medical transcription does not mandate registration or certification, individual MTs may seek out registration/certification for personal or professional reasons. Obtaining a certificate from a medical transcription training program does not entitle an MT to use the title of Certified Medical Transcriptionist. A Certified Healthcare Documentation Specialist (CHDS) credential can be earned by passing a certification examination conducted solely by the Association for Healthcare Documentation Integrity (AHDI), formerly the American Association for Medical Transcription (AAMT), as the credentialing designation they created. AHDI also offers the credential of Registered Healthcare Documentation Specialist (RHDS). According to AHDI, RHDS is an entry-level credential while the CHDS is an advanced level. AHDI maintains a list of approved medical transcription schools. Generally, certified medical transcriptionists earn more than their non-certified counterparts. It is also notable that training through an educational program that is approved by AHDI will increase the chances of an MT getting certified and getting hired.

There is a great degree of internal debate about which training program best prepares an MT for industry work. Yet, whether one has learned medical transcription from an online course, community college, high school night course, or on-the-job training in a doctor's office or hospital, a knowledgeable MT is highly valued. In lieu of these AHDI certification credentials, MTs who can consistently and accurately transcribe multiple document work-types and return reports within a reasonable turnaround-time (TAT) are sought after. TATs set by the service provider or agreed to by the transcriptionist should be reasonable but consistent with the need to return the document to the patient's record in a timely manner.

On March 7, 2006, the MT occupation became an eligible U.S. Department of Labor Apprenticeship, a 2-year program focusing on acute care facility (hospital) work. In May 2004, a pilot program for Vermont residents was initiated, with 737 applicants for only 20 classroom pilot-program openings. The objective was to train the applicants as MTs in a shorter time period.

The medical transcription process

When a patient visits a doctor, the doctor spends time with them discussing their medical problems and performing diagnostic services. After the patient leaves the office, the doctor uses a voice-recording device to record information about the patient encounter. This information may be recorded into a hand-held cassette recorder or into a regular telephone, dialed into a central server located in the hospital or transcription service office, which will 'hold' the report for the transcriptionist. This report is then accessed by a medical transcriptionist, who listens to the dictation and transcribes it into the required format for the medical record; this medical record is considered a legal document. The next time the patient visits the doctor, the doctor will call for the medical record or the patient's entire chart, which will contain all reports from previous encounters. The doctor can on occasion refill the patient's medications after seeing only the medical record, although doctors prefer not to refill prescriptions without seeing the patient to establish whether anything has changed.
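
The dictation-to-chart workflow just described can be summarized as a short sequence of report states. This is only an illustrative model of the steps named above, not a description of any particular transcription system:

# Illustrative states a dictated report passes through before joining the chart.
WORKFLOW = [
    "dictated",        # provider records the encounter
    "queued",          # voice file held on the central server
    "transcribed",     # MT converts the audio into the required format
    "reviewed",        # provider (or designee) checks and signs the report
    "filed_in_chart",  # report merged into the patient's medical record
]

for step, state in enumerate(WORKFLOW, start=1):
    print(f"{step}. {state}")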

It is very important to have a properly formatted, edited, and reviewed medical transcription document. If a medical transcriptionist accidentally typed a wrong medication or the wrong diagnosis, the patient could be at risk if the doctor (or their designee) did not review the document for accuracy. Both the doctor and the medical transcriptionist play an important role to make sure the transcribed dictation is correct and accurate. The doctor should speak slowly and concisely, especially when dictating medications or details of diseases and conditions. The medical transcriptionist must possess hearing acuity, medical knowledge, and good reading comprehension in addition to checking references when in doubt.

However, some doctors do not review their transcribed reports for accuracy, and the computer attaches an electronic signature with the disclaimer that a report is "dictated but not read". This electronic signature is readily acceptable in a legal sense. The transcriptionist is bound to transcribe verbatim (exactly what is said) and make no changes, but has the option to flag any report inconsistencies. On some occasions, the doctors do not speak clearly, or voice files are garbled. Some doctors are time-challenged and need to dictate their reports quickly (as in ER reports). In addition, there are many regional or national accents and (mis)pronunciations of words the MT must contend with. It is imperative, and a large part of the transcriptionist's job, to look up the correct spelling of complex medical terms and medications, to catch obvious dosage or dictation errors, and, when in doubt, to "flag" a report. A "flag" on a report requires the dictator (or their designee) to fill in a blank on a finished report, which has been returned to them, before it is considered complete. Transcriptionists are never permitted to guess, or 'just put in anything', in a report transcription. Furthermore, medicine is constantly changing. New equipment, new medical devices, and new medications come on the market on a daily basis, and the medical transcriptionist needs to be creative and to tenaciously (and quickly) research to find these new words. An MT needs to have access to, or keep in memory, an up-to-date library to quickly facilitate the insertion of a correctly spelled device name.

Medical transcription editing

Medical transcription editing is the process of listening to a voice-recorded file and comparing that to the transcribed report of that audio file, correcting errors as needed. Although speech recognition technology has become better at understanding human language, editing is still needed to ensure better accuracy. Medical transcription editing is also performed on medical reports transcribed by medical transcriptionists.

Medical transcription editors

Recent advances in speech recognition technology have shifted the job responsibilities of medical transcriptionists from transcribing alone to transcribing and editing. Editing has always been a part of the medical transcriptionist job; however, editing is now a larger requirement as reports are more often generated electronically. With different accents, articulations, and pronunciations, speech recognition technology can still have problems deciphering words. This is where the medical transcription editor steps in. Medical transcription editors will compare and correct the transcribed file against the voice-recorded audio file. The job is similar to medical transcription, as editing also uses a foot pedal, and the education and training requirements are mostly the same.

Training and education

Education and training requirements for medical transcription editors are very similar to those of medical transcriptionists. Many medical transcription editors start out as medical transcriptionists and transition to editing. Several of the AHDI-approved medical transcription schools have seen the need for medical transcription editing training and have incorporated editing in their training programs. Quality training is key to success as a medical transcription / healthcare documentation specialist. It is also very important to get work experience while training to ensure employers will be willing to hire freshly graduated students. Students who receive 'real world' training are much better suited for the medical transcription industry than those who do not.

Outsourcing of medical transcription

Due to the increasing demand to document medical records, countries have started to outsource the services of medical transcription. The main reason for outsourcing is stated to be the cost advantage due to cheap labor in developing countries, and their currency rates as compared to the US dollar.

There is a volatile controversy on whether medical transcription work should be outsourced, mainly for three reasons:

The great majority of MTs presently work from home offices rather than in hospitals, working off-site for "national" transcription services. It is predominantly those nationals located in the United States who are striving to outsource work to other-than-US-based transcriptionists. In outsourcing work to sometimes lesser-qualified and lower-paid non-US MTs, the nationals can force US transcriptionists to accept lower rates, at the risk of losing business altogether to the cheaper outsourcing providers. In addition to the low line rates forced on US transcriptionists, US MTs are often paid as ICs (independent contractors); thus, the nationals save on employee insurance and benefits offered, etc. The effect on healthcare-related administrative costs in the United States is that, in outsourcing, the nationals still charge the hospitals the same rate as they did in the past for highly qualified US transcriptionists, but subcontract the work to non-US MTs and keep the difference as profit.

There are concerns about patient privacy, with confidential reports going from the country where the patient is located (e.g., the US) to a country where laws about privacy and patient confidentiality may be weak or nonexistent. The offshore provider has a clear business interest in preventing a data breach and could in principle be prosecuted under HIPAA or other privacy laws, but the counter-argument is that such a prosecution might never happen or might not succeed. Countries that now outsource transcription work include the United States and Britain, with work sent to the Philippines, India, Sri Lanka, Canada, Australia, Pakistan, and Barbados.

The quality of the finished transcriptions is a concern. Many outsourced transcriptionists lack the requisite basic education, as well as the occupation-specific training in medical transcription, needed to do the job with reasonable accuracy.

Additional Information:

What Medical Transcriptionists Do

Medical transcriptionists use electronic devices to convert voice recordings from physicians and other healthcare workers into formal reports.

Work Environment

Many medical transcriptionists work for hospitals, physicians' offices, and third-party transcription companies that provide services to healthcare establishments. Most work full time, but part-time work is common.

How to Become a Medical Transcriptionist

Medical transcriptionists typically need postsecondary education that leads to a certificate. Prospective medical transcriptionists must know basic medical terminology, anatomy and physiology, and rules of grammar.

Pay

The median annual wage for medical transcriptionists was $37,060 in May 2023.

Job Outlook

Employment of medical transcriptionists is projected to decline 5 percent from 2023 to 2033.

Despite declining employment, about 9,600 openings for medical transcriptionists are projected each year, on average, over the decade. All of those openings are expected to result from the need to replace workers who transfer to other occupations or exit the labor force, such as to retire.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2461 2025-02-14 00:13:39

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,518

Re: Miscellany

2360) Olfactory Nerve

Gist

Your olfactory nerve is the first cranial nerve (CN I). This nerve enables your olfactory system and sense of smell. Cranial nerve 1 is the shortest sensory nerve. It starts in your brain and ends in the upper, inside part of your nose.

The olfactory nerve is the first cranial nerve (CN I). It is a sensory nerve that serves the sense of smell. Olfaction is phylogenetically the oldest of the senses and is carried by special visceral afferent nerve fibers.

What are the 12 cranial nerves?

Olfactory nerve (CN I), optic nerve (CN II), oculomotor nerve (CN III), trochlear nerve (CN IV), trigeminal nerve (CN V), abducens nerve (CN VI), facial nerve (CN VII), vestibulocochlear nerve (CN VIII), glossopharyngeal nerve (CN IX), vagus nerve (CN X), accessory nerve (CN XI), and hypoglossal nerve (CN XII).

The olfactory system is at the roof of the nasal cavity at the cribriform plate - a perforated portion of the ethmoid bone separating the frontal lobe of the cerebrum from the nasal cavity. Odorant molecules within the nasal passages first encounter receptors on the primary cilia of olfactory sensory neurons.

Olfactory comes from the Latin word olfacere (“to smell”), which in turn combines two verbs, olēre (“to give off a smell”) and facere (“to do”).

Summary:

Description

The olfactory nerve is the first cranial nerve (CN I). It is a sensory nerve that serves the sense of smell. Olfaction is phylogenetically the oldest of the senses and is carried by special visceral afferent nerve fibers. The olfactory nerve has certain unique features, such as lacking a precortical connection to the thalamus.

Root

The olfactory nerve terminates at the olfactory bulb, located just above the ethmoid bone and below the frontal lobe. The olfactory bulb acts as a relay center for the transmission of the impulses from the olfactory nerve to the olfactory tract and then to the cerebral cortex (olfactory cortex).

Branches

The olfactory nerve divides into two branches. These are:

* Lateral olfactory nerves, located in the superior nasal concha
* Medial olfactory nerves, located along the nasal septum

Pathway

The olfactory epithelium is found in the nasal cavity. It contains different types of cells, including olfactory sensory neurons, microvillar-capped sustentacular cells, microvillar cells, Bowman's duct cells, and basal cells (stem cells).

* The olfactory sensory neurons are bipolar cells that receive stimuli at their dendrites at the apical pole and conduct them along the axon arising from the basal pole. Groups of 15 to 40 axons are ensheathed by glial cells at the level of the basement membrane, making up a small fascicle.

* These small fascicles converge with other small fascicles giving rise to larger fascicles, known as fila olfactoria.

* These collections of axons cross the cribriform plate of the ethmoid bone through its perforations and reach the brain, where they innervate the olfactory bulb.

* The olfactory bulb is a relay station for the transmission of the impulse coming from the olfactory nerve. From the olfactory bulb, impulses then travel through the olfactory tract. The olfactory tract divides into the medial and lateral olfactory striae, then connects to the olfactory cortex.

* The olfactory cortex comprises the parts of the brain that receive and process this sensory input.

Function

The olfactory nerve is purely a sensory nerve that functions for the sense of smell.

Clinical Relevance

The olfactory nerve can be damaged by trauma, e.g. traumatic brain injury (TBI): blunt trauma to the head can lacerate the olfactory nerve as it crosses the ethmoid bone. Infections can also damage the olfactory nerve.

The anatomical position of the olfactory nerve makes it an obstacle when exploring the anterior cranial fossa, so there is a potential risk of injury during surgery. During surgical procedures, three mechanisms can lead to olfactory nerve injury:

* Detachment of the cribriform plate and section of the fila olfactoria during frontal lobe retraction.
* Partial or total section of the olfactory tract during the process of dissection
* Ischemic injuries

Lesions to the Olfactory Nerve and/or to the Olfactory Pathway can lead to the following symptoms:

* Anosmia: loss of the sense of smell
* Hyposmia: decreased ability to detect smells
* Hyperosmia: increased sensitivity to smells
* Dysosmia: distorted sense of smell, or perception of an unpleasant smell in the absence of any actual odor (olfactory hallucination)

Assessment

Assessment of the olfactory nerve is readily done either at the bedside or in the clinic. Testing the integrity of the olfactory nerve involves pinching or blocking one nostril while the patient is blindfolded or has the eyes closed, then having the patient smell aromatic substances such as coffee, vanilla, or cinnamon. Avoid substances with strong or noxious smells, such as alcohol or ammonia. Have the patient identify the substance by its aroma, then repeat the same procedure on the other nostril to compare the sense of smell on both sides.

Conservative management for olfactory nerve damage secondary to different etiologies is olfactory training.

Your sense of smell comes from the olfactory nerve, the first cranial nerve. Multiple sensory nerve fibers make up the olfactory nerve. This nerve plays a vital role in using your senses to enjoy your favorite smells. 

What Is the Olfactory Nerve?

Your first and shortest cranial nerve is the olfactory nerve. This nerve is one of two cranial nerves that don't connect with your brainstem, and it is part of your central nervous system.

There are about 6 million to 10 million olfactory sensory neurons in each nostril. That means there's a lot of work happening inside your nostrils to help you smell what's around you or the food you're eating.

Olfactory nerve location. These nerves are in a portion of each nasal cavity. Your olfactory nerve gets its blood supply from your cerebral or olfactory artery. All this action takes place in the nasal cavity, though the olfactory nerve comes from your brain. There are 20 or more branches of your olfactory nerve in the roof of your nasal cavity.

How Does the Olfactory Nerve Function?

The primary olfactory nerve function is to help you smell and taste foods.

Your olfactory nerve stems from your olfactory epithelium, which develops from the embryonic nasal pits near the nose and primary palate. The nerve fibers and sensory cells work together to help you smell.

A lot of the work is done when your olfactory neurons send messages to your brain in the olfactory bulb. These messages send information to other parts of your brain and help you combine taste and smell.

The olfactory sensory neurons connect your senses of smell and taste. When you chew food, odors are released and picked up by these neurons through the passage that links the back of your throat to your nasal cavity. When that passage gets blocked by a stuffy nose, you lose much of your ability to taste and smell.

Signs Something Could Be Wrong With Your Olfactory Nerve

Losing your sense of smell can be more dangerous than you might think. A scent can help alert us to problems around us. If there's a fire, a moldy piece of food, or something else, our noses can sometimes alert us before our eyes and ears do.

You might not lose your sense of taste and smell entirely. Sometimes you may only lose the ability to taste certain flavors, such as sweet or bitter things, and the same applies to smell. When your sense of smell starts to change, a scent you once enjoyed could become repulsive.

Certain conditions, like olfactory neuroblastoma, can cause olfactory nerve damage, resulting in multiple symptoms. These include:

* A stuffy or blocked nose
* Worsening congestion
* Pain around your eyes
* Watery eyes
* Nosebleeds

In more severe cases, if there's a problem with your olfactory nerve, you may notice facial or tooth numbness and loose teeth. Along with a worsening sense of smell, you may have a change in vision, ear pain, or trouble opening your mouth.

What Conditions Affect the Olfactory Nerve?

Quite a few conditions affect the olfactory nerve. Most of these conditions also have major effects on your brain and your sense of taste. 

Olfactory neuroblastoma is a cancer that occurs at the roof of your nasal cavity. This rare form of cancer forms on the bone between your eyes and your skull. Loss of smell can be the first indicator of this cancer. That's why it's important to talk to your doctor if you have the symptoms listed above.

Parkinson's disease and Alzheimer's disease are conditions that affect your nervous system, including the olfactory nerve. Early signs of problems with your olfactory nerve can indicate Parkinson's or Alzheimer's.

Anosmia and hyposmia are two conditions that affect your olfactory nerve. Anosmia is a complete lack of smell; in rare cases, a child can be born without a sense of smell, although the condition typically develops later in life. Hyposmia is a reduced ability to detect smells. (A heightened sense of smell, which can sometimes be overwhelming, is called hyperosmia.)

Parosmia is a smell disorder that changes how you smell things. For example, if you once liked the smell of something, you might now be nauseated by it. Phantosmia is a smell disorder in which you smell things that aren't there.

Other conditions that affect your olfactory nerve include:

* Multiple sclerosis
* Obesity
* Diabetes
* Hypertension
* Malnutrition

Problems with nerves in your brain can be severe. It's best to talk to your doctor as soon as possible if you notice problems with your senses. You should act fast, whether it's taste, smell, touch, sight, or hearing.

How to Protect Your Olfactory Nerve

While you can't always prevent certain conditions like cancer or Parkinson's, there are steps you can take to protect your olfactory nerve.

Your olfactory nerve has a vital role in your nervous system. If it becomes damaged, there can be consequences for other parts of your head and nose, so keeping it protected is essential. Fortunately, olfactory neurons can regenerate after an injury, so a loss of smell is often temporary.

Protecting your head, neck, and brain by wearing proper helmets and protection during certain activities can help you prevent a brain injury. To reduce the likelihood and severity of COVID-19, which has caused a loss of taste and smell in some people, stay up to date on the vaccine and boosters.

Your nose contains many neurons, which are sensitive to gases and fumes. Ensure you wear proper protection against chemical exposure, and limit smoking or vaping.

Smell training is also an option if you've developed a smell disorder. A recent study indicated that using a smell-training kit over a few months can help your smell pathways recover and boost neuron regeneration.

Talk to your doctor if you've had an abrupt loss of smell or taste. They'll be able to help you diagnose the cause and start on a treatment plan.

Additional Information

The olfactory system is the sensory system used for the sense of smell (olfaction). Olfaction is one of the special senses directly associated with specific organs. Most mammals and reptiles have a main olfactory system and an accessory olfactory system. The main olfactory system detects airborne substances, while the accessory system senses fluid-phase stimuli.

The senses of smell and taste (gustatory system) are often referred to together as the chemosensory system, because they both give the brain information about the chemical composition of objects through a process called transduction.

Structure:

Peripheral

The peripheral olfactory system consists mainly of the nostrils, ethmoid bone, nasal cavity, and the olfactory epithelium (layers of thin tissue covered in mucus that line the nasal cavity). The primary components of the layers of epithelial tissue are the mucous membranes, olfactory glands, olfactory neurons, and nerve fibers of the olfactory nerves.

Odor molecules can enter the peripheral pathway and reach the nasal cavity either through the nostrils when inhaling (olfaction) or through the throat when the tongue pushes air to the back of the nasal cavity while chewing or swallowing (retro-nasal olfaction). Inside the nasal cavity, mucus lining the walls of the cavity dissolves odor molecules. Mucus also covers the olfactory epithelium, which contains mucous membranes that produce and store mucus, and olfactory glands that secrete metabolic enzymes found in the mucus.

Transduction

Olfactory sensory neurons in the epithelium detect odor molecules dissolved in the mucus and transmit information about the odor to the brain in a process called sensory transduction. Olfactory neurons have cilia (tiny hairs) containing olfactory receptors that bind to odor molecules, causing an electrical response that spreads through the sensory neuron to the olfactory nerve fibers at the back of the nasal cavity.

Olfactory nerves and fibers transmit information about odors from the peripheral olfactory system to the central olfactory system of the brain, which is separated from the epithelium by the cribriform plate of the ethmoid bone. Olfactory nerve fibers, which originate in the epithelium, pass through the cribriform plate, connecting the epithelium to the brain's limbic system at the olfactory bulbs.

Central

The main olfactory bulb transmits pulses to both mitral and tufted cells, which help determine odor concentration based on the time at which certain neuron clusters fire (a 'timing code'). These cells also note differences between highly similar odors and use that data to aid later recognition. The two cell types differ: mitral cells have low firing rates and are easily inhibited by neighboring cells, while tufted cells have high firing rates and are more difficult to inhibit. How the bulbar neural circuit transforms odor inputs to the bulb into the bulbar responses that are sent to the olfactory cortex can be partly understood by a mathematical model.

The uncus houses the olfactory cortex which includes the piriform cortex (posterior orbitofrontal cortex), amygdala, olfactory tubercle, and parahippocampal gyrus.

The olfactory tubercle connects to numerous areas of the amygdala, thalamus, hypothalamus, hippocampus, brain stem, retina, auditory cortex, and olfactory system. In total it has 27 inputs and 20 outputs. An oversimplification of its role is to state that it:

* checks to ensure odor signals arose from actual odors rather than villi irritation,
* regulates motor behavior (primarily social and stereotypical) brought on by odors,
* integrates auditory and olfactory sensory info to complete the aforementioned tasks, and
* plays a role in transmitting positive signals to reward sensors (and is thus involved in addiction).

The amygdala (in olfaction) processes pheromone, allomone, and kairomone (same-species, cross-species, and cross-species where the emitter is harmed and the sensor is benefited, respectively) signals. Due to cerebrum evolution this processing is secondary and therefore is largely unnoticed in human interactions. Allomones include flower scents, natural herbicides, and natural toxic plant chemicals. The info for these processes comes from the vomeronasal organ indirectly via the olfactory bulb. The main olfactory bulb's pulses in the amygdala are used to pair odors to names and recognize odor to odor differences.

The bed nuclei of the stria terminalis (BNST) act as the information pathway between the amygdala and hypothalamus, as well as the hypothalamus and pituitary gland. BNST abnormalities often lead to sexual confusion and immaturity. The BNST also connect to the septal area, rewarding sexual behavior.

Mitral pulses to the hypothalamus promote/discourage feeding, whereas accessory olfactory bulb pulses regulate reproductive and odor-related-reflex processes.

The hippocampus (although minimally connected to the main olfactory bulb) receives almost all of its olfactory information via the amygdala (either directly or via the BNST). The hippocampus forms new memories and reinforces existing ones.

Similarly, the parahippocampus encodes, recognizes and contextualizes scenes. The parahippocampal gyrus houses the topographical map for olfaction.

The orbitofrontal cortex (OFC) is heavily connected with the cingulate gyrus and septal area to carry out positive/negative reinforcement. The OFC encodes the expectation of reward or punishment in response to stimuli and represents emotion and reward in decision making.

The anterior olfactory nucleus distributes reciprocal signals between the olfactory bulb and piriform cortex. The anterior olfactory nucleus is the memory hub for smell.

When different odor objects or components are mixed, humans and other mammals sniffing the mixture (presented by, e.g., a sniff bottle) are often unable to identify the components in the mixture even though they can recognize each individual component presented alone. This is largely because each odor sensory neuron can be excited by multiple odor components. It has been proposed that, in an olfactory environment typically composed of multiple odor components (e.g., odor of a dog entering a kitchen that contains a background coffee odor), feedback from the olfactory cortex to the olfactory bulb suppresses the pre-existing odor background (e.g., coffee) via olfactory adaptation, so that the newly arrived foreground odor (e.g., dog) can be singled out from the mixture for recognition.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2462 2025-02-15 00:05:25

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,518

Re: Miscellany

2361) Statistician

Gist

Statisticians are experts who compile and analyze statistical data to solve problems for businesses, government organizations, and other institutions.

Statisticians are professionals who apply statistical methods and models to real-world problems. They gather, analyze, and interpret data to aid in many business decision-making processes.

Summary

Statisticians have long played a role in research and academia. Recently, however, there has been a spike in demand for statisticians in business, due to the proliferation of data generation and collection across industries and because businesses are now realizing the value of data-driven decision making.

With this increased demand in mind, it’s understandable that more and more professionals are considering careers as statisticians.

Unfortunately, the term “statistician” is rather vague, and many people are unsure what, exactly, these professionals actually do. Here, we explore the responsibilities of a statistician and the education and skills typically required to excel in the role, and we offer some alternative career paths for those who want to work with data.

What Is a Statistician?

Statisticians are professionals who apply statistical methods and models to real-world problems. They gather, analyze, and interpret data to aid in many business decision-making processes. Statisticians are valuable employees in a range of industries, and often seek roles in areas such as business, health and medicine, government, physical sciences, and environmental sciences.

A statistician’s work is valuable in virtually every industry. Their analyses, analytical methods, and data interpretations can directly shape an organization’s actions, and they can communicate patterns in data to advise executives on how business operations should be changed. Data is an increasingly important asset that businesses use to inform their decisions, which means that statisticians have many opportunities to work in different sectors.

Are Statisticians in High Demand?

The job outlook for statisticians is positive. According to our job posting data analysis, overall employment for mathematicians and statisticians is expected to grow 35.5 percent from 2021 to 2031.

Much of this projected growth for statisticians will result from businesses collecting an increasing amount of data from an ever-widening number of sources. In order to analyze and interpret this data, businesses and organizations will need to hire more people specifically trained in such analysis.

The same job postings data shows that the median salary for statisticians was $93,600 per year as of 2023.

Roles and Responsibilities of a Statistician

The specific tasks that statisticians are expected to complete on a daily basis will naturally vary and depend on the specific industry and organization in which they work.

Generally speaking, in the private sector, statisticians often work to interpret data in a way that can inform organizational and business strategies; for example, by understanding changes in consumer behavior and buying trends. In the public sector, on the other hand, analyses will often be focused on furthering the public good; for example, by collecting and analyzing environmental, demographic, or health data.

Regardless of whether a statistician works in the public or private sector, their daily tasks are likely to include:

* Collecting, analyzing, and interpreting data
* Identifying trends and relationships in data
* Designing processes for data collection
* Communicating findings to stakeholders
* Advising organizational and business strategy
* Assisting in decision making

Required Skills for Statisticians

Statisticians must possess a variety of skills to be successful in this field, usually a combination of technical, analytical, communication, and leadership skills. These include:

* Analytical skills: First and foremost, statisticians must be experts in statistical analysis. They must have a keen eye for detecting patterns and anomalies in data.
* Technical skills: To effectively collect and manipulate the data that informs their actions, statisticians must leverage computer systems, algorithms, and other technologies, meaning technical proficiency is critical.
* Communication skills: Although statisticians are experts in mathematics and statistics, they must also exhibit strong communication skills to effectively communicate the findings of their analysis with others in their organization. This includes both verbal and written communication, as well as the ability to present data in easy-to-understand, visual ways.
* Leadership skills: Truly effective statisticians must be able to think critically about the data that they are analyzing through the lens of key stakeholders and executives. Learning to think like a leader can help statisticians identify trends and data points that can make a big difference in their organizations.

Details

A statistician is a person who works with theoretical or applied statistics. The profession exists in both the private and public sectors.

It is common to combine statistical knowledge with expertise in other subjects, and statisticians may work as employees or as statistical consultants.

Overview

According to the United States Bureau of Labor Statistics, as of 2014, 26,970 jobs were classified as statistician in the United States. Of these people, approximately 30 percent worked for governments (federal, state, or local). As of October 2021, the median pay for statisticians in the United States was $92,270.

Additionally, there is a substantial number of people who use statistics and data analysis in their work but have job titles other than statistician, such as actuaries, applied mathematicians, economists, data scientists, data analysts (predictive analytics), financial analysts, psychometricians, sociologists, epidemiologists, and quantitative psychologists. Statisticians are included with the professions in various national and international occupational classifications.

In many countries, including the United States, employment in the field requires either a master's degree in statistics or a related field or a PhD.

According to one industry professional, "Typical work includes collaborating with scientists, providing mathematical modeling, simulations, designing randomized experiments and randomized sampling plans, analyzing experimental or survey results, and forecasting future events (such as sales of a product)."

According to the BLS, "Overall employment [is projected to] grow 33% from 2016 to 2026, much faster than average for all occupations. Businesses will need these workers to analyze the increasing volume of digital and electronic data." In October 2021, CNBC rated it the fastest-growing job in science and technology of the next decade, with a projected growth rate of 35.40%.

Additional Information


Statistics is the science of collecting, analyzing, presenting, and interpreting data. Governmental needs for census data as well as information about a variety of economic activities provided much of the early impetus for the field of statistics. Currently the need to turn the large amounts of data available in many applied fields into useful information has stimulated both theoretical and practical developments in statistics.

Data are the facts and figures that are collected, analyzed, and summarized for presentation and interpretation. Data may be classified as either quantitative or qualitative. Quantitative data measure either how much or how many of something, and qualitative data provide labels, or names, for categories of like items. For example, suppose that a particular study is interested in characteristics such as age, gender, marital status, and annual income for a sample of 100 individuals. These characteristics would be called the variables of the study, and data values for each of the variables would be associated with each individual. Thus, the data values of 28, male, single, and $30,000 would be recorded for a 28-year-old single male with an annual income of $30,000. With 100 individuals and 4 variables, the data set would have 100 × 4 = 400 items. In this example, age and annual income are quantitative variables; the corresponding data values indicate how many years and how much money for each individual. Gender and marital status are qualitative variables. The labels male and female provide the qualitative data for gender, and the labels single, married, divorced, and widowed indicate marital status.
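
As a small illustration (not from the source), the 100-person, 4-variable data set described above can be laid out as a table in Python; the values below are randomly generated placeholders, and the pandas library is assumed to be available.

# A minimal sketch of the 100-person, 4-variable data set described above.
# All values are randomly generated placeholders, not real survey data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.integers(18, 80, size=100),                       # quantitative
    "gender": rng.choice(["male", "female"], size=100),          # qualitative
    "marital_status": rng.choice(
        ["single", "married", "divorced", "widowed"], size=100), # qualitative
    "annual_income": rng.integers(20000, 120000, size=100),      # quantitative
})

print(df.shape)  # (100, 4) -> 100 individuals, 4 variables
print(df.size)   # 400 data items, matching 100 x 4 = 400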

Sample survey methods are used to collect data from observational studies, and experimental design methods are used to collect data from experimental studies. The area of descriptive statistics is concerned primarily with methods of presenting and interpreting data using graphs, tables, and numerical summaries. Whenever statisticians use data from a sample—i.e., a subset of the population—to make statements about a population, they are performing statistical inference. Estimation and hypothesis testing are procedures used to make statistical inferences. Fields such as health care, biology, chemistry, physics, education, engineering, business, and economics make extensive use of statistical inference.

Methods of probability were developed initially for the analysis of gambling games. Probability plays a key role in statistical inference; it is used to provide measures of the quality and precision of the inferences. Many of the methods of statistical inference are described in this article. Some of these methods are used primarily for single-variable studies, while others, such as regression and correlation analysis, are used to make inferences about relationships among two or more variables.
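
As a hedged sketch of estimation and hypothesis testing (not part of the source), the following Python example estimates a population mean from a simulated sample and tests it against a hypothesized value using SciPy; the data and the null value of 100 are invented purely for illustration.

# Minimal sketch: estimation and a one-sample hypothesis test on simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.normal(loc=102.0, scale=15.0, size=50)  # hypothetical sample

# Point estimate and 95% confidence interval for the population mean
mean = sample.mean()
sem = stats.sem(sample)
ci = stats.t.interval(0.95, df=len(sample) - 1, loc=mean, scale=sem)

# Hypothesis test: H0 says the population mean is 100
t_stat, p_value = stats.ttest_1samp(sample, popmean=100.0)

print(f"estimate = {mean:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

A small p-value would be taken as evidence against the hypothesized mean; the same pattern extends to regression and correlation analysis when relationships among variables are of interest.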



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2463 2025-02-16 00:09:48

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,518

Re: Miscellany

2362) Actuary

Gist

Actuaries analyze the financial costs of risk and uncertainty. They use mathematics, statistics, and financial theory to assess the risk of potential events, and they help businesses and clients develop policies that minimize the cost of that risk. Actuaries' work is essential to the insurance industry.

Summary:

What is an actuary?

Actuaries are problem solvers and strategic thinkers who use their mathematical skills to help measure the probability and risk of future events. They use these skills to predict the financial impact of these events on a business and its clients.

Businesses and governments increasingly depend on the skills of actuaries and analysts to help them model and plan for the future. As the world changes at an increasingly rapid pace, risk management expertise can help businesses navigate this evolving landscape.

What does an actuary do?

Actuaries possess a unique mix of mathematical, analytical, communication and management skills. They apply their abilities to create social impact, inform high-level strategic decisions and have a significant impact on legislation, businesses, and peoples' lives.

Actuaries are creative, curious and adaptable and it’s this learning mindset that helps them succeed in the digital age. Actuaries’ unique combination of technical skills and professional acumen ensures they will continue to make a difference, guarding against the impacts of future uncertainty.

Where do actuaries work?

Although actuaries are often associated with traditional fields such as life, pensions, and insurance, an increasing number of actuaries are moving into a whole range of new areas. Health, banking and finance, technology, and climate change are just some of the areas where you can now find actuaries.

The rise of artificial intelligence and data science is challenging actuaries to think differently and is creating opportunities that never existed before. Whether you work in-house within an organisation or in a consultancy firm supporting different clients, you will enjoy a financially rewarding career where you can grow, develop, and be challenged.

Why should you become an actuary?

Being an actuary means having the opportunity to apply highly valued mathematical skills and expertise in a diverse, exciting and challenging career that really makes a difference. Your hard work is rewarded with a highly competitive salary and a good work/life balance in comparison to similarly paid professions in the financial services, such as investment banking.

Details

An actuary is a professional with advanced mathematical skills who deals with the measurement and management of risk and uncertainty. These risks can affect both sides of the balance sheet and require asset management, liability management, and valuation skills. Actuaries provide assessments of financial security systems, with a focus on their complexity, their mathematics, and their mechanisms. The name of the corresponding academic discipline is actuarial science.

While the concept of insurance dates to antiquity, the concepts needed to scientifically measure and mitigate risks have their origins in 17th-century studies of probability and annuities. Actuaries of the 21st century require analytical skills, business knowledge, and an understanding of human behavior and information systems to design programs that manage risk, by determining whether the cost of implementing strategies proposed for mitigating potential risks exceeds the expected cost of those risks if they materialize. The steps needed to become an actuary, including education and licensing, are specific to a given country, with various additional requirements applied by regional administrative units; however, almost all processes impart universal principles of risk assessment, statistical analysis, and risk mitigation, involving rigorously structured training and examination schedules that take many years to complete.

The profession has consistently been ranked as one of the most desirable. In various studies in the United States, being an actuary was ranked first or second multiple times since 2010, and in the top 20 for most of the past decade.

Responsibilities

Actuaries use skills primarily in mathematics, particularly calculus-based probability and mathematical statistics, but also economics, computer science, finance, and business. For this reason, actuaries are essential to the insurance and reinsurance industries, either as staff employees or as consultants; to other businesses, including sponsors of pension plans; and to government agencies such as the Government Actuary's Department in the United Kingdom or the Social Security Administration in the United States of America. Actuaries assemble and analyze data to estimate the probability and likely cost of the occurrence of an event such as death, sickness, injury, disability, or loss of property. Actuaries also address financial questions, including those involving the level of pension contributions required to produce a certain retirement income and the way in which a company should invest resources to maximize its return on investments in light of potential risk. Using their broad knowledge, actuaries help design and price insurance policies, pension plans, and other financial strategies in a manner that will help ensure that the plans are maintained on a sound financial basis.

Disciplines

Most traditional actuarial disciplines fall into two main categories: life and non-life.

Life actuaries, which includes health and pension actuaries, primarily deal with mortality risk, morbidity risk, and investment risk. Products prominent in their work include life insurance, annuities, pensions, short and long term disability insurance, health insurance, health savings accounts, and long-term care insurance. In addition to these risks, social insurance programs are influenced by public opinion, politics, budget constraints, changing demographics, and other factors such as medical technology, inflation, and cost of living considerations.

Non-life actuaries, also known as "property and casualty" (mainly US) or "general insurance" (mainly UK) actuaries, deal with both physical and legal risks that affect people or their property. Products prominent in their work include auto insurance, homeowners insurance, commercial property insurance, workers' compensation, malpractice insurance, product liability insurance, marine insurance, terrorism insurance, and other types of liability insurance.

Actuaries are also called upon for their expertise in enterprise risk management. This can involve dynamic financial analysis, stress testing, the formulation of corporate risk policy, and the setting up and running of corporate risk departments. Actuaries are also involved in other areas in the economic and financial field, such as analyzing securities offerings or market research.

Traditional employment

On both the life and casualty sides, the classical function of actuaries is to calculate premiums and reserves for insurance policies covering various risks. On the casualty side, this analysis often involves quantifying the probability of a loss event, called the frequency, and the size of that loss event, called the severity. The amount of time that elapses before the loss event also matters, since the insurer will not have to pay anything until after the event has occurred. On the life side, the analysis often involves quantifying how much a potential sum of money or a financial liability will be worth at different points in the future. Since neither of these kinds of analysis is a purely deterministic process, stochastic models are often used to determine frequency and severity distributions and the parameters of these distributions. Forecasting interest yields and currency movements also plays a role in determining future costs, especially on the life side.
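
To make the frequency/severity idea concrete, here is a minimal Monte Carlo sketch in Python (an illustration only, not a method described in the source): claim counts are assumed to be Poisson and claim sizes lognormal, with invented parameters, and the simulation estimates the distribution of aggregate annual losses.

# Minimal frequency-severity simulation: aggregate loss = sum of 'severity'
# draws over a Poisson 'frequency' of claims. All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
n_years = 100_000              # simulated policy-years
freq_mean = 0.3                # expected number of claims per year (assumed)
sev_mu, sev_sigma = 8.0, 1.2   # lognormal parameters for claim size (assumed)

counts = rng.poisson(freq_mean, size=n_years)
aggregate = np.array([
    rng.lognormal(sev_mu, sev_sigma, size=n).sum() for n in counts
])

print(f"expected aggregate loss per year: {aggregate.mean():,.0f}")
print(f"99th percentile of aggregate loss: {np.percentile(aggregate, 99):,.0f}")

The mean of the simulated aggregate losses suggests a pure premium, while a high percentile gives a rough sense of the reserves needed for bad years; a real pricing exercise would fit the frequency and severity distributions to data rather than assume them.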

Actuaries do not always attempt to predict aggregate future events. Often, their work may relate to determining the cost of financial liabilities that have already occurred, called retrospective reinsurance, or the development or re-pricing of new products.

Actuaries also design and maintain products and systems. They are involved in financial reporting of companies' assets and liabilities. They must communicate complex concepts to clients who may not share their language or depth of knowledge. Actuaries work under a code of ethics that covers their communications and work products.

Non-traditional employment

As an outgrowth of their more traditional roles, actuaries also work in the fields of risk management and enterprise risk management for both financial and non-financial corporations. Actuaries in traditional roles study and use the tools and data previously in the domain of finance. The Basel II accord for financial institutions (2004), and its analogue, the Solvency II accord for insurance companies (in force since 2016), require institutions to account for operational risk separately, and in addition to, credit, reserve, asset, and insolvency risk. Actuarial skills are well suited to this environment because of their training in analyzing various forms of risk, and judging the potential for upside gain, as well as downside loss associated with these forms of risk.

Actuaries are also involved in investment advice and asset management, and can be general business managers and chief financial officers. They analyze business prospects with their financial skills in valuing or discounting risky future cash flows, and apply their pricing expertise from insurance to other lines of business. For example, insurance securitization requires both actuarial and finance skills. Actuaries also act as expert witnesses by applying their analysis in court trials to estimate the economic value of losses such as lost profits or lost wages.

History

Need for insurance

The basic requirements of communal interests gave rise to risk sharing since the dawn of civilization. For example, people who lived their entire lives in a camp had the risk of fire, which would leave their band or family without shelter. After barter came into existence, more complex risks emerged and new forms of risk manifested. Merchants embarking on trade journeys bore the risk of losing goods entrusted to them, their own possessions, or even their lives. Intermediaries developed to warehouse and trade goods, which exposed them to financial risk. The primary providers in extended families or households ran the risk of premature death, disability or infirmity, which could leave their dependents to starve. Credit procurement was difficult if the creditor worried about repayment in the event of the borrower's death or infirmity. Alternatively, people sometimes lived too long from a financial perspective, exhausting their savings, if any, or becoming a burden on others in the extended family or society.

Early attempts

In the ancient world there was not always room for the sick, suffering, disabled, aged, or the poor—these were often not part of the cultural consciousness of societies. Early methods of protection, aside from the normal support of the extended family, involved charity; religious organizations or neighbors would collect for the destitute and needy. By the middle of the 3rd century, charitable operations in Rome supported 1,500 suffering people. Charitable protection remains an active form of support in the modern era, but receiving charity is uncertain and often accompanied by social stigma.

Elementary mutual aid agreements and pensions did arise in antiquity. Early in the Roman empire, associations were formed to meet the expenses of burial, cremation, and monuments—precursors to burial insurance and friendly societies. A small sum was paid into a communal fund on a weekly basis, and upon the death of a member, the fund would cover the expenses of rites and burial. These societies sometimes sold shares in the building of columbāria, or burial vaults, owned by the fund. Other early examples of mutual surety and assurance pacts can be traced back to various forms of fellowship within the Saxon clans of England and their Germanic forebears, and to Celtic society.

Non-life insurance started as a hedge against loss of cargo during sea travel. Anecdotal reports of such guarantees occur in the writings of Demosthenes, who lived in the 4th century BCE. The earliest records of an official non-life insurance policy come from Sicily, where there is record of a 14th-century contract to insure a shipment of wheat. In 1350, Lenardo Cattaneo assumed "all risks from act of God, or of man, and from perils of the sea" that may occur to a shipment of wheat from Sicily to Tunis up to a maximum of 300 florins. For this he was paid a premium of 18%.

Development of theory

During the 17th century, a more scientific basis for risk management was being developed. In 1662, a London draper named John Graunt showed that there were predictable patterns of longevity and death in a defined group, or cohort, of people, despite the uncertainty about the future longevity or mortality of any one individual. This study became the basis for the original life table. Combining this idea with that of compound interest and annuity valuation, it became possible to set up an insurance scheme to provide life insurance or pensions for a group of people, and to calculate with some degree of accuracy each member's necessary contributions to a common fund, assuming a fixed rate of interest. The first person to correctly calculate these values was Edmond Halley. In his work, Halley demonstrated a method of using his life table to calculate the premium someone of a given age should pay to purchase a life-annuity.
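
As a sketch of the kind of calculation Halley made (written in modern actuarial notation, which is not in the source), the expected present value of a life annuity paying 1 per year to a person now aged x combines life-table survival probabilities with compound-interest discounting:

a_x = \sum_{t=1}^{\infty} v^{t} \, {}_{t}p_{x}, \qquad v = \frac{1}{1+i}, \qquad {}_{t}p_{x} = \frac{l_{x+t}}{l_{x}}

where i is the assumed annual interest rate and l_x is the number of survivors at age x in the life table. The fair single premium for the annuity is this expected present value.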

Early actuaries

James Dodson's pioneering work on the level premium system led to the formation of the Society for Equitable Assurances on Lives and Survivorship (now commonly known as Equitable Life) in London in 1762. This was the first life insurance company to use premium rates that were calculated scientifically for long-term life policies, using Dodson's work. After Dodson's death in 1757, Edward Rowe Mores took over the leadership of the group that eventually became the Society for Equitable Assurances. It was he who specified that the chief official should be called an actuary. Previously, the use of the term had been restricted to an official who recorded the decisions, or acts, of ecclesiastical courts, in ancient times originally the secretary of the Roman senate, responsible for compiling the Acta Senatus. Other companies that did not originally use such mathematical and scientific methods most often failed or were forced to adopt the methods pioneered by Equitable.

Development of the modern profession

In the 18th and 19th centuries, computational complexity was limited to manual calculations. The calculations required to compute fair insurance premiums can be burdensome. The actuaries of that time developed methods to construct easily used tables, using arithmetical short-cuts called commutation functions, to facilitate timely, accurate, manual calculations of premiums. In the mid-19th century, professional bodies were founded to support and further both actuaries and actuarial science, and to protect the public interest by ensuring competency and ethical standards. Since calculations were cumbersome, actuarial shortcuts were commonplace.

Non-life actuaries followed in the footsteps of their life compatriots in the early 20th century. In the United States, the 1920 revision to workers' compensation rates took over two months of around-the-clock work by day and night teams of actuaries. In the 1930s and 1940s, rigorous mathematical foundations for stochastic processes were developed. Actuaries began to forecast losses using models of random events instead of deterministic methods. Computers further revolutionized the actuarial profession. From pencil-and-paper to punchcards to microcomputers, the modeling and forecasting ability of the actuary has grown vastly.

Another modern development is the convergence of modern finance theory with actuarial science. In the early 20th century, some economists and actuaries were developing techniques that can be found in modern financial theory, but for various historical reasons, these developments did not achieve much recognition. In the late 1980s and early 1990s, there was a distinct effort for actuaries to combine financial theory and stochastic methods into their established models. In the 21st century, the profession, both in practice and in the educational syllabi of many actuarial organizations, combines tables, loss models, stochastic methods, and financial theory, but is still not completely aligned with modern financial economics.

Remuneration and ranking

As there are relatively few actuaries in the world compared to other professions, actuaries are in high demand, and are highly paid for the services they render.

The actuarial profession has been consistently ranked for decades as one of the most desirable. Actuaries work comparatively reasonable hours, in comfortable conditions, without the need for physical exertion that may lead to injury, are well paid, and the profession consistently has a good hiring outlook. Not only has the overall profession ranked highly, but it also is considered one of the best professions for women, and one of the best recession-proof professions. In the United States, the profession was rated as the best profession by CareerCast, which uses five key criteria to rank jobs—environment, income, employment outlook, physical demands, and stress, in 2010, 2013, and 2015. In other years, it remained in the top 20.

Credentialing and exams

Becoming a fully credentialed actuary requires passing a rigorous series of professional examinations, usually taking several years. In some countries, such as Denmark, most study takes place in a university setting. In others, such as the US, most study takes place during employment through a series of examinations. In the UK, and countries based on its process, there is a hybrid university-exam structure.

Exam support

As these qualifying exams are extremely rigorous, support is usually available to people progressing through the exams. Often, employers provide paid on-the-job study time and paid attendance at seminars designed for the exams. Also, many companies that employ actuaries have automatic pay raises or promotions when exams are passed. As a result, actuarial students have strong incentives for devoting adequate study time during off-work hours. A common rule of thumb for exam students is that, for the Society of Actuaries examinations, roughly 400 hours of study time are necessary for each four-hour exam. Thus, thousands of hours of study time should be anticipated over several years, assuming no failures.

Pass marks and pass rates

Historically, the actuarial profession has been reluctant to specify the pass marks for its examinations. To address concerns that there are pre-existing pass/fail quotas, a former chairman of the Board of Examiners of the Institute and Faculty of Actuaries stated: "Although students find it hard to believe, the Board of Examiners does not have fail quotas to achieve. Accordingly, pass rates are free to vary (and do). They are determined by the quality of the candidates sitting the examination and in particular how well prepared they are. Fitness to pass is the criterion, not whether you can achieve a mark in the top 40% of candidates sitting." In 2000, the Casualty Actuarial Society (CAS) decided to start releasing pass marks for the exams it offers. The CAS's policy is also not to grade to specific pass ratios; the CAS board affirmed in 2001 that "the CAS shall use no predetermined pass ratio as a guideline for setting the pass mark for any examination. If the CAS determines that 70% of all candidates have demonstrated sufficient grasp of the syllabus material, then those 70% should pass. Similarly, if the CAS determines that only 30% of all candidates have demonstrated sufficient grasp of the syllabus material, then only those 30% should pass."

Additional Information

An actuary is one who calculates insurance risks and premiums. Actuaries compute the probability of the occurrence of various contingencies of human life, such as birth, marriage, sickness, unemployment, accidents, retirement, and death. They also evaluate the hazards of property damage or loss and the legal liability for the safety and well-being of others.

Most actuaries are employed by insurance companies. They make statistical studies to establish basic mortality and morbidity tables, develop corresponding premium rates, establish underwriting practices and procedures, determine the amounts of money required to assure payment of benefits, analyze company earnings, and counsel with the company accounting staff in organizing records and statements. In many insurance companies the actuary is a senior officer.

Some actuaries serve as consultants, and some are employed by large industrial corporations and governments to advise on insurance and pension matters.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2464 2025-02-17 00:01:31

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,518

Re: Miscellany

2363) Population Density

Gist

Population density is the number of people per unit of area, usually expressed per square kilometer or square mile, and it may include or exclude, for example, areas of water or glaciers.

Population density is the measure of the number of individuals living within a specific area. To determine population density, the number of individuals is divided by the size of the area.

Summary

Population density is the number of individuals of a certain species per unit area.

When analyzing human populations, the related measures of agricultural density and physiological density are often preferred. Agricultural density is the number of farmers per unit of arable land and is used to study the productivity of farms. Physiological density is the number of individuals per unit of arable land and is used to examine how many people a unit of agricultural land supports.

Uses:

Administrative uses

Population density is used to understand human geography. A number of countries collect data on their population, often through routine censuses, which are then combined with area measurements to understand the population density of the country and its administrative divisions. Governments, on both national and local levels, use this information to allocate resources, determine administrative or electoral districts, and make decisions regarding urban planning. The United Nations Statistics Division collects statistical information, including population density, and uses it to track development goals.

In the sciences

Population density is used in a variety of ways in the social and biological sciences. In the social sciences population density is concerned with the number of humans who lived or are living in a certain area. It is frequently used in studies of urbanization, which may consider data from time periods ranging from the urban revolution to modernization. The population density of human settlements is also used in studies of infectious disease and how environmental factors affect its spread among people as well as in studies that track the impact of crowded or sparse environments on mental health.

Ecologists use population density as part of population ecology, which studies the distribution and abundance of animal and plant populations. Both low and high population densities can present challenges for a species. Increased competition for food, predation, migration, and disease are common in areas with high population densities. However, in areas where a species has a low population density, individuals may struggle to find mates. Population density is also used to calculate carrying capacity.

Problems in population density

Population density is an imperfect measure. It can obscure differences in settlement intensity within a unit of land. If, for example, an area of land contains both city and countryside and a simple population density is calculated, those who live in the city will appear to live in a lower-density area than they actually do, while those in towns and villages will appear to live in a higher-density area. Population-weighted densities, which weight different densities by their population sizes, can be used to counter this distortion. Censuses and other measures of population may struggle to accurately count certain populations, particularly mobile populations such as nomadic groups, migrants, and refugees. Individuals without housing and those who live in settlements that are not recognized by governmental authorities are often undercounted as well. Some studies that consider population density, particularly those focusing on animals, use sampling to estimate the number of individuals in the area of interest and so are subject to sampling error, which happens because samples contain only a fraction of values in a population and are thus not perfectly representative of the entire set.
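
As a brief sketch of the idea (the formula below is a standard construction, not taken from the source), a population-weighted density averages the densities of the sub-areas, weighting each sub-area by its share of the total population rather than its share of the land:

D_w = \sum_{i} \left( \frac{P_i}{\sum_{j} P_j} \right) \frac{P_i}{A_i}

where P_i and A_i are the population and land area of sub-area i. The conventional density is instead \left( \sum_i P_i \right) / \left( \sum_i A_i \right), which gives each unit of land, rather than each person, equal weight.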

Details

Population density is the concentration of individuals within a species in a specific geographic locale. Population density data can be used to quantify demographic information and to assess relationships among ecosystems, human health and infrastructure.

A population is a subgroup of individuals within the same species that are living and breeding within a geographic area. The number of individuals living within that specific location determines the population density, or the number of individuals divided by the size of the area. In the case of humans, population density is often described as the number of people per square kilometer (or per square mile) and is discussed in relation to urbanization, migration and demographics.

Globally, statistics related to population density are tracked by the United Nations Statistics Division. Many countries also collect their own data on population density, often through the use of a census. In the United States, for example, the Constitution requires population data to be collected every 10 years, a task carried out by the U.S. Census Bureau. China also conducts a census once a decade and uses the information for economic and social planning. In recent years, some countries have employed new technologies to confirm population density and predict changes. China, for example, analyzes nighttime light data collected from sensors in space to confirm current population density calculations and to predict how density will change in coming years. Other countries use computer algorithms to interpret census data and forecast future growth.

There are also independent research groups that track population density across the world. Independent groups are especially helpful in countries that have few resources or are facing challenges (such as conflict) that make it difficult for the government to collect accurate data.

Data on population density at the national and regional levels is not always useful, however. One reason for this is that census data can be flawed; in particular, it sometimes fails to count entire groups of people, such as nomadic populations, migrants, refugees, people experiencing homelessness, residents of informal settlements (such as favelas) and others who do not have clear citizenship status. Collecting census data often relies on census takers visiting people at home, so residents who live in areas where there is conflict or who live in remote, hard-to-access places may also not be counted. In many places, forms have replaced census takers, but this can contribute to inaccuracies in areas with low literacy rates.

Population density provides the number of people living on a given area of land (such as people per square kilometer or per square mile), but it does not describe how people relate to the land of that country. For example, society tends to form clusters that can be surrounded by sparsely inhabited areas, and life might be very different for those who live in the cluster versus those who live outside of it. Similarly, the population density of an area that includes a large city and desert may be the same as that for a population that is more evenly spread in an area, despite the fact that the two areas have little in common in terms of crowdedness. As a result, population density is more useful when applied to small areas, such as neighborhoods, than to entire regions or countries.

Some methods of calculating population density are more helpful for certain kinds of analysis than a strict “people per unit of land area” formula. Physiological density is a calculation of how many people in the country are supported by a unit of arable land. This can reveal how much pressure is put on the country to produce enough food for the population. Agricultural density measures the number of farmers or rural residents per unit of arable land. These kinds of metrics help illuminate issues of population density that would otherwise be missed.

Dense population clusters generally coincide with geographical locations often referred to as cities, or as urban or metropolitan areas; sparsely populated areas are often referred to as rural. While these terms do not have globally agreed upon definitions, they are useful in general discussions about population density and geographic location.

Population density data can be important for many related studies, including studies of ecosystems and improvements to human health and infrastructure. For example, the World Health Organization, the U.S. Energy Information Administration, the U.S. Global Change Research Program and the U.S. Departments of Energy and Agriculture all use population data from the U.S. Census or United Nations statistics to understand and better predict resource use and health trends.

Key areas of study include the following:

* Ecology: how increasing population density in certain areas impacts biodiversity and use of natural resources.
* Epidemiology: how densely populated areas differ with respect to incidence, prevalence and transmission of infectious disease.
* Infrastructure: how population density drives specific requirements for energy use and the transport of goods.
* Urbanization: how population density relates to overcrowding, housing shortages and the provision of services.

This list is not exhaustive; the way society structures its living spaces affects many other fields of study as well. Scientists have even studied how happiness correlates with population density. A substantial area of study, however, focuses on the demographics of populations as they relate to density. Areas of demographic breakdown and study include, but are not limited to:

* age (including tracking of older populations);
* race, ethnicity and cultural characteristics (ethnic origin, religion and language); and
* socioeconomic characteristics (including poverty rates).

Additional Information

Population density (in agriculture: standing stock or plant density) is a measurement of population per unit land area. It is mostly applied to humans, but sometimes to other living organisms too. It is a key geographical term.

Biological population densities

Population density is population divided by total land area, sometimes including seas and oceans, as appropriate.

Low densities may cause an extinction vortex and further reduce fertility. This is called the Allee effect, after the scientist who identified it. (A minimal model sketch follows the list below.) Examples of the causes of reduced fertility in low population densities are:

* Increased problems with locating sexual mates
* Increased inbreeding
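
One common textbook way to capture a strong Allee effect (not taken from the text above; the parameters below are hypothetical) is a modified logistic model, dN/dt = rN(1 - N/K)(N/A - 1), where A is the critical density below which growth turns negative. A minimal Python sketch:

# Minimal sketch of a strong Allee effect (hypothetical parameters).
# dN/dt = r * N * (1 - N/K) * (N/A - 1): growth is negative below the
# critical density A, illustrating how low density itself harms a population.

def allee_growth_rate(N, r=0.5, K=1000.0, A=50.0):
    """Per-unit-time change in population size N."""
    return r * N * (1 - N / K) * (N / A - 1)

def simulate(N0, steps=200, dt=0.1):
    """Simple Euler integration of the model."""
    N = N0
    for _ in range(steps):
        N = max(N + dt * allee_growth_rate(N), 0.0)
    return N

print(simulate(40))   # starts below A = 50: declines toward extinction
print(simulate(60))   # starts above A = 50: grows toward K = 1000

Starting below the critical density A, the population collapses even though resources (K) are plentiful, which is exactly the low-density problem described above.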

Human densities

Population density is the number of people per unit of area, usually expressed as "per square kilometer" or per square mile, and which may include or exclude, for example, areas of water or glaciers. Commonly this is calculated for a county, city, country, another territory or the entire world.

The world's population is around 8,000,000,000 and the Earth's total area (including land and water) is 510,000,000 {km}^{2} (200,000,000 sq mi). Therefore, the worldwide human population density is approximately 8,000,000,000 ÷ 510,000,000 ≈ 16 per {km}^{2} (41/sq mi). However, if only the Earth's land area of 150,000,000 {km}^{2} (58,000,000 sq mi) is taken into account, then human population density is about 53 per {km}^{2} (140/sq mi). This includes all continental and island land area, including Antarctica. If Antarctica is also excluded, the population density rises to over 58 per square kilometre (150/sq mi).

The European Commission's Joint Research Centre (JRC) has developed a suite of (open and free) data and tools named the Global Human Settlement Layer (GHSL) to improve the science for policy support to the European Commission Directorate Generals and Services and as support to the United Nations system.

Several of the most densely populated territories in the world are city-states, microstates and urban dependencies. In fact, 95% of the world's population is concentrated on just 10% of the world's land. These territories have a relatively small area and a high urbanization level, with an economically specialized city population drawing also on rural resources outside the area, illustrating the difference between high population density and overpopulation.

Deserts have very limited potential for growing crops as there is not enough rain to support them. Thus, their population density is generally low. However, some cities in the Middle East, such as Dubai, have been increasing in population and infrastructure growth at a fast pace.

Mongolia, whose geography results in a harsh climate, is the least densely populated country in the world; much of its territory is open steppe.

Cities with high population densities are, by some, considered to be overpopulated, though this will depend on factors like quality of housing and infrastructure and access to resources. Most very densely populated cities are in Asia (particularly Southeast Asia); Lagos, Kinshasa and Cairo in Africa, Bogotá, Lima and São Paulo in South America, and Mexico City and Saint Petersburg also fall into this category.

Monaco is currently the most densely populated nation in Europe.

City population and especially area are, however, heavily dependent on the definition of "urban area" used: densities are almost invariably higher for the center only than when suburban settlements and intervening rural areas are included, as in the agglomeration or metropolitan area (the latter sometimes including neighboring cities).

In comparison, based on a world population of 8 billion, the world's inhabitants, if conceptualized as a loose crowd occupying just under 1 {m}^{2} (10 sq ft) per person (cf. Jacobs Method), would occupy an area of 8,000 square kilometres (3,100 sq mi), a little less than the land area of Puerto Rico, 8,868 square kilometres (3,424 sq mi).

Other methods of measurement

Although the arithmetic density is the most common way of measuring population density, several other methods have been developed to provide alternative measures of population density over a specific area; a short worked sketch follows the list below.

* Arithmetic density: The total number of people / area of land
* Physiological density: The total population / area of arable land
* Agricultural density: The total rural population / area of arable land
* Residential density: The number of people living in an urban area / area of residential land
* Urban density: The number of people inhabiting an urban area / total area of urban land
* Ecological optimum: The density of population that can be supported by the natural resources.
* Population-weighted density: Also known as living density, the population density at which the average person lives.
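
The first three measures above are simple ratios. The Python sketch below, with hypothetical figures rather than real national data, only illustrates the definitions:

# Minimal sketch of the first three density measures above (hypothetical figures).
people       = 50_000_000       # total population
land_km2     = 500_000.0        # total land area in square kilometres
arable_km2   = 100_000.0        # arable land area
rural_people = 15_000_000       # rural population

arithmetic_density    = people / land_km2          # people per km^2 of land
physiological_density = people / arable_km2        # people per km^2 of arable land
agricultural_density  = rural_people / arable_km2  # rural people per km^2 of arable land

print(arithmetic_density, physiological_density, agricultural_density)
# 100.0 500.0 150.0 -- the same country looks very different depending on the measure

The same hypothetical country looks sparsely settled by arithmetic density but far more pressured once only its arable land is considered.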

World-Population-Density-people-km-2-Source-The-World-Population-Density-2016-World.ppm


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2465 2025-02-17 20:33:36

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,518

Re: Miscellany

2364) Geothermal Power Plant

Gist

Geothermal power plants use hydrothermal resources that have both water (hydro) and heat (thermal). Geothermal power plants require high-temperature hydrothermal resources—300 degrees Fahrenheit (°F) to 700° F—that come from either dry steam wells or from hot water wells.

Geothermal energy is heat energy from the earth—geo (earth) + thermal (heat). Geothermal resources are reservoirs of hot water that exist or are human-made at varying temperatures and depths below the earth's surface.

Summary

Geothermal energy is a natural resource of heat energy from within Earth that can be captured and harnessed for cooking, bathing, space heating, electrical power generation, and other uses. The total amount of geothermal energy within Earth is vastly in excess of the world’s current energy requirements, but it can be difficult to harness for electricity production. Despite its challenges, geothermal energy stands in stark contrast to the combustion of greenhouse gas-emitting fossil fuels (namely coal, petroleum, and natural gas) driving much of the climate crisis, and it has become increasingly attractive as a renewable energy source.

Mechanism and potential

Temperatures increase below Earth’s surface at a rate of about 30 °C per km in the first 10 km (roughly 90 °F per mile in the first 6 miles) below the surface. This internal heat of Earth is an immense store of energy and can manifest aboveground in phenomena such as volcanoes, lava flows, geysers, fumaroles, hot springs, and mud pots. The heat is produced mainly by the radioactive decay of potassium, thorium, and uranium in Earth’s crust and mantle and also by friction generated along the margins of continental plates.

Worldwide, the annual low-grade heat flow to the surface of Earth averages between 50 and 70 milliwatts (mW) per square meter. In contrast, incoming solar radiation striking Earth’s surface provides 342 watts per square meter annually. In the upper 10 km of rock beneath the contiguous United States alone, geothermal energy amounts to 3.3 × {10}^{25} joules, or about 6,000 times the energy contained in the world’s oil reserves. The estimated energy that can be recovered and utilized on the surface is 4.5 × {10}^{6} exajoules, or about 1.4 × {10}^{6} terawatt-years, which equates to roughly three times the world’s annual consumption of all types of energy.

Although geothermal energy is plentiful, geothermal power is not. The amount of usable energy from geothermal sources varies with depth and by extraction method. Normally, heat extraction requires a fluid (or steam) to bring the energy to the surface. Locating and developing geothermal resources can be challenging. This is especially true for the high-temperature resources needed for generating electricity. Such resources are typically limited to parts of the world characterized by recent volcanic activity or located along plate boundaries (such as along the Pacific Ring of Fire) or within crustal hot spots (such as Yellowstone National Park and the Hawaiian Islands). Geothermal reservoirs associated with those regions must have a heat source, adequate water recharge, adequate permeability or faults that allow fluids to rise close to the surface, and an impermeable caprock to prevent the escape of the heat. In addition, such reservoirs must be economically accessible (that is, within the range of drills). The most economically efficient facilities are located close to the geothermal resource to minimize the expense of constructing long pipelines. In the case of electric power generation, costs can be kept down by locating the facility near electrical transmission lines to transmit the electricity to market. Even though there is a continuous source of heat within Earth, the extraction rate of the heated fluids and steam can exceed the replenishment rate, and, thus, use of the resource must be managed sustainably.

Uses and history

Geothermal energy use can be divided into three categories: direct-use applications, geothermal heat pumps (GHPs), and electric power generation.

Direct uses

Probably the most widely used set of applications of geothermal energy involves the direct use of heated water from the ground without the need for any specialized equipment. All direct-use applications make use of low-temperature geothermal resources, which range between about 50 and 150 °C (122 and 302 °F). Such low-temperature geothermal water and steam have been used to warm single buildings, as well as whole districts where numerous buildings are heated from a central supply source. In addition, many swimming pools, balneological (therapeutic) facilities at spas, greenhouses, and aquaculture ponds around the world have been heated with geothermal resources.

Geothermal energy from natural pools and hot springs has long been used for cooking, bathing, and warmth. There is evidence that Native Americans used geothermal energy for cooking as early as 10,000 years ago. In ancient times, baths heated by hot springs were used by the Greeks and Romans. Such uses of geothermal energy were initially limited to sites where hot water and steam were accessible.

Other direct uses of geothermal energy include cooking, industrial applications (such as drying fruit, vegetables, and timber), milk pasteurization, and large-scale snow melting. For many of those activities, hot water is often used directly in the heating system, or it may be used in conjunction with a heat exchanger, which transfers heat when there are problematic minerals and gases such as hydrogen sulfide mixed in with the fluid. Early industrial direct-use applications included the extraction of borate compounds from geothermal fluids at Larderello, Italy, during the early 19th century.

Geothermal heat pumps

Geothermal energy is also used for the heating and cooling of buildings. Examples of geothermal space heating date at least as far back as the Roman city of Pompeii during the 1st century CE. Although the world’s first district heating system was installed at Chaudes-Aigues, France, in the 14th century, it was not until the late 19th century that other cities, as well as industries, began to realize the economic potential of geothermal resources. Geothermal heat was delivered to the first residences in the United States in 1892, to Warm Springs Avenue in Boise, Idaho, and most of the city used geothermal heat by 1970. The largest and most famous geothermal district heating system is in Reykjavík, Iceland, where 99 percent of the city received geothermal water for space heating by the mid-1970s after efforts began in the 1930s.

Beginning in the late 20th century, geothermal heat pumps gained popularity in many places as a greener alternative to traditional boilers, furnaces, and air conditioners. Utilizing pipes buried in the ground, these systems take advantage of the relatively stable conditions that occur within 6 meters (about 20 feet) of Earth’s surface, where the ground maintains a near-constant temperature of 10 to 16 °C (50 to 60 °F). Consequently, geothermal heat can be used to help warm buildings when the air temperature falls below that of the ground, and GHPs can also help to cool buildings when the air temperature is greater than that of the ground by drawing warm air from a building and circulating it underground, where it loses much of its heat, before returning it. GHPs are very efficient, using 25–50 percent less electricity than comparable conventional heating and cooling systems, and they produce less pollution.

Electric power generation

Depending upon the temperature and the fluid (steam) flow, geothermal energy can also be used to generate electricity. Geothermal power plants control the behavior of steam and use it to drive electrical generators. Some “dry steam” geothermal power plants simply collect rising steam from the ground and funnel it directly into a turbine. Other power plants, built around the flash steam and binary cycle designs, use a mixture of steam and heated water (“wet steam”) extracted from the ground to start the electrical generation process. Given that the excess water vapor at the end of each process is condensed and returned to the ground, where it is reheated for later use, geothermal power is considered a form of renewable energy.

The first geothermal electric power generation took place in Larderello, Italy, with the development of an experimental plant in 1904. The first commercial use of that technology occurred there in 1913 with the construction of a plant that produced 250 kilowatts (kW). Geothermal power plants were commissioned in New Zealand starting in 1958 and at the Geysers in northern California in 1960.

Details

Geothermal power is electrical power generated from geothermal energy. Technologies in use include dry steam power stations, flash steam power stations and binary cycle power stations. Geothermal electricity generation is currently used in 26 countries, while geothermal heating is in use in 70 countries.

As of 2019, worldwide geothermal power capacity amounts to 15.4 gigawatts (GW), of which 23.9% (3.68 GW) are installed in the United States. International markets grew at an average annual rate of 5 percent over the three years to 2015, and global geothermal power capacity was expected to reach 14.5–17.6 GW by 2020. Based on the current geologic knowledge and technology that the Geothermal Energy Association (GEA) publicly discloses, the GEA estimates that only 6.9% of total global potential has been tapped so far, while the IPCC reported geothermal power potential to be in the range of 35 GW to 2 TW. Countries generating more than 15 percent of their electricity from geothermal sources include El Salvador, Kenya, the Philippines, Iceland, New Zealand, and Costa Rica. Indonesia has an estimated potential of 29 GW of geothermal energy resources, the largest in the world; in 2017, its installed capacity was 1.8 GW.

Geothermal power is considered to be a sustainable, renewable source of energy because the heat extraction is small compared with the Earth's heat content. The greenhouse gas emissions of geothermal electric stations average 45 grams of carbon dioxide per kilowatt-hour of electricity, or less than 5% of those of conventional coal-fired plants.

As a source of renewable energy for both power and heating, geothermal has the potential to meet 3 to 5% of global demand by 2050. With economic incentives, it is estimated that by 2100 it will be possible to meet 10% of global demand with geothermal power.

History and development

In the 20th century, demand for electricity led to the consideration of geothermal power as a generating source. Prince Piero Ginori Conti tested the first geothermal power generator on 4 July 1904 in Larderello, Italy. It successfully lit four light bulbs. Later, in 1911, the world's first commercial geothermal power station was built there. Experimental generators were built in Beppu, Japan and the Geysers, California, in the 1920s, but Italy was the world's only industrial producer of geothermal electricity until 1958.

In 1958, New Zealand became the second major industrial producer of geothermal electricity when its Wairakei station was commissioned. Wairakei was the first station to use flash steam technology. Over the past 60 years, net fluid production has been in excess of 2.5 {km}^{3}. Subsidence at Wairakei-Tauhara has been an issue in a number of formal hearings related to environmental consents for expanded development of the system as a source of renewable energy.

In 1960, Pacific Gas and Electric began operation of the first successful geothermal electric power station in the United States at The Geysers in California. The original turbine lasted for more than 30 years and produced 11 MW net power.

An organic fluid based binary cycle power station was first demonstrated in 1967 in the Soviet Union and later introduced to the United States in 1981, following the 1970s energy crisis and significant changes in regulatory policies. This technology allows the use of temperature resources as low as 81 °C. In 2006, a binary cycle station in Chena Hot Springs, Alaska, came on-line, producing electricity from a record low fluid temperature of 57 °C (135 °F).

Geothermal electric stations have until recently been built exclusively where high-temperature geothermal resources are available near the surface. The development of binary cycle power plants and improvements in drilling and extraction technology may enable enhanced geothermal systems over a much greater geographical range. Demonstration projects are operational in Landau-Pfalz, Germany, and Soultz-sous-Forêts, France, while an earlier effort in Basel, Switzerland was shut down after it triggered earthquakes. Other demonstration projects are under construction in Australia, the United Kingdom, and the United States of America.

The thermal efficiency of geothermal electric stations is low, around 7–10%, because geothermal fluids are at a low temperature compared with steam from boilers. By the laws of thermodynamics this low temperature limits the efficiency of heat engines in extracting useful energy during the generation of electricity. Exhaust heat is wasted, unless it can be used directly and locally, for example in greenhouses, timber mills, and district heating. The efficiency of the system does not affect operational costs as it would for a coal or other fossil fuel plant, but it does factor into the viability of the station. In order to produce more energy than the pumps consume, electricity generation requires high-temperature geothermal fields and specialized heat cycles. Because geothermal power does not rely on variable sources of energy, unlike, for example, wind or solar, its capacity factor can be quite large – up to 96% has been demonstrated. However the global average capacity factor was 74.5% in 2008, according to the IPCC.
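
Capacity factor, mentioned above, is simply the electricity a plant actually generates over a period divided by what it would generate running continuously at its nameplate rating. A minimal Python sketch with hypothetical figures (not data for any real plant):

# Minimal sketch: capacity factor of a power plant (hypothetical figures).
nameplate_mw   = 100.0          # rated (nameplate) capacity in megawatts
hours_per_year = 8760           # hours in a non-leap year
energy_mwh     = 650_000.0      # electricity actually generated in a year, in MWh

capacity_factor = energy_mwh / (nameplate_mw * hours_per_year)
print(f"{capacity_factor:.1%}")  # about 74.2%, close to the 2008 global average cited above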

Additional Information

The United States leads the world in geothermal electricity-generating capacity—just over 4 gigawatts. That’s enough to power the equivalent of about 3 million U.S. homes.

To generate power from geothermal systems, three elements are needed:

* Heat—Abundant heat found in rocks deep underground, varying by depth, geology, and geographic location.
* Fluid—Sufficient fluid to carry heat from the rocks to the earth’s surface.
* Permeability—Small pathways that facilitate fluid movement through the hot rocks.

The presence of hot rocks, fluid, and permeability underground creates natural geothermal systems. Small underground pathways, such as fractures, conduct fluids through the hot rocks. In geothermal electricity generation, this fluid, carrying energy in the form of heat, can be drawn through wells to the earth’s surface. Once it has reached the surface, this fluid is used to drive turbines that produce electricity.

Conventional hydrothermal resources naturally contain all three elements. Sometimes, though, these conditions do not exist naturally—for instance, the rocks are hot, but they lack permeability or sufficient fluid flow. Enhanced geothermal systems (EGS) use human-made reservoirs to create the proper conditions for electricity generation by injecting fluid into the hot rocks. This creates new fractures and opens existing ones to enhance the size and connectivity of fluid pathways. Once this engineered reservoir is created, fluid can be injected into the subsurface and then drawn up through a production well to generate electricity using the same processes as a conventional hydrothermal system.

The 2019 GeoVision analysis concluded that, with advancements in EGS, geothermal could power more than 40 million U.S. homes by 2050 and provide heating and cooling solutions nationwide. The 2023 Enhanced Geothermal Shot™ analysis found that the potential was even higher: technical advances would enable geothermal energy to power the equivalent of more than 65 million U.S. homes.

GTO is also assessing opportunities to use sedimentary geothermal resources to produce electricity. Sedimentary rock formations commonly associated with oil and gas can also hold significant amounts of thermal energy. This creates opportunities to access additional geothermal resources and even to repurpose idle or unproductive oil and gas wells for geothermal electricity generation.

Figure-2-2.png


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2466 2025-02-18 00:03:56

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,518

Re: Miscellany

2365) Population

Gist

The world's population is more than three times larger than it was in the mid-twentieth century. The global human population reached 8.0 billion in mid-November 2022 from an estimated 2.5 billion people in 1950, adding 1 billion people since 2010 and 2 billion since 1998.

Summary

Population is the number of people that live in a country. It counts the resident population, defined as all nationals present in, or temporarily absent from the country, and aliens permanently settled in the country.

The population includes the following categories: national armed forces stationed abroad; merchant seamen at sea; diplomatic personnel located abroad; civilian aliens resident in the country; and displaced persons resident in the country. Excluded are the following categories: foreign armed forces stationed in the country; foreign diplomatic personnel located in the country; and civilian aliens temporarily in the country. For countries with overseas colonies, protectorates or other territorial possessions, their populations are generally excluded.

This indicator is measured in millions of persons.

Details

Population is the term typically used to refer to the number of people in a single area. Governments conduct a census to quantify the size of a resident population within a given jurisdiction. The term is also applied to non-human animals, microorganisms, and plants, and has specific uses within such fields as ecology and genetics.

Etymology

The word population is derived from the Late Latin populatio (a people, a multitude), which itself is derived from the Latin word populus (a people).

Use of the term:

Social sciences

In sociology and population geography, population refers to a group of human beings with some predefined feature in common, such as location, race, ethnicity, nationality, or religion.

Ecology

In ecology, a population is a group of organisms of the same species which inhabit the same geographical area and are capable of interbreeding. The area of a sexual population is the area where interbreeding is possible between any opposite-gender pair within the area and more probable than cross-breeding with individuals from other areas.

In humans, interbreeding is unrestricted by racial differences, as all humans belong to the same species, Homo sapiens.

In ecology, the population of a certain species in a certain area can be estimated using the Lincoln index, a mark-and-recapture method that infers the total population of an area from the numbers of marked and recaptured individuals observed.
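
The Lincoln index is a simple ratio estimate: if M individuals are first captured and marked, and a later sample of C individuals contains R marked ones, the population is estimated as N ≈ (M × C) ÷ R. A minimal Python sketch with made-up numbers (the function name is only for illustration):

# Minimal sketch of the Lincoln (mark-and-recapture) index, hypothetical numbers.
def lincoln_index(marked_first, caught_second, recaptured):
    """Estimate total population size N ~ (M * C) / R."""
    if recaptured == 0:
        raise ValueError("No recaptures: the estimate is undefined.")
    return marked_first * caught_second / recaptured

# 120 fish marked, 150 caught later, 30 of those carrying marks:
print(lincoln_index(120, 150, 30))   # 600.0 individuals estimated

Since one fifth of the second sample is marked, the 120 marked animals are taken to be one fifth of the whole population, giving an estimate of 600.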

Dynamics

Population dynamics is the type of mathematics used to model and study the size and age composition of populations as dynamical systems. Population dynamics is a branch of mathematical biology, and uses mathematical techniques such as differential equations to model behaviour. Population dynamics is also closely related to other mathematical biology fields such as epidemiology, and also uses techniques from evolutionary game theory in its modelling.

Genetics

In genetics, a population is often defined as a set of organisms in which any pair of members can breed together. They can thus routinely exchange gametes in order to have usually fertile progeny, and such a breeding group is also known therefore as a gamodeme. This also implies that all members belong to the same species. If the gamodeme is very large (theoretically, approaching infinity), and all gene alleles are uniformly distributed by the gametes within it, the gamodeme is said to be panmictic. Under this state, allele (gamete) frequencies can be converted to genotype (zygote) frequencies by expanding an appropriate quadratic equation, as shown by Sir Ronald Fisher in his establishment of quantitative genetics.
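
Writing p and q for the frequencies of two alleles (symbols introduced here only for illustration, with p + q = 1), the quadratic expansion referred to above is {(p + q)}^{2} = {p}^{2} + 2pq + {q}^{2} = 1, so under panmixia the expected genotype frequencies are {p}^{2} (AA), 2pq (Aa) and {q}^{2} (aa). For example, with p = 0.7 and q = 0.3, the expected genotype frequencies are 0.49, 0.42 and 0.09.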

This seldom occurs in nature: localization of gamete exchange – through dispersal limitations, preferential mating, cataclysm, or other cause – may lead to small actual gamodemes which exchange gametes reasonably uniformly within themselves but are virtually separated from their neighboring gamodemes. However, there may be low frequencies of exchange with these neighbors. This may be viewed as the breaking up of a large sexual population (panmictic) into smaller overlapping sexual populations. This failure of panmixia leads to two important changes in overall population structure: (1) the component gamodemes vary (through gamete sampling) in their allele frequencies when compared with each other and with the theoretical panmictic original (this is known as dispersion, and its details can be estimated using expansion of an appropriate binomial equation); and (2) the level of homozygosity rises in the entire collection of gamodemes. The overall rise in homozygosity is quantified by the inbreeding coefficient (f or φ). All homozygotes are increased in frequency – both the deleterious and the desirable. The mean phenotype of the gamodemes collection is lower than that of the panmictic original – which is known as inbreeding depression. It is most important to note, however, that some dispersion lines will be superior to the panmictic original, while some will be about the same, and some will be inferior. The probabilities of each can be estimated from those binomial equations. In plant and animal breeding, procedures have been developed which deliberately utilize the effects of dispersion (such as line breeding, pure-line breeding, backcrossing). Dispersion-assisted selection leads to the greatest genetic advance (ΔG=change in the phenotypic mean), and is much more powerful than selection acting without attendant dispersion. This is so for both allogamous (random fertilization) and autogamous (self-fertilization) gamodemes.
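
The rise in homozygosity described above can be made concrete with a standard textbook result, stated here only for illustration and using the same p, q and inbreeding coefficient f as before: the genotype frequencies become {p}^{2} + fpq (AA), 2pq(1 - f) (Aa) and {q}^{2} + fpq (aa). When f = 0 this reduces to the panmictic expansion given earlier; as f rises, heterozygotes are exchanged for homozygotes of both kinds, which is exactly the overall increase in homozygosity that f quantifies.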

World human population

According to the UN, the world's population surpassed 8 billion on 15 November 2022, an increase of 1 billion since 12 March 2012. According to a separate estimate by the United Nations, Earth's population exceeded seven billion in October 2011. According to UNFPA, growth to such an extent offers unprecedented challenges and opportunities to all of humanity.

According to papers published by the United States Census Bureau, the world population hit 6.5 billion on 24 February 2006. The United Nations Population Fund designated 12 October 1999 as the approximate day on which world population reached 6 billion. This was about 12 years after the world population reached 5 billion in 1987, and six years after the world population reached 5.5 billion in 1993. The population of countries such as Nigeria is not even known to the nearest million, so there is a considerable margin of error in such estimates.

Researcher Carl Haub calculated that a total of over 100 billion people have probably been born in the last 2000 years.

Predicted growth and decline

Population growth increased significantly as the Industrial Revolution gathered pace from 1700 onwards. The last 50 years have seen a yet more rapid increase in the rate of population growth due to medical advances and substantial increases in agricultural productivity, particularly beginning in the 1960s, made by the Green Revolution. In 2017 the United Nations Population Division projected that the world's population would reach about 9.8 billion in 2050 and 11.2 billion in 2100.

In the future, the world's population is expected to peak at some point, after which it will decline due to economic reasons, health concerns, land exhaustion and environmental hazards. According to one report, it is very likely that the world's population will stop growing before the end of the 21st century. Further, there is some likelihood that the population will actually begin to decline before 2100. Population has already declined in the last decade or two in Eastern Europe, the Baltics and in the former Commonwealth of Independent States.

The population pattern of less-developed regions of the world in recent years has been marked by gradually declining birth rates. These followed an earlier sharp reduction in death rates. This transition from high birth and death rates to low birth and death rates is often referred to as the demographic transition.

Population planning

Human population planning is the practice of changing the rate of growth of a human population. Historically, human population control has been implemented with the goal of limiting the rate of population growth. In the time from the 1950s to the 1980s, concerns about global population growth and its effects on poverty, environmental degradation, and political stability led to efforts to reduce population growth rates. While population control can involve measures that make people's lives better by giving them greater control of their reproduction, a few programs, most notably the Chinese government's one-child per family policy, have resorted to coercive measures.

In the 1970s, tension grew between population control advocates and women's health activists who advanced women's reproductive rights as part of a human rights-based approach. Growing opposition to the narrow population control focus led to a significant change in population control policies in the early 1980s.

Additional Information

Population, in human biology, is the whole number of inhabitants occupying an area (such as a country or the world) and continually being modified by increases (births and immigrations) and losses (deaths and emigrations). As with any biological population, the size of a human population is limited by the supply of food, the effect of diseases, and other environmental factors. Human populations are further affected by social customs governing reproduction and by the technological developments, especially in medicine and public health, that have reduced mortality and extended the life span.

Few aspects of human societies are as fundamental as the size, composition, and rate of change of their populations. Such factors affect economic prosperity, health, education, family structure, crime patterns, language, culture—indeed, virtually every aspect of human society is touched upon by population trends.

The study of human populations is called demography—a discipline with intellectual origins stretching back to the 18th century, when it was first recognized that human mortality could be examined as a phenomenon with statistical regularities. Especially influential was English economist and demographer Thomas Malthus, who is best known for his theory that population growth will always tend to outrun the food supply and that betterment of humankind is impossible without stern limits on reproduction. This thinking is commonly referred to as Malthusianism.

Demography casts a multidisciplinary net, drawing insights from economics, sociology, statistics, medicine, biology, anthropology, and history. Its chronological sweep is lengthy: limited demographic evidence is available for many centuries into the past, and reliable data for several hundred years are available for many regions. The present understanding of demography makes it possible to project (with caution) population changes several decades into the future.

The basic components of population change

At its most basic level, the components of population change are few indeed. A closed population (that is, one in which immigration and emigration do not occur) can change according to the following simple equation: the population (closed) at the end of an interval equals the population at the beginning of the interval, plus births during the interval, minus deaths during the interval. In other words, only addition by births and reduction by deaths can change a closed population.

Populations of nations, regions, continents, islands, or cities, however, are rarely closed in the same way. If the assumption of a closed population is relaxed, in- and out-migration can increase and decrease population size in the same way as do births and deaths; thus, the population (open) at the end of an interval equals the population at the beginning of the interval, plus births during the interval, minus deaths, plus in-migrants, minus out-migrants. Hence the study of demographic change requires knowledge of fertility (births), mortality (deaths), and migration. These, in turn, affect not only population size and growth rates but also the composition of the population in terms of such attributes as gender, age, ethnic or racial composition, and geographic distribution.
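
The two balancing equations above translate directly into arithmetic. The Python sketch below, with made-up figures, is only an illustration of the bookkeeping:

# Minimal sketch of the demographic balancing equations above (hypothetical figures).
def closed_population(start, births, deaths):
    """Closed population: only births add and only deaths subtract."""
    return start + births - deaths

def open_population(start, births, deaths, in_migrants, out_migrants):
    """Open population: migration also adds and subtracts."""
    return start + births - deaths + in_migrants - out_migrants

print(closed_population(1_000_000, 14_000, 9_000))              # 1,005,000
print(open_population(1_000_000, 14_000, 9_000, 6_000, 2_000))  # 1,009,000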

popln.jpg


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2467 2025-02-19 00:02:16

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,518

Re: Miscellany

2366) Radioactive Decay

Gist

Radioactive decay is the emission of energy in the form of ionizing radiation. Ionizing radiation can affect the atoms in living things, so it poses a health risk by damaging tissue and DNA in genes. The ionizing radiation that is emitted can include alpha particles.

Summary

Radioactivity is a property exhibited by certain types of matter of emitting energy and subatomic particles spontaneously. It is, in essence, an attribute of individual atomic nuclei.

An unstable nucleus will decompose spontaneously, or decay, into a more stable configuration but will do so only in a few specific ways by emitting certain particles or certain forms of electromagnetic energy. Radioactive decay is a property of several naturally occurring elements as well as of artificially produced isotopes of the elements. The rate at which a radioactive element decays is expressed in terms of its half-life; i.e., the time required for one-half of any given quantity of the isotope to decay. Half-lives range from more than {10}^{24} years for some nuclei to less than {10}^{-23} second.  The product of a radioactive decay process—called the daughter of the parent isotope—may itself be unstable, in which case it, too, will decay. The process continues until a stable nuclide has been formed.

Details

Radioactive decay (also known as nuclear decay, radioactivity, radioactive disintegration, or nuclear disintegration) is the process by which an unstable atomic nucleus loses energy by radiation. A material containing unstable nuclei is considered radioactive. Three of the most common types of decay are alpha, beta, and gamma decay. The weak force is the mechanism that is responsible for beta decay, while the other two are governed by the electromagnetic and nuclear forces.

Radioactive decay is a random process at the level of single atoms. According to quantum theory, it is impossible to predict when a particular atom will decay, regardless of how long the atom has existed. However, for a significant number of identical atoms, the overall decay rate can be expressed as a decay constant or as a half-life. The half-lives of radioactive atoms have a huge range: from nearly instantaneous to far longer than the age of the universe.
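
The decay constant and the half-life mentioned above are linked by λ = ln 2 ÷ half-life, and the expected undecayed fraction after a time t is {e}^{-λt}. A minimal Python sketch with illustrative values:

# Minimal sketch: exponential radioactive decay (illustrative values only).
import math

def decay_constant(half_life):
    """Decay constant lambda = ln(2) / half-life."""
    return math.log(2) / half_life

def remaining_fraction(t, half_life):
    """Expected fraction of the original nuclei still undecayed after time t."""
    return math.exp(-decay_constant(half_life) * t)

# After one half-life, half remains; after three half-lives, one eighth remains.
print(remaining_fraction(1.0, 1.0))   # ~0.5
print(remaining_fraction(3.0, 1.0))   # ~0.125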

The decaying nucleus is called the parent radionuclide (or parent radioisotope), and the process produces at least one daughter nuclide. Except for gamma decay or internal conversion from a nuclear excited state, the decay is a nuclear transmutation resulting in a daughter containing a different number of protons or neutrons (or both). When the number of protons changes, an atom of a different chemical element is created.

There are 28 naturally occurring chemical elements on Earth that are radioactive, consisting of 35 radionuclides (seven elements have two different radionuclides each) that date before the time of formation of the Solar System. These 35 are known as primordial radionuclides. Well-known examples are uranium and thorium, but also included are naturally occurring long-lived radioisotopes, such as potassium-40. Each of the heavy primordial radionuclides participates in one of the four decay chains.

History of discovery

Henri Poincaré laid the seeds for the discovery of radioactivity through his interest in and studies of X-rays, which significantly influenced physicist Henri Becquerel. Radioactivity was discovered in 1896 by Becquerel and independently by Marie Curie, while working with phosphorescent materials. These materials glow in the dark after exposure to light, and Becquerel suspected that the glow produced in cathode-ray tubes by X-rays might be associated with phosphorescence. He wrapped a photographic plate in black paper and placed various phosphorescent salts on it. All results were negative until he used uranium salts. The uranium salts caused a blackening of the plate in spite of the plate being wrapped in black paper. These radiations were given the name "Becquerel Rays".

It soon became clear that the blackening of the plate had nothing to do with phosphorescence, as the blackening was also produced by non-phosphorescent salts of uranium and by metallic uranium. It became clear from these experiments that there was a form of invisible radiation that could pass through paper and was causing the plate to react as if exposed to light.

At first, it seemed as though the new radiation was similar to the then recently discovered X-rays. Further research by Becquerel, Ernest Rutherford, Paul Villard, Pierre Curie, Marie Curie, and others showed that this form of radioactivity was significantly more complicated. Rutherford was the first to realize that all such elements decay in accordance with the same mathematical exponential formula. Rutherford and his student Frederick Soddy were the first to realize that many decay processes resulted in the transmutation of one element to another. Subsequently, the radioactive displacement law of Fajans and Soddy was formulated to describe the products of alpha and beta decay.

The early researchers also discovered that many other chemical elements, besides uranium, have radioactive isotopes. A systematic search for the total radioactivity in uranium ores also guided Pierre and Marie Curie to isolate two new elements: polonium and radium. Except for the radioactivity of radium, the chemical similarity of radium to barium made these two elements difficult to distinguish.

Marie and Pierre Curie's study of radioactivity is an important factor in science and medicine. After their research on Becquerel's rays led them to the discovery of both radium and polonium, they coined the term "radioactivity" to define the emission of ionizing radiation by some heavy elements. (Later the term was generalized to all elements.) Their research on the penetrating rays in uranium and the discovery of radium launched an era of using radium for the treatment of cancer. Their exploration of radium could be seen as the first peaceful use of nuclear energy and the start of modern nuclear medicine.

Additional Information

Radioactive decay is the emission of energy in the form of ionizing radiation. The ionizing radiation that is emitted can include alpha particles, beta particles and/or gamma rays. Radioactive decay occurs in unbalanced atoms called radionuclides.

Elements in the periodic table can take on several forms. Some of these forms are stable; other forms are unstable. Typically, the most stable form of an element is the most common in nature. However, all elements have an unstable form. Unstable forms emit ionizing radiation and are radioactive. There are some elements with no stable form that are always radioactive, such as uranium. Elements that emit ionizing radiation are called radionuclides.

When it decays, a radionuclide transforms into a different atom - a decay product. The atoms keep transforming to new decay products until they reach a stable state and are no longer radioactive. The majority of radionuclides only decay once before becoming stable. Those that decay in more than one step are called series radionuclides. The series of decay products created to reach this balance is called the decay chain.

Each series has its own unique decay chain. The decay products within the chain are always radioactive. Only the final, stable atom in the chain is not radioactive. Some decay products are a different chemical element.

Every radionuclide has a specific decay rate, which is measured in terms of "half-life." Radioactive half-life is the time required for half of the radioactive atoms present to decay. Some radionuclides have half-lives of mere seconds, but others have half-lives of hundreds or millions or billions of years.

Radioactive-Decay-Feature-1-678x378.png


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2468 2025-02-19 20:43:28

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,518

Re: Miscellany

2367) Irritable Bowel Syndrome

Gist

What is IBS?

* Irritable bowel syndrome (IBS) is a common condition that affects the digestive system.
* It causes symptoms like stomach cramps, bloating, diarrhoea and constipation. These tend to come and go over time, and can last for days, weeks or months at a time.
* It's usually a lifelong problem. It can be very frustrating to live with and can have a big impact on your everyday life.
* There's no cure, but diet changes and medicines can often help control the symptoms.
* The exact cause is unknown – it's been linked to things like food passing through your gut too quickly or too slowly, oversensitive nerves in your gut, stress and a family history of IBS.

Summary

Irritable bowel syndrome (IBS) is a functional gastrointestinal disorder characterized by a group of symptoms that commonly include abdominal pain, abdominal bloating and changes in the consistency of bowel movements. These symptoms may occur over a long time, sometimes for years. IBS can negatively affect quality of life and may result in missed school or work or reduced productivity at work. Disorders such as anxiety, major depression, and myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) are common among people with IBS.

The cause of IBS is not known but multiple factors have been proposed to lead to the condition. Theories include combinations of "gut–brain axis" problems, alterations in gut motility, visceral hypersensitivity, infections including small intestinal bacterial overgrowth, neurotransmitters, genetic factors, and food sensitivity. Onset may be triggered by a stressful life event, or an intestinal infection. In the latter case, it is called post-infectious irritable bowel syndrome.

Diagnosis is based on symptoms in the absence of worrisome features and once other potential conditions have been ruled out. Worrisome or "alarm" features include onset at greater than 50 years of age, weight loss, blood in the stool, or a family history of inflammatory bowel disease. Other conditions that may present similarly include celiac disease, microscopic colitis, inflammatory bowel disease, bile acid malabsorption, and colon cancer.

Treatment of IBS is carried out to improve symptoms. This may include dietary changes, medication, probiotics, and counseling. Dietary measures include increasing soluble fiber intake, or a diet low in fermentable oligosaccharides, disaccharides, monosaccharides, and polyols (FODMAPs). The "low FODMAP" diet is meant for short to medium term use and is not intended as a life-long therapy. The medication loperamide may be used to help with diarrhea while laxatives may be used to help with constipation. There is strong clinical-trial evidence for the use of antidepressants, often in lower doses than that used for depression or anxiety, even in patients without comorbid mood disorder. Tricyclic antidepressants such as amitriptyline or nortriptyline and medications from the selective serotonin reuptake inhibitor (SSRI) group may improve overall symptoms and reduce pain. Patient education and a good doctor–patient relationship are an important part of care.

About 10–15% of people in the developed world are believed to be affected by IBS. The prevalence varies according to country (from 1.1% to 45.0%) and criteria used to define IBS; however the average global prevalence is 11.2%. It is more common in South America and less common in Southeast Asia. In the Western world, it is twice as common in women as men and typically occurs before age 45. However, women in East Asia are not more likely than their male counterparts to have IBS, indicating much lower rates among East Asian women. Similarly, men from South America, South Asia and Africa are just as likely to have IBS as women in those regions, if not more so. The condition appears to become less common with age. IBS does not affect life expectancy or lead to other serious diseases. The first description of the condition was in 1820, while the current term irritable bowel syndrome came into use in 1944.

Details

Irritable bowel syndrome (IBS) is a group of digestive symptoms that can cause persistent discomfort. People can often reduce the symptoms with diet and lifestyle changes.

IBS is a chronic condition that typically causes constipation, diarrhea, or a combination of both. People may also experience bloating and cramping.

Many factors may influence the development of IBS, from infections to psychological trauma.

What is irritable bowel syndrome?

IBS is a condition that affects the function of the digestive tract but with no visible damage or signs of disease in the intestines.

The condition is a result of a change or disturbance in bowel function, which then leads to symptoms such as abdominal pain and changes in bowel movements. These symptoms may come and go, or they may be persistent.

IBS symptoms

The signs and symptoms of IBS can vary from person to person and may include:

* diarrhea
* constipation
* bloating
* abdominal pain and cramping, which may reduce after passing a stool
* a feeling that the bowels are not empty after having a bowel movement
* passing mucus from the rectum

The same symptoms can affect adults and children.

Symptoms in males vs. females

IBS symptoms can be similar between the sexes, but a 2018 review of past research found that females are significantly more likely to have IBS-C and males more likely to have IBS-D. The reasons for this are not yet clear.

A 2022 review also notes that past research has found that men tend to experience more intense pain from IBS than women.

Red flag symptoms

IBS can have similar symptoms to more serious conditions. If a person has any of the following, they should seek medical advice as soon as they can:

* stool that is black or contains blood
* unexplained weight loss
* swelling or pain in one specific area of the abdomen
* swelling around the rectum
* abdominal pain at night
* progressively worsening symptoms

Types of IBS

Generally, people with IBS have symptoms that fall into one of three categories:

* IBS with constipation (IBS-C): This form of IBS mainly causes constipation. A person may have hard, lumpy stools or have difficulty passing them.
* IBS with diarrhea (IBS-D): This type of IBS mainly causes diarrhea. A person may experience an urgent need to go to the toilet, frequent bowel movements, or watery or loose stools.
* IBS alternating (IBS-A) or mixed (IBS-M): In these types, a person experiences both constipation and diarrhea.

What causes IBS?

Scientists are still learning about what leads some people to develop IBS, but there are a few factors that may raise the risk.

Infection

Research suggests a link between IBS and food poisoning and other gut infections. A 2017 study of over 20,000 people found that 10% who experienced an intestinal infection later developed IBS. For those who had protozoa or parasite infections, the rate was 41.9%.

Some microbes may have an impact on the immune system that leads to long-term changes in the gut.

Dysbiosis

Dysbiosis is the medical term for an imbalanced gut microbiome. The microbiome is an ecosystem of bacteria and other organisms that live in the intestines, which influence how humans digest and absorb nutrients. Disturbances to this ecosystem, including the use of antibiotics or infections, may play a role in IBS.

Stress or trauma

The gut and brain are connected, meaning a problem with one can affect the other. This is known as the gut-brain axis. Researchers are still learning how it works, but there is evidence that severe stress may affect gut health.

For example, a 2019 review of past studies, mostly involving veterans, found that people living with posttraumatic stress disorder (PTSD) are more likely to develop IBS.

Motility problems

Gut motility refers to the speed at which the intestines push food through the digestive tract. If motility is too fast or too slow, it may result in IBS symptoms.

Conditions affecting the nervous system, structural differences in anatomy, and some medications are examples of things that may affect motility.

Visceral hypersensitivity

The viscera, or intestines, contain nerves that convey pain signals to the brain. If these nerves become hypersensitive, a person may perceive pain that they otherwise would not.

What triggers IBS flare-ups?

A flare or flare-up refers to a period when a person’s IBS symptoms worsen. Several factors can trigger IBS symptoms.

For many, diet plays a role in triggering IBS flare-ups. However, it is important to note that this does not mean food is the cause of the IBS itself, nor that dietary changes will cure it. There are many reasons why certain foods may trigger symptoms.

For example, some people may have difficulty with FODMAPs, or “fermentable oligosaccharides, disaccharides, monosaccharides, and polyols.” These are a type of carbohydrate that can be more difficult to digest or that microbes in the gut may ferment, leading to bloating and other symptoms.

Some examples of high FODMAP foods include:

* beans
* onions
* garlic
* dried fruits, such as raisins or prunes
* wheat-based foods, such as bread
* artificially sweetened foods, such as gum or candy
* milk and soft cheeses

Others with IBS may react to foods that irritate the gut because they are spicy or oily, or because they contain too much or too little fiber.

No two people with IBS are the same. Even among people who do react to FODMAPs, not everyone will react to every FODMAP type.

Caffeine or alcohol

Some people find that caffeine can be an IBS trigger, meaning they react to foods and drinks such as:

* tea
* coffee
* chocolate
* energy drinks

For others, alcohol is an IBS trigger.

Stress

Some people find that experiencing stress or anxiety triggers or worsens their IBS.

This does not mean the condition is purely psychological, but for some, addressing mental health does improve symptoms.

Diagnosing IBS

Diagnosing IBS is often a process of elimination. First, doctors will ask about a person’s symptoms and may physically examine the abdomen. They may also ask a person to keep a diary to track symptoms over time as well as food intake.

If the symptoms point to a digestive condition, a doctor will try to rule out other conditions that produce symptoms similar to IBS, such as:

* small intestinal bacterial overgrowth
* lactose intolerance
* celiac disease
* IBD

If tests for other conditions are negative, the doctor may then diagnose IBS.

In recent years, researchers have developed a specific blood test to diagnose IBS-D, which may speed up diagnosis. However, this test is not widely available yet and does not work for IBS-C.

Additional Information

Irritable bowel syndrome (IBS) is a relatively common disorder of the intestines characterized by abdominal pain, intestinal gas, and altered bowel habits, including diarrhea, constipation, or both. Other symptoms may include abdominal pain that is relieved after defecation, mucus in the stools, or a sensation of incomplete rectal evacuation. IBS is caused by a motility disturbance of the small and large intestines; this disturbance may result from increased intestinal sensitivity to distension. Stress or the consumption of fatty foods, milk products, certain fruits or vegetables (e.g., broccoli and cabbage), alcohol, or caffeine may cause similar symptoms. Women with the disorder may experience an increase in symptoms during menstruation. Treatment of IBS includes relaxation, exercise, and avoidance of aggravating foods. Antidiarrheal medications or fibre supplements may help lessen symptoms. Although IBS may cause discomfort and emotional distress, the disorder does not result in any permanent intestinal damage.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2469 2025-02-20 00:09:41

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,518

Re: Miscellany

2368) Library and Information Science

Gist

Library and Information Science (LIS) is an interdisciplinary field of study that centers on the documentation that records our stories, memory, history, and knowledge. LIS professionals serve as custodians of printed materials, records, photographs, audiovisual materials, and ephemera, in both analog and digital form.

Summary

Library and Information Science (LIS) is an interdisciplinary field of study that centers on the documentation that records our stories, memory, history, and knowledge. LIS professionals serve as custodians of printed materials, records, photographs, audiovisual materials, and ephemera, in both analog and digital form. Librarians and other information professionals collect, organize, preserve, and provide access to these materials and are the stewards of the knowledge that they contain. We connect people to the resources that they need to understand their histories, communities, and the world around them. We advocate for free and open access to these resources and train folks to use these materials to better themselves and society as life-long learners.

An MLIS (Master of Library and Information Science) degree benefits working employees by equipping them with critical information management skills, allowing them to efficiently organize, access, and analyze data within their workplace, which can lead to increased productivity, better decision-making, and potential career advancement opportunities across various industries, even beyond traditional library roles.

What skills will I develop through an MLIS Degree?

* Enhanced research skills: advanced techniques to find and evaluate relevant information from diverse sources, including online databases, archives, and specialized publications, which is crucial for informed decision-making in any field.

* Information organization and management: expertise in structuring and classifying information using standardized systems, improving the efficiency of data storage and retrieval within an institution/organization/company.

* Data analysis and interpretation: skills in analyzing large datasets and presenting findings in a clear and concise manner, enabling a better understanding of trends and patterns within the workplace.

* Digital literacy: proficiency in various information technologies and digital tools, including organizational management systems, online platforms, and data visualization software, which are essential in today's digital environment.

* Project management: skills to effectively plan, execute, and monitor information-related projects within a team, contributing to successful outcomes.

* Communication and collaboration: strong communication skills to effectively present information to diverse audiences, as well as collaborate with colleagues across different departments.

LIS Professionals

LIS professionals work in public libraries, school libraries, college and university libraries, corporate libraries, law libraries, medical libraries, special collections, historical societies, community archives, museums and galleries, non-profits, corporations, or anywhere that there is a need to collect, organize, and access documents and information resources.

Details

Library and Information Science (LIS) comprises two interconnected disciplines that deal with information management: the organization, access, collection, and regulation of information, in both physical and digital forms.

Library science and information science are two originally distinct disciplines; however, they lie within the same field of study. Library science is applied information science: it is both an application and a subfield of information science. Due to this strong connection, the two terms are sometimes used synonymously.

Definition

Library science (previously termed library studies and library economy) is an interdisciplinary or multidisciplinary field that applies the practices, perspectives, and tools of management, information technology, education, and other areas to libraries; to the collection, organization, preservation, and dissemination of information resources; and to the political economy of information. Martin Schrettinger, a Bavarian librarian, coined the term for the discipline in his work (1808–1828) Versuch eines vollständigen Lehrbuchs der Bibliothek-Wissenschaft oder Anleitung zur vollkommenen Geschäftsführung eines Bibliothekars. Rather than classifying information based on nature-oriented elements, as had previously been done in his Bavarian library, Schrettinger organized books in alphabetical order. The first American school of library science was founded by Melvil Dewey at Columbia University in 1887.

Historically, library science has also included archival science. This includes: how information resources are organized to serve the needs of selected user groups; how people interact with classification systems and technology; how information is acquired, evaluated and applied by people in and outside libraries as well as cross-culturally; how people are trained and educated for careers in libraries; the ethics that guide library service and organization; the legal status of libraries and information resources; and the applied science of computer technology used in documentation and records management.

LIS should not be confused with information theory, the mathematical study of the concept of information. Library philosophy has been contrasted with library science as the study of the aims and justifications of librarianship as opposed to the development and refinement of techniques.

Education and training

Academic courses in library science include collection management, information systems and technology, research methods, user studies, information literacy, cataloging and classification, preservation, reference, statistics and management. Library science is constantly evolving, incorporating new topics like database management, information architecture and information management, among others.

With the mounting acceptance of Wikipedia as a valued and reliable reference source, many libraries, museums, and archives have introduced the role of Wikipedian in residence. As a result, some universities are including coursework relating to Wikipedia and Knowledge Management in their MLIS programs.

Becoming a library staff member does not always need a degree, and in some contexts the difference between being a library staff member and a librarian is the level of education. Most professional library jobs require a professional degree in library science or equivalent. In the United States and Canada the certification usually comes from a master's degree granted by an ALA-accredited institution. In Australia, a number of institutions offer degrees accepted by the ALIA (Australian Library and Information Association). Global standards of accreditation or certification in librarianship have yet to be developed.

United States and Canada

The Master of Library and Information Science (MLIS) is the master's degree that is required for most professional librarian positions in the United States and Canada. The MLIS was created after the older Master of Library Science (MLS) was reformed to reflect the information science and technology needs of the field. According to the American Library Association (ALA), "ALA-accredited degrees have [had] various names such as Master of Arts, Master of Librarianship, Master of Library and Information Studies, or Master of Science. The degree name is determined by the program. The [ALA] Committee for Accreditation evaluates programs based on their adherence to the Standards for Accreditation of Master's Programs in Library and Information Studies, not based on the name of the degree."

Types of librarianship:

Public

The study of librarianship for public libraries covers issues such as cataloging; collection development for a diverse community; information literacy; readers' advisory; community standards; public services-focused librarianship via community-centered programming; serving a diverse community of adults, children, and teens; intellectual freedom; censorship; and legal and budgeting issues. The public library as a commons or public sphere based on the work of Jürgen Habermas has become a central metaphor in the 21st century.

In the United States there are four different types of public libraries: association libraries, municipal public libraries, school district libraries, and special district public libraries. Each receives funding through different sources, each is established by a different set of voters, and not all are subject to municipal civil service governance.

School

The study of school librarianship covers library services for children from nursery and primary school through secondary school. In some regions, the local government may have stricter standards for the education and certification of school librarians (who are sometimes considered a special case of teacher) than for other librarians, and the educational program will include those local criteria. School librarianship may also include issues of intellectual freedom, pedagogy, information literacy, and how to build a cooperative curriculum with the teaching staff.

Academic

The study of academic librarianship covers library services for colleges and universities. Issues of special importance to the field may include copyright; technology; digital libraries and digital repositories; academic freedom; open access to scholarly works; and specialized knowledge of subject areas important to the institution and the relevant reference works. Librarians often focus individually, serving as liaisons to particular schools within a college or university. Academic librarians may be subject-specific librarians.

Some academic librarians are considered faculty, and hold academic ranks similar to those of professors, while others are not. In either case, the minimum qualification is a Master of Arts in Library Studies or a Master of Arts in Library Science. Some academic libraries may only require a master's degree in a specific academic field or a related field, such as educational technology.

Archival

The study of archives includes the training of archivists, librarians specially trained to maintain and build archives of records intended for historical preservation. Special issues include physical preservation, conservation, and restoration of materials and mass deacidification; specialist catalogs; solo work; access; and appraisal. Many archivists are also trained historians specializing in the period covered by the archive. There have been attempts to revive the concept of documentation and to speak of Library, information and documentation studies (or science).

The archival mission includes three major goals: to identify papers and records with enduring value, to preserve the identified papers, and to make the papers available to others. While libraries receive items individually, archival items usually become part of the archive's collection as a cohesive group. A major difference between the collections is that library collections typically comprise published items (books, magazines, etc.), while archival collections are usually unpublished works (letters, diaries, etc.). Library collections are created by many individuals, as each author and illustrator creates their own publication; in contrast, an archive usually collects the records of one person, family, institution, or organization, so archival items come from far fewer authors.

Behavior in an archive differs from behavior in other libraries. In most libraries, items are openly available to the public. Archival items almost never circulate, and someone interested in viewing documents must request them from the archivist and may only be able to view them in a closed reading room.

Special

Special libraries are libraries established to meet the highly specialized requirements of professional or business groups. A library may be considered special because of its specialized collection, its particular subject or group of users, or the type of parent organization it serves, as with medical libraries or law libraries.

The issues at these libraries are specific to their industries but may include solo work, corporate financing, specialized collection development, and extensive self-promotion to potential patrons. Special librarians have their own professional organization, the Special Libraries Association (SLA).

Some special libraries, such as the CIA Library, may contain classified works. It is a resource for employees of the Central Intelligence Agency: it holds over 125,000 written materials, subscribes to around 1,700 periodicals, and has collections in three areas: Historical Intelligence, Circulating, and Reference. In February 1997, three librarians working at the institution spoke to Information Outlook, a publication of the SLA, revealing that the library had been created in 1947 and describing its importance in disseminating information to employees, even with a small staff, and how it organizes its materials.

Preservation

Preservation librarians most often work in academic libraries. Their focus is on the management of preservation activities that seek to maintain access to content within books, manuscripts, archival materials, and other library resources. Examples of activities managed by preservation librarians include binding, conservation, digital and analog reformatting, digital preservation, and environmental monitoring.

History

Libraries have existed for many centuries but library science is a more recent phenomenon, as early libraries were managed primarily by academics.

17th and 18th century

The earliest text on "library operations", Advice on Establishing a Library was published in 1627 by French librarian and scholar Gabriel Naudé. Naudé wrote on many subjects including politics, religion, history, and the supernatural. He put into practice all the ideas put forth in Advice when given the opportunity to build and maintain the library of Cardinal Jules Mazarin.

In 1726 Gottfried Wilhelm Leibniz wrote Idea of Arranging a Narrower Library.

19th century

Martin Schrettinger wrote the second textbook (the first in Germany) on the subject from 1808 to 1829.

Some of the main tools used by LIS to provide access to resources originated in the 19th century to make information accessible by recording, identifying, and providing bibliographic control of printed knowledge. The origins of some of these tools were even earlier. In the 17th century, during the 'golden age of libraries', publishers and sellers seeking to take advantage of the burgeoning book trade developed descriptive catalogs of their wares for distribution – a practice that was adopted and further extended by many libraries of the time to cover areas like philosophy, the sciences, linguistics, and medicine.

Thomas Jefferson, whose library at Monticello consisted of thousands of books, devised a classification system inspired by the Baconian method, which grouped books more or less by subject rather than alphabetically, as it was previously done. The Jefferson collection provided the start of what became the Library of Congress.

The first American school of librarianship opened at Columbia University under the leadership of Melvil Dewey, noted for his 1876 decimal classification, on January 5, 1887, as the School of Library Economy. The term library economy was common in the U.S. until 1942, with the term, library science, predominant through much of the 20th century.

20th century

In the English-speaking world the term "library science" seems to have been used for the first time in India in the 1916 book Punjab Library Primer, written by Asa Don Dickinson and published by the University of Punjab, Lahore, Pakistan. This university was the first in Asia to begin teaching "library science". The Punjab Library Primer was the first textbook on library science published in English anywhere in the world. The first textbook in the United States was the Manual of Library Economy by James Duff Brown, published in 1903.

Later, the term was used in the title of S. R. Ranganathan's The Five Laws of Library Science, published in 1931, which contains Ranganathan's titular theory. Ranganathan is also credited with the development of the first major analytical-synthetic classification system, the colon classification.

In the United States, Lee Pierce Butler published his 1933 book An Introduction to Library Science (University of Chicago Press), where he advocated for research using quantitative methods and ideas in the social sciences with the aim of using librarianship to address society's information needs. He was one of the first faculty at the University of Chicago Graduate Library School, which changed the structure and focus of education for librarianship in the twentieth century. This research agenda went against the more procedure-based approach of the "library economy", which was mostly confined to practical problems in the administration of libraries.

In 1923, Charles C. Williamson, who was appointed by the Carnegie Corporation, published an assessment of library science education entitled "The Williamson Report", which concluded that universities should provide library science training. This report had a significant impact on library science training and education. Library research and practical work in the area of information science have remained largely distinct, both in training and in research interests.

William Stetson Merrill's A Code for Classifiers, released in several editions from 1914 to 1939, is an example of a more pragmatic approach, in which arguments stemming from in-depth knowledge of each field of study are employed to recommend a system of classification. While Ranganathan's approach was philosophical, it was also tied more closely to the day-to-day business of running a library. A reworking of Ranganathan's laws was published in 1995 that removes the constant references to books. Michael Gorman's Our Enduring Values: Librarianship in the 21st Century sets out the eight principles he sees as necessary for library professionals and incorporates knowledge and information in all their forms, allowing digital information to be considered.

From Library Science to LIS

By the late 1960s, mainly due to the meteoric rise of computing power and the new academic disciplines formed as a result, academic institutions began to add the term "information science" to their names. The first school to do this was at the University of Pittsburgh in 1964. More schools followed during the 1970s and 1980s, and by the 1990s almost all library schools in the US had added information science to their names. Although there are exceptions, similar developments have taken place in other parts of the world. In India, the Department of Library Science at the University of Madras (in the southern state of Tamil Nadu) became the Department of Library and Information Science in 1976. In Denmark, the 'Royal School of Librarianship' changed its English name to The Royal School of Library and Information Science in 1997.

21st century

The digital age has transformed how information is accessed and retrieved. "The library is now a part of a complex and dynamic educational, recreational, and informational infrastructure." Mobile devices and applications with wireless networking, high-speed computers and networks, and cloud computing have deeply affected and developed information science and information services. The evolution of library science maintains its mission of access equity and community space, as well as new means of information retrieval known as information literacy skills. All catalogs, databases, and a growing number of books are available on the Internet. In addition, the expanding free access to open access journals and sources such as Wikipedia has fundamentally changed how information is accessed.

Information literacy is the ability to "determine the extent of information needed, access the needed information effectively and efficiently, evaluate information and its sources critically, incorporate selected information into one's knowledge base, use information effectively to accomplish a specific purpose, and understand the economic, legal, and social issues surrounding the use of information, and access and use information ethically and legally."

In the early 2000s, dLIST, the Digital Library for Information Sciences and Technology, was established. It was the first open access archive for the multidisciplinary "library and information sciences", building a global scholarly communication consortium and the LIS Commons in order to increase the visibility of research literature, bridge the divide between the practice, teaching, and research communities, reduce uncitedness, and integrate scholarly work into the critical information infrastructures of archives, libraries, and museums.

Social justice, an important ethical value in librarianship, has in the 21st century become an important research area, if not a subdiscipline, of LIS.

Additional Information

Library science comprises the principles and practices of library operation and administration, and their study. Libraries have existed since ancient times, but only in the second half of the 19th century did library science emerge as a separate field of study. With the knowledge explosion in the 20th century, it was gradually subsumed under the more general field of information science.

By the second half of the 19th century, Western countries had experienced such a proliferation of books of all sorts that the nature of the librarian’s work was radically altered; being well-read was no longer a sufficient characteristic for the post. The librarian needed some means of easy and rapid identification as well as strong organizational and administrative skills, and the necessity for specialized training soon became clear. One of the earliest pioneers in library training in the United States was Melvil Dewey, who established the first training program for librarians in 1887. These training programs in the United States evolved into graduate programs in library education accredited by the American Library Association (ALA; founded 1876).

In the 20th century, advances in the means of collecting, organizing, and retrieving information changed the focus of libraries, enabling a great variety of institutions and organizations, as well as individuals, to conduct their own searches for information without the involvement of a library or library staff. As a result, universities began to offer combined graduate programs in library science and information science. These programs usually provide a master’s degree and may provide more advanced degrees, including doctorates. Particulars of admission and course requirements vary from school to school. In the United States and Canada, the appropriateness of graduate programs in library and information science in preparing students to become professional librarians is still ensured by accreditation by the ALA. Increasingly, however, graduates of these programs are finding themselves qualified for a variety of professional positions in other parts of the information industry.

In many countries the furtherance of librarianship and library systems is promoted by national and regional library associations. The Chicago-based ALA, for example, in addition to its promotion of library service and librarianship, has an extensive publishing program and holds annual national conferences. Professional associations of a similar nature exist throughout the world.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2470 2025-02-20 16:51:37

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,518

Re: Miscellany

2369) Fodder

Gist

Fodder is a type of food produced through farming or other agricultural processes for feeding domesticated animals such as cattle, buffalo, rabbits, and horses. Fodder is consumed by animals, not by human beings. Examples of fodder include grass, dried hay, and straw.

Summary

"Fodder" refers particularly to food given to the animals (including plants cut and carried to them), rather than that which they forage for themselves (called forage).

Fodder crops are those crops which are grown primarily as food for animals. Maize, wheat, rye and hay are some fodder crops. The waste material obtained after processing sugar beet is also used as animal feed.

Green fodder is rich in nutrients and is a primary source of vitamin A. Balanced feeding of green fodder improves animals' immune status and supports the respiratory tract and vision.

Fodder should be palatable and easily digestible, and it should not be injurious at the stage at which it is fed to cattle. It should be quick growing and early maturing, and it should give a high yield of green fodder.

Details

Fodder, also called provender, is any agricultural foodstuff used specifically to feed domesticated livestock, such as cattle, rabbits, sheep, horses, chickens and pigs. "Fodder" refers particularly to food given to the animals (including plants cut and carried to them), rather than that which they forage for themselves (called forage). Fodder includes hay, straw, silage, compressed and pelleted feeds, oils and mixed rations, and sprouted grains and legumes (such as bean sprouts, fresh malt, or spent malt). Most animal feed is from plants, but some manufacturers add ingredients to processed feeds that are of animal origin.

The worldwide animal feed trade produced 1.245 billion tons of compound feed in 2022 according to an estimate by the International Feed Industry Federation, with an annual growth rate of about 2%. The use of agricultural land to grow feed rather than human food can be controversial; some types of feed, such as corn (maize), can also serve as human food; those that cannot, such as grassland grass, may be grown on land that can be used for crops consumed by humans. In many cases the production of grass for cattle fodder is a valuable intercrop between crops for human consumption, because it builds the organic matter in the soil. When evaluating if this soil organic matter increase mitigates climate change, both permanency of the added organic matter as well as emissions produced during use of the fodder product have to be taken into account. Some agricultural byproducts fed to animals may be considered unsavory by humans.

Health concerns

In the past, bovine spongiform encephalopathy (BSE, or "mad cow disease") spread through the inclusion of ruminant meat and bone meal in cattle feed due to prion contamination. This practice is now banned in most countries where it has occurred. Some animals have a lower tolerance for spoiled or moldy fodder than others, and certain types of molds, toxins, or poisonous weeds inadvertently mixed into a feed source may cause economic losses due to sickness or death of the animals. The US Department of Health and Human Services regulates drugs of the Veterinary Feed Directive type that can be present within commercial livestock feed.

Droughts

Increasing intensities and frequencies of drought events put rangeland agriculture under pressure in semi-arid and arid geographic areas. Innovative emergency fodder production concepts have been reported, such as bush-based animal fodder production in Namibia. During extended dry periods, some farmers have used woody biomass fibre from encroacher bush as their primary source of cattle feed, adding locally-available supplements for nutrients as well as to improve palatability.

Production of sprouted grains as fodder

Fodder in the form of sprouted cereal grains (such as barley) and legumes can be grown in small and large quantities.

Systems have been developed recently that allow for many tons of sprouts to be produced each day, year round. Sprouted grains can significantly increase the nutritional value of the grain compared with feeding the ungerminated grain to stock. They use less water than traditional forage, making them ideal for drought conditions. Sprouted barley and other cereal grains can be grown hydroponically in a carefully-controlled environment. Hydroponically-grown sprouted fodder at 150 mm tall with a 50 mm root mat is at its peak for animal feed.

Although products such as barley are grain, when sprouted they are approved by the American Grassfed Association to be used as livestock feed.

Additional Information

Assam is a State in North East India, situated south of the Eastern Himalayas along the Brahmaputra and Barak river valleys. Extending east to 96°E longitude and from 24°8'N to 28°2'N latitude, it has an area of 78,438 square kilometres. The Livestock Sector of the Department covers animals such as cattle, buffaloes, goats, sheep, pigs, horses, and rabbits.

The Department is, at present, concerned with fodder production mainly for the crossbred cattle population, in order to increase their milk production.

In the future, due to the shrinkage of grazing land, open grazing may no longer be possible, and minimal open space may be available to only a negligible number of animals. In such a situation, the production and productivity of all animals would depend mainly on cultivated green fodder.

At present, the chief purpose of green fodder production is to make it available to cattle to increase the production of quality milk. In the coming years, however, it will have to encompass all aspects of animal rearing, namely quality milk, meat, and egg production, general animal health, and fertility and reproduction.

Hence, to cover all kinds of situations and all types of farm animals, the purview and scope of the Fodder Sector must be proportionately widened. The success of the livestock sector is, beyond any doubt, directly correlated with the production and supply of quality green fodder.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2471 Today 00:03:49

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,518

Re: Miscellany

2370) Welder

Gist

Welders join metals using a variety of techniques and processes. For example, in arc welding they use machinery that produces electrical currents to create heat and bond metals together. Welders usually choose a welding process based on a number of factors, such as the types of metals being joined.

A welder is a person or a piece of equipment that fuses materials together. The term welder refers to the operator; the machine is referred to as the welding power supply. The materials to be joined can be metals (such as steel, aluminum, brass, or stainless steel) or varieties of plastic or polymer.

Summary

Welding is a technique used for joining metallic parts, usually through the application of heat. This technique was discovered during efforts to manipulate iron into useful shapes. Welded blades were developed in the 1st millennium CE, the most famous being those produced by Arab armourers at Damascus, Syria. The process of carburization of iron to produce hard steel was known at this time, but the resultant steel was very brittle. The welding technique—which involved interlayering relatively soft and tough iron with high-carbon material, followed by hammer forging—produced a strong, tough blade.

In modern times the improvement in iron-making techniques, especially the introduction of cast iron, restricted welding to the blacksmith and the jeweler. Other joining techniques, such as fastening by bolts or rivets, were widely applied to new products, from bridges and railway engines to kitchen utensils.

Modern fusion welding processes are an outgrowth of the need to obtain a continuous joint on large steel plates. Riveting had been shown to have disadvantages, especially for an enclosed container such as a boiler. Gas welding, arc welding, and resistance welding all appeared at the end of the 19th century. The first real attempt to adopt welding processes on a wide scale was made during World War I. By 1916 the oxyacetylene process was well developed, and the welding techniques employed then are still used. The main improvements since then have been in equipment and safety. Arc welding, using a consumable electrode, was also introduced in this period, but the bare wires initially used produced brittle welds. A solution was found by wrapping the bare wire with asbestos and an entwined aluminum wire. The modern electrode, introduced in 1907, consists of a bare wire with a complex coating of minerals and metals. Arc welding was not universally used until World War II, when the urgent need for rapid means of construction for shipping, power plants, transportation, and structures spurred the necessary development work.

Resistance welding, invented in 1877 by Elihu Thomson, was accepted long before arc welding for spot and seam joining of sheet. Butt welding for chain making and joining bars and rods was developed during the 1920s. In the 1940s the tungsten-inert gas process, using a nonconsumable tungsten electrode to perform fusion welds, was introduced. In 1948 a new gas-shielded process utilized a wire electrode that was consumed in the weld. More recently, electron-beam welding, laser welding, and several solid-phase processes such as diffusion bonding, friction welding, and ultrasonic joining have been developed.

Basic principles of welding

A weld can be defined as a coalescence of metals produced by heating to a suitable temperature with or without the application of pressure, and with or without the use of a filler material.

In fusion welding a heat source generates sufficient heat to create and maintain a molten pool of metal of the required size. The heat may be supplied by electricity or by a gas flame. Electric resistance welding can be considered fusion welding because some molten metal is formed.

Solid-phase processes produce welds without melting the base material and without the addition of a filler metal. Pressure is always employed, and generally some heat is provided. Frictional heat is developed in ultrasonic and friction joining, and furnace heating is usually employed in diffusion bonding.

The electric arc used in welding is a high-current, low-voltage discharge generally in the range 10–2,000 amperes at 10–50 volts. An arc column is complex but, broadly speaking, consists of a cathode that emits electrons, a gas plasma for current conduction, and an anode region that becomes comparatively hotter than the cathode due to electron bombardment. A direct current (DC) arc is usually used, but alternating current (AC) arcs can be employed.

Total energy input in all welding processes exceeds that which is required to produce a joint, because not all the heat generated can be effectively utilized. Efficiencies vary from 60 to 90 percent, depending on the process; some special processes deviate widely from this figure. Heat is lost by conduction through the base metal and by radiation to the surroundings.
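As a rough illustration (not from the source text), the figures above can be combined into a simple arc-power estimate: multiplying arc current by arc voltage gives the electrical power of the arc, and the 60 to 90 percent efficiency range gives the portion of that power actually delivered to the joint rather than lost by conduction and radiation. The sketch below makes that arithmetic explicit; the 200 A, 25 V, 75 percent example values are hypothetical and chosen only to fall inside the ranges quoted above.

```python
# Minimal sketch: effective heat delivered to a weld joint, based on the
# current/voltage ranges (10-2,000 A, 10-50 V) and 60-90 % efficiency
# figures quoted in the surrounding text. Example values are hypothetical.

def effective_heat_input(current_a: float, voltage_v: float, efficiency: float) -> float:
    """Return the heat reaching the joint, in watts.

    current_a  -- arc current in amperes
    voltage_v  -- arc voltage in volts
    efficiency -- fraction of arc power reaching the joint (e.g. 0.6-0.9)
    """
    arc_power_w = current_a * voltage_v   # total electrical power of the arc
    return arc_power_w * efficiency       # portion not lost to conduction/radiation


if __name__ == "__main__":
    # A 200 A, 25 V arc at 75 % efficiency delivers about 3.75 kW to the joint.
    print(effective_heat_input(200, 25, 0.75))  # -> 3750.0
```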

Most metals, when heated, react with the atmosphere or other nearby metals. These reactions can be extremely detrimental to the properties of a welded joint. Most metals, for example, rapidly oxidize when molten. A layer of oxide can prevent proper bonding of the metal. Molten-metal droplets coated with oxide become entrapped in the weld and make the joint brittle. Some valuable materials added for specific properties react so quickly on exposure to the air that the metal deposited does not have the same composition as it had initially. These problems have led to the use of fluxes and inert atmospheres.

In fusion welding the flux has a protective role in facilitating a controlled reaction of the metal and then preventing oxidation by forming a blanket over the molten material. Fluxes can be active and help in the process or inactive and simply protect the surfaces during joining.

Inert atmospheres play a protective role similar to that of fluxes. In gas-shielded metal-arc and gas-shielded tungsten-arc welding an inert gas—usually argon—flows from an annulus surrounding the torch in a continuous stream, displacing the air from around the arc. The gas does not chemically react with the metal but simply protects it from contact with the oxygen in the air.

The metallurgy of metal joining is important to the functional capabilities of the joint. The arc weld illustrates all the basic features of a joint. Three zones result from the passage of a welding arc: (1) the weld metal, or fusion zone, (2) the heat-affected zone, and (3) the unaffected zone. The weld metal is that portion of the joint that has been melted during welding. The heat-affected zone is a region adjacent to the weld metal that has not been welded but has undergone a change in microstructure or mechanical properties due to the heat of welding. The unaffected material is that which was not heated sufficiently to alter its properties.

Weld-metal composition and the conditions under which it freezes (solidifies) significantly affect the ability of the joint to meet service requirements. In arc welding, the weld metal comprises filler material plus the base metal that has melted. After the arc passes, rapid cooling of the weld metal occurs. A one-pass weld has a cast structure with columnar grains extending from the edge of the molten pool to the centre of the weld. In a multipass weld, this cast structure may be modified, depending on the particular metal that is being welded.

The base metal adjacent to the weld, or the heat-affected zone, is subjected to a range of temperature cycles, and its change in structure is directly related to the peak temperature at any given point, the time of exposure, and the cooling rates. The types of base metal are too numerous to discuss here, but they can be grouped in three classes: (1) materials unaffected by welding heat, (2) materials hardened by structural change, (3) materials hardened by precipitation processes.

Welding produces stresses in materials. These forces are induced by contraction of the weld metal and by expansion and then contraction of the heat-affected zone. The unheated metal imposes a restraint on the above, and as contraction predominates, the weld metal cannot contract freely, and a stress is built up in the joint. This is generally known as residual stress, and for some critical applications must be removed by heat treatment of the whole fabrication. Residual stress is unavoidable in all welded structures, and if it is not controlled bowing or distortion of the weldment will take place. Control is exercised by welding technique, jigs and fixtures, fabrication procedures, and final heat treatment.

Details

A welder is a person or a piece of equipment that fuses materials together. The term welder refers to the operator; the machine is referred to as the welding power supply. The materials to be joined can be metals (such as steel, aluminum, brass, or stainless steel) or varieties of plastic or polymer. Welders typically need good dexterity and attention to detail, as well as technical knowledge about the materials being joined and best practices in the field.

Safety issues

Welding, without the proper precautions appropriate for the process, can be a dangerous and unhealthy practice. However, with the use of new technology and proper protection, the risks of injury and death associated with welding can be greatly reduced. Because many common welding procedures involve an open electric arc or a flame, the risk of burns is significant. To prevent them, welders wear personal protective equipment in the form of heavy leather gloves and protective long sleeve jackets to avoid exposure to extreme heat and flames. Additionally, the brightness of the weld area leads to a condition called arc eye in which ultraviolet light causes the inflammation of the cornea and can burn the retinas of the eyes. Full face welding helmets with dark face plates are worn to prevent this exposure, and in recent years, new helmet models have been produced that feature a faceplate that self-darkens upon exposure to high amounts of UV light. To protect bystanders, opaque welding curtains often surround the welding area. These curtains, made of a polyvinyl chloride plastic film, shield nearby workers from exposure to the UV light from the electric arc, but should not be used to replace the filter glass used in helmets.

Welders are also often exposed to dangerous gases and particulate matter. Processes like flux-cored arc welding and shielded metal arc welding produce smoke containing particles of various types of oxides, which in some cases can lead to medical conditions like metal fume fever. The size of the particles in question tends to influence the toxicity of the fumes, with smaller particles presenting a greater danger. Additionally, many processes produce fumes and various gases, most commonly carbon dioxide and ozone, that can prove dangerous if ventilation is inadequate. Furthermore, because the use of compressed gases and flames in many welding processes pose an explosion and fire risk, some common precautions include limiting the amount of oxygen in the air and keeping combustible materials away from the workplace. Welders with expertise in welding pressurized vessels, including submarine hulls, industrial boilers, and power plant heat exchangers and boilers, are generally referred to as boilermakers.

Many welders report getting small electrical shocks from their equipment. Welders occasionally work in damp, crowded environments and consider shocks to be "part of the job." Welders can be shocked by faulty conditions in the welding circuit, the work lead clamp, a grounded power tool on the bench, the workpiece, or the electrode. All of these types of shocks come from the welding electrode terminal. Often these shocks are minor and are misdiagnosed as an issue with a power tool or the power supply to the welder's area. However, the more likely cause is stray welding current, which occurs when current from the welding cables leaks into the welder's work area. Often this is not a serious problem; however, under the right circumstances it can be fatal to the welder or anyone else inside the work area. When a welder feels a shock, they should take a minute to inspect the welding cables and ensure that they are clean and dry and that there are no cracks or gouges in the rubber casing around the wire. These precautions may be life-saving to welders.

Additional Information

Ships, airplanes, skyscrapers, and bridges are built by melting many of their parts together into a single structure. This is done through welding, brazing, or soldering—processes that join two metallic surfaces by creating bonds between their constituent atoms. Examples range from connecting the contacts on a computer chip to joining the hull plates of a supertanker.

Although welding, brazing, and soldering are distinct processes, they have certain elements in common. All three processes frequently use a filler material between the surfaces, and all three apply heat to melt either the local surfaces or the filler. During the process the surfaces are protected from the oxygen in air, which tends to combine with the metallic atoms thus decreasing the strength of the bond, by the use of a chemical flux, which cleans the surfaces, or by immersion in a protective atmosphere.

Because they induce true intermolecular contact, these joining processes differ from simple adhesive processes such as gluing, in which two surfaces are held together by a nonmetallic chemical bonding agent. Welding provides the greatest junction strength. In a proper weld the joint will be as strong as the parent material—sometimes stronger.

Welding

In welding, two metal sections are normally joined by bringing their surfaces into contact under high temperature, high pressure, or both, depending on the application. Although most welds are made between similar metals, different compatible metals may also be welded.

In forge welding, the most ancient of the joining processes, the parts are first heated in a charcoal fire, hammered into shape, then hammered together to form the final piece. During the process a flux material is added to keep the surfaces clean and free of scale, or oxides. This works well for making small parts, but not for joining large sections.

Other forms of joining were developed, such as gas and arc welding, that offer advantages over traditional forge welding. Gas welding uses a flame to melt the local material and the filler. The filler is usually a metal rod that is allowed to flow between the parts to be joined. The flame also provides a protective atmosphere that discourages accumulation of oxides. Gas welding is used primarily for repairs in areas where portable equipment is an advantage.

Arc welding uses the intense heat of an electrical arc generated when a high current flows between the base metal and an electrode. Temperatures of up to 7,000° F (3,870° C) are applied to melt the local base and filler materials. Shielded metal-arc welding uses electrodes made of a coated metal filler wire. The electrical arc breaks down the coating to provide a protective atmosphere that both stabilizes the arc and acts as a flux. In gas-tungsten arc welding a non-melting tungsten electrode holds the arc, and an inert gas, such as nitrogen or argon, provides the protective atmosphere. Meanwhile a filler wire is fed through the electrode holder. Since no separate flux is used, the quality of the resulting weld depends heavily on the cleanliness of the initial surfaces. This process was initially developed for welding magnesium but is now used for many materials. Gas-metal arc welding is a similar process in which a filler material replaces the tungsten electrode.

In resistance welding a large electric current is passed through the surfaces to bring them almost to the melting point. Pressure is then applied to form the weld. Because this can be done within a few seconds, resistance welding is an economical process for mass manufacture. Thermite welding is an old process for joining thick sections, such as rails, or for repairing steel castings. Molten metal at temperatures of around 5,000° F (2,760° C) is applied locally in order to melt the surfaces of the metals; the metals then resolidify to form the final bond.

Recent developments for large-scale or rapid welding include submerged arc welding and electroslag welding. In the first process, a granular flux is applied in front of the welding electrode, and the arc is completely submerged in the flux. In electroslag welding the slag, or flux, is melted by the direct application of electrical current, without the creation of an arc. In both cases the tip of the electrode is melted by the intense current.

Much narrower welds, especially for thin parts, can be made by electron-beam welding and laser welding. In these processes, focused, high-intensity beams—as much as 5,000 times as intense as the current used in arc welding—induce rapid heating in a narrow region. As in the previous forms of welding, the material is locally melted and then bonds on solidification. Other welding techniques, used for special applications, include cold welding, friction welding, and explosive welding.

Brazing and Soldering

Brazing differs from welding in that it uses a filler having a lower melting point than the pieces to be joined. The molten filler flows between the pieces, then solidifies to form the bond. This process may also be used in soldering. The distinction between brazing and soldering is based on the metallurgical character of the joint and on the temperatures required. For filler melting temperatures above 840° F (450° C), the process is called brazing; below that temperature the process is known as soldering.
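As a small illustrative sketch (not the source's own material), the distinction above can be expressed as a threshold check: brazing and soldering both rely on a filler that melts below the pieces being joined, and the 840° F (450° C) filler melting point is the conventional dividing line between the two. The function and the example melting points below are hypothetical, chosen only to match the figures quoted in the text.

```python
# Minimal sketch of the classification rule described above.
# All temperatures are in degrees Celsius; example values are illustrative.

BRAZE_SOLDER_THRESHOLD_C = 450.0  # 840 deg F, per the text above

def classify_joining_process(filler_melt_c: float, base_melt_c: float) -> str:
    """Classify a filler-based joining process by melting temperatures."""
    if filler_melt_c >= base_melt_c:
        # Filler melts at or above the base metal, so the base itself melts:
        # characteristic of fusion welding rather than brazing or soldering
        # (a simplification of the source's distinction).
        return "welding"
    if filler_melt_c > BRAZE_SOLDER_THRESHOLD_C:
        return "brazing"
    return "soldering"


# Example: a tin-lead solder melting near 200 C on copper (melting near 1085 C)
# classifies as "soldering"; a silver braze alloy melting near 700 C on steel
# (melting near 1400 C) classifies as "brazing".
print(classify_joining_process(200, 1085))  # -> soldering
print(classify_joining_process(700, 1400))  # -> brazing
```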

Since brazing depends on capillary action to distribute the filler between the pieces, the distance between the parts must be very small—in the range of 0.001 to 0.005 inch (0.025 to 0.13 millimeter), and the surfaces must be precisely machined and carefully cleaned. The use of a flux, frequently borax, or of an inert atmosphere is also required.

In gas brazing, a filler rod or wire is melted by a torch and flows into the junction space. Alternatively, thin sheets of braze material, called shims, are fitted into the joint space prior to assembly and then melted, often in an electric furnace. Braze metal may also be deposited in a paste form along the joint during assembly and then heated. Brazing is often used for delicate assemblies to prevent the distortion that occurs at the high temperatures used in welding. The process can join almost all metals, but is most commonly used with alloys of copper, silver, aluminum, nickel, and zinc. Brazing may also be used to join metals and ceramics.

Furnace brazing uniformly heats the assembly in a furnace, usually in a protective atmosphere or in a vacuum. In salt-bath brazing, the assembly is immersed in a bath of molten salt that melts the braze, protects the parts, and acts as a flux. The parts may also be dipped into a bath of molten braze in a process called dip brazing. Other types of brazing employ a gas torch, infrared heating, or induction heating, applied to the joint section only.

Soldering usually uses tin-lead alloys that melt at temperatures below 500° F (260° C) to join metallic surfaces without melting the surfaces themselves. Because the resulting joints are weak, soldering is used primarily for gas-tight joints in low-pressure systems and for electrical contacts. Temperature control is critical in soldering. Too low a temperature or too short a heating period, and the bond will not form; too high a temperature or too long a heating period, and the joint may become brittle.

History

Soldering and brazing were used more than 5,000 years ago to make jewelry and statuary. Before 1000 BC iron was forge-welded for tools, weapons, and armor; however, the high temperatures required for modern welding processes became possible only with the development of electric power in the 19th century.

The first arc welding patent was awarded in Germany in 1885, and shortly thereafter the process spread to the United States. These early welds were of poor quality, however, because of the absence of good protective atmospheres. In 1886 the oxyacetylene torch was developed, and for the next 20 years gas-torch welding was the only effective means of welding pieces of steel. In 1907 the first covered electrodes were developed, but not until the 1930s was welding developed to the point where major steel structures could be reliably joined.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline
