Math Is Fun Forum


#1726 2023-04-09 13:12:51

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,463

Re: Miscellany

1629) Robot

Android (robot)

Summary

An android is a humanoid robot or other artificial being often made from a flesh-like material. Historically, androids were completely within the domain of science fiction and frequently seen in film and television, but advances in robot technology now allow the design of functional and realistic humanoid robots.

Terminology

The Oxford English Dictionary traces the earliest use (as "Androides") to Ephraim Chambers' 1728 Cyclopaedia, in reference to an automaton that St. Albertus Magnus allegedly created. By the late 1700s, "androides", elaborate mechanical devices resembling humans performing human activities, were displayed in exhibit halls. The term "android" appears in US patents as early as 1863 in reference to miniature human-like toy automatons. The term android was used in a more modern sense by the French author Auguste Villiers de l'Isle-Adam in his work Tomorrow's Eve (1886). This story features an artificial humanlike robot named Hadaly. As said by the officer in the story, "In this age of Realien advancement, who knows what goes on in the mind of those responsible for these mechanical dolls." The term entered English pulp science fiction with Jack Williamson's The Cometeers (1936), and the distinction between mechanical robots and fleshy androids was popularized by Edmond Hamilton's Captain Future stories (1940–1944).

Although Karel Čapek's robots in R.U.R. (Rossum's Universal Robots) (1921)—the play that introduced the word robot to the world—were organic artificial humans, the word "robot" has come to primarily refer to mechanical humans, animals, and other beings. The term "android" can mean either one of these, while a cyborg ("cybernetic organism" or "bionic man") would be a creature that is a combination of organic and mechanical parts.

The term "droid", popularized by George Lucas in the original Star Wars film and now used widely within science fiction, originated as an abridgment of "android", but has been used by Lucas and others to mean any robot, including distinctly non-human form machines like R2-D2. The word "android" was used in Star Trek: The Original Series episode "What Are Little Girls Made Of?" The abbreviation "andy", coined as a pejorative by writer Philip K. Dickinson in his novel Do Androids Dream of Electric Sheep?, has seen some further usage, such as within the TV series Total Recall 2070.

While the term "android" is used in reference to human-looking robots in general (not necessarily male-looking humanoid robots), a robot with a female appearance can also be referred to as a gynoid. Besides, one can refer to robots without alluding to their sexual appearance by calling them anthrobots (a portmanteau of anthrōpos and robot; see anthrobotics) or anthropoids (short for anthropoid robots; the term humanoids is not appropriate because it is already commonly used to refer to human-like organic species in the context of science fiction, futurism and speculative astrobiology).

Authors have used the term android in more diverse ways than robot or cyborg. In some fictional works, the difference between a robot and android is only superficial, with androids being made to look like humans on the outside but with robot-like internal mechanics. In other stories, authors have used the word "android" to mean a wholly organic, yet artificial, creation. Other fictional depictions of androids fall somewhere in between.

Eric G. Wilson, who defines an android as a "synthetic human being", distinguishes between three types of android, based on their body's composition:

* the mummy type – made of "dead things" or "stiff, inanimate, natural material", such as mummies, puppets, dolls and statues
* the golem type – made from flexible, possibly organic material, including golems and homunculi
* the automaton type – made from a mix of dead and living parts, including automatons and robots

Although human morphology is not necessarily the ideal form for working robots, the fascination in developing robots that can mimic it can be found historically in the assimilation of two concepts: simulacra (devices that exhibit likeness) and automata (devices that have independence).

Details

A robot is any automatically operated machine that replaces human effort, though it may not resemble human beings in appearance or perform functions in a humanlike manner. By extension, robotics is the engineering discipline dealing with the design, construction, and operation of robots.

The concept of artificial humans predates recorded history, but the modern term robot derives from the Czech word robota (“forced labour” or “serf”), used in Karel Čapek’s play R.U.R. (1920). The play’s robots were manufactured humans, heartlessly exploited by factory owners until they revolted and ultimately destroyed humanity. Whether they were biological, like the monster in Mary Shelley’s Frankenstein (1818), or mechanical was not specified, but the mechanical alternative inspired generations of inventors to build electrical humanoids.

The word robotics first appeared in Isaac Asimov’s science-fiction story Runaround (1942). Along with Asimov’s later robot stories, it set a new standard of plausibility about the likely difficulty of developing intelligent robots and the technical and social problems that might result. Runaround also contained Asimov’s famous Three Laws of Robotics:

1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

This article traces the development of robots and robotics.

Industrial robots

Though not humanoid in form, machines with flexible behaviour and a few humanlike physical attributes have been developed for industry. The first stationary industrial robot was the programmable Unimate, an electronically controlled hydraulic heavy-lifting arm that could repeat arbitrary sequences of motions. It was invented in 1954 by the American engineer George Devol and was developed by Unimation Inc., a company founded in 1956 by American engineer Joseph Engelberger. In 1959 a prototype of the Unimate was introduced in a General Motors Corporation die-casting factory in Trenton, New Jersey. In 1961 Condec Corp. (after purchasing Unimation the preceding year) delivered the world’s first production-line robot to the GM factory; it had the unsavoury task (for humans) of removing and stacking hot metal parts from a die-casting machine. Unimate arms continue to be developed and sold by licensees around the world, with the automobile industry remaining the largest buyer.

More advanced computer-controlled electric arms guided by sensors were developed in the late 1960s and 1970s at the Massachusetts Institute of Technology (MIT) and at Stanford University, where they were used with cameras in robotic hand-eye research. Stanford’s Victor Scheinman, working with Unimation for GM, designed the first such arm used in industry. Called PUMA (Programmable Universal Machine for Assembly), these arms have been used since 1978 to assemble automobile subcomponents such as dash panels and lights. PUMA was widely imitated, and its descendants, large and small, are still used for light assembly in electronics and other industries. Since the 1990s small electric arms have become important in molecular biology laboratories, precisely handling test-tube arrays and pipetting intricate sequences of reagents.

Mobile industrial robots also first appeared in 1954. In that year a driverless electric cart, made by Barrett Electronics Corporation, began pulling loads around a South Carolina grocery warehouse. Such machines, dubbed AGVs (Automatic Guided Vehicles), commonly navigate by following signal-emitting wires entrenched in concrete floors. In the 1980s AGVs acquired microprocessor controllers that allowed more complex behaviours than those afforded by simple electronic controls. In the 1990s a new navigation method became popular for use in warehouses: AGVs equipped with a scanning laser triangulate their position by measuring reflections from fixed retro-reflectors (at least three of which must be visible from any location).
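
To make this concrete, below is a minimal Python sketch of fixing a position from three reflectors. It assumes the scanner returns distances to three retro-reflectors at known positions (real laser-guided systems often work with bearing angles and many more reflectors); the function name, layout and numbers are invented for illustration.

[code]
# Hypothetical sketch: estimate an AGV's (x, y) position from measured
# ranges to three retro-reflectors at known positions (trilateration).
def locate(reflectors, ranges):
    """reflectors: [(x1, y1), (x2, y2), (x3, y3)]; ranges: [r1, r2, r3]."""
    (x1, y1), (x2, y2), (x3, y3) = reflectors
    r1, r2, r3 = ranges
    # Subtracting the circle equations pairwise gives two linear equations in x and y.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1          # assumes the reflectors are not collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# A reflector layout and a vehicle actually sitting at (2.0, 1.0):
print(locate([(0, 0), (10, 0), (0, 8)], [5**0.5, 65**0.5, 53**0.5]))
[/code]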

Although industrial robots first appeared in the United States, the business did not thrive there. Unimation was acquired by Westinghouse Electric Corporation in 1983 and shut down a few years later. Cincinnati Milacron, Inc., the other major American hydraulic-arm manufacturer, sold its robotics division in 1990 to the Swedish firm of Asea Brown Boveri Ltd. Adept Technology, Inc., spun off from Stanford and Unimation to make electric arms, is the only remaining American firm. Foreign licensees of Unimation, notably in Japan and Sweden, continue to operate, and in the 1980s other companies in Japan and Europe began to vigorously enter the field. The prospect of an aging population and consequent worker shortage induced Japanese manufacturers to experiment with advanced automation even before it gave a clear return, opening a market for robot makers. By the late 1980s Japan—led by the robotics divisions of Fanuc Ltd., Matsushita Electric Industrial Company, Ltd., Mitsubishi Group, and Honda Motor Company, Ltd.—was the world leader in the manufacture and use of industrial robots. High labour costs in Europe similarly encouraged the adoption of robot substitutes, with industrial robot installations in the European Union exceeding Japanese installations for the first time in 2001.

Robot toys

Lack of reliable functionality has limited the market for industrial and service robots (built to work in office and home environments). Toy robots, on the other hand, can entertain without performing tasks very reliably, and mechanical varieties have existed for thousands of years. (See automaton.) In the 1980s microprocessor-controlled toys appeared that could speak or move in response to sounds or light. More advanced ones in the 1990s recognized voices and words. In 1999 the Sony Corporation introduced a doglike robot named AIBO, with two dozen motors to activate its legs, head, and tail, two microphones, and a colour camera all coordinated by a powerful microprocessor. More lifelike than anything before, AIBOs chased coloured balls and learned to recognize their owners and to explore and adapt. Although the first AIBOs cost $2,500, the initial run of 5,000 sold out immediately over the Internet.

Robotics research

Dexterous industrial manipulators and industrial vision have roots in advanced robotics work conducted in artificial intelligence (AI) laboratories since the late 1960s. Yet, even more than with AI itself, these accomplishments fall far short of the motivating vision of machines with broad human abilities. Techniques for recognizing and manipulating objects, reliably navigating spaces, and planning actions have worked in some narrow, constrained contexts, but they have failed in more general circumstances.

The first robotics vision programs, pursued into the early 1970s, used statistical formulas to detect linear boundaries in robot camera images and clever geometric reasoning to link these lines into boundaries of probable objects, providing an internal model of their world. Further geometric formulas related object positions to the joint angles needed for a robot arm to grasp them, or the steering and drive motions to get a mobile robot around (or to) the object. This approach was tedious to program and frequently failed when unplanned image complexities misled the first steps. An attempt in the late 1970s to overcome these limitations by adding an expert system component for visual analysis mainly made the programs more unwieldy—substituting complex new confusions for simpler failures.

In the mid-1980s Rodney Brooks of the MIT AI lab used this impasse to launch a highly visible new movement that rejected the effort to have machines create internal models of their surroundings. Instead, Brooks and his followers wrote computer programs with simple subprograms that connected sensor inputs to motor outputs, each subprogram encoding a behaviour such as avoiding a sensed obstacle or heading toward a detected goal. There is evidence that many insects function largely this way, as do parts of larger nervous systems. The approach resulted in some very engaging insectlike robots, but—as with real insects—their behaviour was erratic, as their sensors were momentarily misled, and the approach proved unsuitable for larger robots. Also, this approach provided no direct mechanism for specifying long, complex sequences of actions—the raison d’être of industrial robot manipulators and surely of future home robots (note, however, that in 2004 iRobot Corporation sold more than one million robot vacuum cleaners capable of simple insectlike behaviours, a first for a service robot).
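
The flavour of this sensor-to-motor style can be shown with a small Python sketch. The behaviour names, sensor fields and thresholds below are invented for illustration and are not taken from any actual subsumption-architecture code; each behaviour maps current readings directly to a motor command, with no internal world model.

[code]
def avoid_obstacle(sensors):
    # Highest priority: turn away if something is close ahead.
    if sensors["front_range_m"] < 0.3:
        return {"left_wheel": -0.2, "right_wheel": 0.2}   # spin in place
    return None

def seek_goal(sensors):
    # Lower priority: steer toward a detected beacon bearing (radians).
    turn = 0.5 * sensors["goal_bearing_rad"]
    return {"left_wheel": 0.4 - turn, "right_wheel": 0.4 + turn}

BEHAVIOURS = [avoid_obstacle, seek_goal]   # ordered by priority

def control_step(sensors):
    for behaviour in BEHAVIOURS:
        command = behaviour(sensors)
        if command is not None:            # the first behaviour that fires wins
            return command
    return {"left_wheel": 0.0, "right_wheel": 0.0}

print(control_step({"front_range_m": 1.5, "goal_bearing_rad": 0.2}))
[/code]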

Meanwhile, other researchers continue to pursue various techniques to enable robots to perceive their surroundings and track their own movements. One prominent example involves semiautonomous mobile robots for exploration of the Martian surface. Because of the long transmission times for signals, these “rovers” must be able to negotiate short distances between interventions from Earth.

A particularly interesting testing ground for fully autonomous mobile robot research is football (soccer). In 1993 an international community of researchers organized a long-term program to develop robots capable of playing this sport, with progress tested in annual machine tournaments. The first RoboCup games were held in 1997 in Nagoya, Japan, with teams entered in three competition categories: computer simulation, small robots, and midsize robots. Merely finding and pushing the ball was a major accomplishment, but the event encouraged participants to share research, and play improved dramatically in subsequent years. In 1998 Sony began providing researchers with programmable AIBOs for a new competition category; this gave teams a standard reliable prebuilt hardware platform for software experimentation.

While robot football has helped to coordinate and focus research in some specialized skills, research involving broader abilities is fragmented. Sensors—sonar and laser rangefinders, cameras, and special light sources—are used with algorithms that model images or spaces by using various geometric shapes and that attempt to deduce what a robot’s position is, where and what other things are nearby, and how different tasks can be accomplished. Faster microprocessors developed in the 1990s have enabled new, broadly effective techniques. For example, by statistically weighing large quantities of sensor measurements, computers can mitigate individually confusing readings caused by reflections, blockages, bad illumination, or other complications. Another technique employs “automatic” learning to classify sensor inputs—for instance, into objects or situations—or to translate sensor states directly into desired behaviour. Connectionist neural networks containing thousands of adjustable-strength connections are the most famous learners, but smaller, more-specialized frameworks usually learn faster and better. In some, a program that does the right thing as nearly as can be prearranged also has “adjustment knobs” to fine-tune the behaviour. Another kind of learning remembers a large number of input instances and their correct responses and interpolates between them to deal with new inputs. Such techniques are already in broad use for computer software that converts speech into text.
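
As one concrete, much simplified illustration of statistically weighting many measurements, the Python sketch below fuses noisy range readings by inverse variance, so that a reading assigned a large noise estimate barely moves the result; the numbers are invented.

[code]
def fuse(measurements):
    """measurements: list of (value, variance) pairs; returns (estimate, variance)."""
    weights = [1.0 / var for _, var in measurements]
    estimate = sum(w * v for (v, _), w in zip(measurements, weights)) / sum(weights)
    return estimate, 1.0 / sum(weights)

# Three sonar readings of the same wall; the last one (a spurious reflection)
# carries a much larger assumed variance and so has little influence.
print(fuse([(2.02, 0.01), (1.98, 0.01), (3.50, 1.0)]))
[/code]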

The future

Numerous companies are working on consumer robots that can navigate their surroundings, recognize common objects, and perform simple chores without expert custom installation. Perhaps about the year 2020 the process will have produced the first broadly competent “universal robots” with lizardlike minds that can be programmed for almost any routine chore. With anticipated increases in computing power, by 2030 second-generation robots with trainable mouselike minds may become possible. Besides application programs, these robots may host a suite of software “conditioning modules” that generate positive- and negative-reinforcement signals in predefined circumstances.

By 2040 computing power should make third-generation robots with monkeylike minds possible. Such robots would learn from mental rehearsals in simulations that would model physical, cultural, and psychological factors. Physical properties would include shape, weight, strength, texture, and appearance of things and knowledge of how to handle them. Cultural aspects would include a thing’s name, value, proper location, and purpose. Psychological factors, applied to humans and other robots, would include goals, beliefs, feelings, and preferences. The simulation would track external events and would tune its models to keep them faithful to reality. This should let a robot learn by imitation and afford it a kind of consciousness. By the middle of the 21st century, fourth-generation robots may exist with humanlike mental power able to abstract and generalize. Researchers hope that such machines will result from melding powerful reasoning programs to third-generation machines. Properly educated, fourth-generation robots are likely to become intellectually formidable.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1727 2023-04-09 18:50:11

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,463

Re: Miscellany

1630) Arecibo Telescope

Summary

Completed in 1963 and stewarded by the U.S. National Science Foundation since the 1970s, Arecibo Observatory has contributed to many important scientific discoveries, including the first discovery of a binary pulsar, the first discovery of an extrasolar planet, the composition of the ionosphere, and the characterization of the properties and orbits of a number of potentially hazardous asteroids.

Location: Arecibo Observatory’s principal observing facilities are located 19 kilometers south of the city of Arecibo, Puerto Rico.

Operation and management: Arecibo Observatory is operated and managed for NSF by the Arecibo Observatory Management Team, which is led by the University of Central Florida in partnership with the Universidad Ana G. Méndez and Yang Enterprises Inc.

Funding: NSF has invested over $200 million in Arecibo operations, management and maintenance over the past two decades. The observatory has undergone two major upgrades in its lifetime (during the 1970s and 1990s), which NSF funded (along with partial NASA support), totaling $25 million. Since Fiscal Year 2018, NSF has contributed around $7.5 million per year to Arecibo operations and management.

Technical specifications and observational capabilities: Arecibo Observatory’s principal astronomical research instrument is a 1,000 foot (305 meter) fixed spherical radio/radar telescope. Its frequency capabilities range from 50 megahertz to 11 gigahertz. Transmitters include an S-band (2,380 megahertz) radar system for planetary studies, a 430 megahertz radar system for atmospheric science studies, and a heating facility for ionospheric research.

Details

The Arecibo Telescope was a 305 m (1,000 ft) spherical reflector radio telescope built into a natural sinkhole at the Arecibo Observatory located near Arecibo, Puerto Rico. A cable-mount steerable receiver and several radar transmitters for emitting signals were mounted 150 m (492 ft) above the dish. Completed in November 1963, the Arecibo Telescope was the world's largest single-aperture telescope for 53 years, until it was surpassed in July 2016 by the Five-hundred-meter Aperture Spherical Telescope (FAST) in Guizhou, China.

The Arecibo Telescope was primarily used for research in radio astronomy, atmospheric science, and radar astronomy, as well as for programs that search for extraterrestrial intelligence (SETI). Scientists wanting to use the observatory submitted proposals that were evaluated by independent scientific referees. NASA also used the telescope for near-Earth object detection programs. The observatory, funded primarily by the National Science Foundation (NSF) with partial support from NASA, was managed by Cornell University from its completion in 1963 until 2011, after which it was transferred to a partnership led by SRI International. In 2018, a consortium led by the University of Central Florida assumed operation of the facility.

The telescope's unique and futuristic design led to several appearances in film, gaming and television productions, such as for the climactic fight scene in the James Bond film GoldenEye (1995). It is one of the 116 pictures included in the Voyager Golden Record. It has been listed on the US National Register of Historic Places since 2008. The center was named an IEEE Milestone in 2001.

Since 2006, the NSF has reduced its funding commitment to the observatory, leading academics to push for additional funding support to continue its programs. The telescope was damaged by Hurricane Maria in 2017 and was affected by earthquakes in 2019 and 2020. Two cable breaks, one in August 2020 and a second in November 2020, threatened the structural integrity of the support structure for the suspended platform and damaged the dish. Due to uncertainty over the remaining strength of the other cables supporting the suspended structure, and the risk of collapse owing to further failures making repairs dangerous, the NSF announced on November 19, 2020, that the telescope would be decommissioned and dismantled, with the observatory's other facilities, such as its 12-metre radio telescope and the LIDAR facility, remaining operational. Before it could be decommissioned, several of the remaining support cables suffered a critical failure and the support structure, antenna, and dome assembly all fell into the dish at 7:55 a.m. local time on December 1, 2020, destroying the telescope. In October 2022 the NSF announced that it would not rebuild the telescope or a similar observatory at the site.

General information

Comparison of the Arecibo (top), FAST (middle) and RATAN-600 (bottom) radio telescopes at the same scale.

The telescope's main collecting dish had the shape of a spherical cap 1,000 feet (305 m) in diameter with an 869-foot (265 m) radius of curvature, and was constructed inside a karst sinkhole. The dish surface was made of 38,778 perforated aluminum panels, each about 3 by 7 feet (1 by 2 m), supported by a mesh of steel cables. The ground beneath supported shade-tolerant vegetation.

The telescope had three radar transmitters, with effective isotropic radiated powers (EIRPs) of 22 TW (continuous) at 2380 MHz, 3.2 TW (pulse peak) at 430 MHz, and 200 MW at 47 MHz, as well as an ionospheric modification facility operating at 5.1 and 8.175 MHz.

The dish remained stationary, while receivers and transmitters were moved to the proper focal point of the telescope to aim at the desired target. As a spherical mirror, the reflector's focus is along a line rather than at one point. As a result, complex line feeds were implemented to carry out observations, with each line feed covering a narrow frequency band measuring 10–45 MHz. A limited number of line feeds could be used at any one time, limiting the telescope's flexibility.

The receiver was on an 820-tonne (900-short-ton) platform suspended 150 m (492 ft) above the dish by 18 main cables running from three reinforced concrete towers (six cables per tower), one 111 m (365 ft) high and the other two 81 m (265 ft) high, placing their tops at the same elevation. Each main cable was an 8 cm (3.1 in) diameter bundle containing 160 wires, with the bundle painted over and dry air continuously blown through to prevent corrosion due to the humid tropical climate. The platform had a rotating, bow-shaped track 93 m (305 ft) long, called the azimuth arm, carrying the receiving antennas and secondary and tertiary reflectors. This allowed the telescope to observe any region of the sky in a forty-degree cone of visibility about the local zenith (between −1 and 38 degrees of declination). Puerto Rico's location near the Northern Tropic allowed the Arecibo telescope to view the planets in the Solar System over the northern half of their orbit. The round-trip light time to objects beyond Saturn is longer than the 2.6-hour time that the telescope could track a celestial position, preventing radar observations of more distant objects.
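
The 2.6-hour figure can be checked with a rough back-of-the-envelope calculation (a Python sketch; Saturn's distance is approximated here as 9.5 astronomical units):

[code]
AU_KM = 149_597_870.7       # one astronomical unit in kilometres
C_KM_S = 299_792.458        # speed of light in km/s

distance_km = 9.5 * AU_KM                     # approximate Earth-Saturn distance
round_trip_hours = 2 * distance_km / C_KM_S / 3600
print(f"{round_trip_hours:.1f} hours")        # prints about 2.6 hours
[/code]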

Additional Information:

Arecibo Observatory

Arecibo Observatory was an astronomical observatory located 16 km (10 miles) south of the town of Arecibo in Puerto Rico. It was the site of the world’s largest single-unit radio telescope until FAST in China began observations in 2016. This instrument, built in the early 1960s, employed a 305-metre (1,000-foot) spherical reflector consisting of perforated aluminum panels that focused incoming radio waves on movable antenna structures positioned about 168 metres (550 feet) above the reflector surface. The antenna structures could be moved in any direction, making it possible to track a celestial object in different regions of the sky. The observatory also has an auxiliary 12-metre (39-foot) radio telescope and a high-power laser transmitting facility used to study Earth’s atmosphere.

In August 2020 a cable holding up the central platform of the 305-metre telescope snapped and made a hole in the dish. After a second cable broke in November 2020, the National Science Foundation (NSF) announced that the telescope was in danger of collapse and the cables could not be safely repaired. The NSF thus planned to decommission the observatory. On December 1, 2020, days after the NSF’s announcement, the cables broke, and the central platform collapsed into the dish. In October 2022 the NSF announced that it would not rebuild the telescope but would instead build an educational centre at the site.

Scientists using the Arecibo Observatory discovered the first extrasolar planets around the pulsar B1257+12 in 1992. The observatory also produced detailed radar maps of the surface of Venus and Mercury and discovered that Mercury rotated every 59 days instead of 88 days and so did not always show the same face to the Sun. American astronomers Russell Hulse and Joseph H. Taylor, Jr., used Arecibo to discover the first binary pulsar. They showed that it was losing energy through gravitational radiation at the rate predicted by physicist Albert Einstein’s theory of general relativity, and they won the Nobel Prize for Physics in 1993 for their discovery.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1728 2023-04-10 13:17:38

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,463

Re: Miscellany

1631) Retirement home

Gist

A retirement home – sometimes called an old people's home or old age home, although old people's home can also refer to a nursing home – is a multi-residence housing facility intended for the elderly. Typically, each person or couple in the home has an apartment-style room or suite of rooms.

Details

A retirement home – sometimes called an old people's home or old age home, although old people's home can also refer to a nursing home – is a multi-residence housing facility intended for the elderly. Typically, each person or couple in the home has an apartment-style room or suite of rooms. Additional facilities are provided within the building. This can include facilities for meals, gatherings, recreation activities, and some form of health or hospital care. A place in a retirement home can be paid for on a rental basis, like an apartment, or can be bought in perpetuity on the same basis as a condominium.

A retirement home differs from a nursing home primarily in the level of medical care given. Retirement communities, unlike retirement homes, offer separate and autonomous homes for residents.

Retirement homes offer meal-making and some personal care services. Assisted living facilities, memory care facilities and nursing homes can all be referred to as retirement homes. The cost of living in a retirement home varies from $25,000 to $100,000 per year, although it can exceed this range, according to Senior Living Near Me's senior housing guide.

In the United Kingdom, there were about 750,000 places across 25,000 retirement housing schemes in 2021 with a forecast that numbers would grow by nearly 10% over the next five years.

United States

Proper design is integral to the experience within retirement homes, especially for those experiencing dementia. Wayfinding and spatial orientation become difficult for residents with dementia, causing confusion, agitation and a general decline in physical and mental wellbeing.

Signage

Those living with dementia often display difficulty with distinguishing the relevance of information within signage. This phenomenon can be attributed to a combination of fixative behaviors as well as a tendency towards non-discriminatory reading. Therefore, in creating appropriate signage for retirement homes, we must first consider the who, what, when, where, and why of the design and placement of signage.

Considering the “who” of the user requires an understanding of those who interact with North American care homes. This group includes staff and visitors; however, understandable wayfinding is most important for residents experiencing dementia. This then leads to “what” kind of information should be presented. Important information for staff, visitors, and patients covers a great variety, and altogether the amount of signage required directly conflicts with the ideal of reducing distraction, overstimulation, and non-discriminatory reading for those within retirement homes.

This is where the “when”, “where”, and “why” of signage must be addressed. In deciding “when” information should be presented, Tetsuya argues that it is “important that essential visual information be provided at a relatively early stage in walking routes.” Therefore, we can assume that immediately relevant information, such as the direction of available facilities, should be placed near the entrance of patient rooms or at the end of hallways housing patient rooms. This observation also leads into “where” appropriate placement would be for information, and “why” it is being presented.

With regard to wayfinding signage, making navigation as understandable as possible can be achieved by avoiding distraction while navigating. Addressing this, Romedi Passini suggests that “graphic wayfinding information notices along circulation routes should be clear and limited in number and other information should be placed somewhere else.” Signage not related to wayfinding can be distracting if placed nearby and detract from the effectiveness of wayfinding signage. Instead, Passini suggests “to create little alcoves specifically designed for posting public announcements, invitations, and publicity.” These alcoves would best be placed in areas of low stimulation, as they would be better understood in a context that is not overwhelming. In a study by Kristen Day, areas of high stimulation were “found to occur in elevators, corridors, nursing stations, bathing rooms, and other residents’ rooms, whereas low stimulation has been observed in activity and dining rooms”. As such, we can assume that activity and dining rooms would be the best place for these alcoves.

Architectural cues

Another relevant method of wayfinding is the presence of architectural cues within North American senior retirement homes. This method is most often considered during the design of new senior care centers, however there are still multiple items that can easily be implemented within existing care homes as well. Architectural cues can impact residents by communicating purpose through the implied use of a setting or object, assisting in navigation without the need for cognitive mapping, and making areas more accessible and less distressing for those with decreased mobility.

We will investigate how architectural cues communicate purpose and influence the behavior of residents. In a case study by Passini, “a patient, seeing a doorbell (for night use) at the hospital, immediately decided to ring”. This led to the conclusion that “architectural elements … determine to a certain extent the behavior of less independent patients.” In considering the influence of architectural cues on residents, this becomes an important observation, as it suggests that positive behavior can be encouraged through the careful planning of rooms. This claim is further supported in a case study by Day, in which “frequency of toilet use increased dramatically when toilets were visibly accessible to residents.” Having toilets placed within the sight lines of residents encourages more frequent visits to the washroom, lessening the burden on nursing staff as well as leading to increased health of the residents. This communication of purpose through learned behavior can translate into creating more legible interior design as well. Through the use of distinctive furniture and flooring, such as a bookshelf in a communal living room, the functionality and differentiation of spaces can become much easier for residents to navigate. Improving environmental legibility can also be useful in assisting with navigation within a care home.

Assistance in navigation through reducing the need for complicated cognitive mapping is an asset that can be achieved in multiple ways within care centers. Visual landmarks existing in both architectural and interior design help provide differentiation between spaces. Burton notes “residents reported that...landmarks (features such as clocks and plants at key sections of corridors) [were useful in wayfinding]”. Navigating using distinct landmarks can also help define individual resident rooms. Tetsuya suggests that “doors of residents' rooms should have differentiated characteristics” in order to help residents identify their own rooms. This can be done through the use of personal objects placed on or beside doorways, or by providing distinctive doors for each room.

Finally, considering accessibility is integral in designing architecture within care homes. Many members of the senior community require the use of equipment and mobility aids. As such, requirements of these items must be considered in designing a senior specific space. Open and clear routes of travel benefit the user by clearly directing residents along the path and reducing difficulty caused by the use of mobility aids. Similarly, creating shorter routes of travel by moving fundamental facilities such as the dining room closer to patient rooms has also been shown to reduce anxiety and distress. Moving between spaces becomes simpler, avoiding high stimulation areas such as elevators while also assisting wayfinding by making a simpler, smaller layout. Each of these methods can be achieved through the use of open core spaces. These spaces integrate multiple rooms into a single open concept space, "giving visual access and allowing a certain understanding of space without having to integrate into an ensemble that is perceived in parts, which is the most difficult aspect of cognitive mapping". In integrating more open core spaces into North American senior facilities, spaces become more accessible and easier to navigate.

Additional Information

Old age, also called senescence, in human beings, is the final stage of the normal life span. Definitions of old age are not consistent from the standpoints of biology, demography (conditions of mortality and morbidity), employment and retirement, and sociology. For statistical and public administrative purposes, however, old age is frequently defined as 60 or 65 years of age or older.

Old age has a dual definition. It is the last stage in the life processes of an individual, and it is an age group or generation comprising a segment of the oldest members of a population. The social aspects of old age are influenced by the relationship of the physiological effects of aging and the collective experiences and shared values of that generation to the particular organization of the society in which it exists.

There is no universally accepted age that is considered old among or within societies. Often discrepancies exist as to what age a society may consider old and what members in that society of that age and older may consider old. Moreover, biologists are not in agreement about the existence of an inherent biological cause for aging. However, in most contemporary Western countries, 60 or 65 is the age of eligibility for retirement and old-age social programs, although many countries and societies regard old age as occurring anywhere from the mid-40s to the 70s.

Social programs

State institutions to aid the elderly have existed in varying degrees since the time of the ancient Roman Empire. England in 1601 enacted the Poor Law, which recognized the state’s responsibility to the aged, although programs were carried out by local church parishes. An amendment to this law in 1834 instituted workhouses for the poor and aged, and in 1925 England introduced social insurance for the aged regulated by statistical evaluations. In 1940 programs for the aged came under England’s welfare state system.

In the 1880s Otto von Bismarck in Germany introduced old-age pensions whose model was followed by most other western European countries. Today more than 100 nations have some form of social security program for the aged. The United States was one of the last countries to institute such programs. Not until the Social Security Act of 1935 was formulated to relieve hardships caused by the Great Depression were the elderly granted old-age pensions. For the most part, these state programs, while alleviating some burdens of aging, still do not bring older people to a level of income comparable to that of younger people.

Physiological effects

The physiological effects of aging differ widely among individuals. However, chronic ailments, especially aches and pains, are more prevalent than acute ailments, requiring older people to spend more time and money on medical problems than younger people. The rising cost of medical care has caused growing concern among older people and societies in general, resulting in constant reevaluation and reform of institutions and programs designed to aid the elderly with these expenses.

In ancient Rome and medieval Europe the average life span is estimated to have been between 20 and 30 years. Life expectancy today has expanded in historically unprecedented proportions, greatly increasing the numbers of people who survive over the age of 65. Therefore, the instances of medical problems associated with aging, such as certain kinds of cancer and heart disease, have increased, giving rise to greater consideration, both in research and in social programs, for accommodating this increase.

Certain aspects of sensory and perceptual skills, muscular strength, and certain kinds of memory tend to diminish with age, rendering older people unsuitable for some activities. There is, however, no conclusive evidence that intelligence deteriorates with age, but rather that it is more closely associated with education and standard of living. Sexual activity tends to decrease with age, but if an individual is healthy there is no age limit for its continuance.

Many of the myths surrounding the process of aging are being invalidated by increased studies in gerontology, but there still is not sufficient information to provide adequate conclusions.

Demographic and socioeconomic influences

In general the social status of an age group is related to its effective influence in its society, which is associated with that group’s function in productivity. In agrarian societies the elderly have a status of respectability. Their life experiences and knowledge are regarded as valuable, especially in preliterate societies where knowledge is orally transmitted. The range of activities in these societies allows the elderly to continue to be productive members of their communities.

In industrialized nations the status of the elderly has altered as the socioeconomic conditions have changed, tending to reduce the status of the elderly as a society becomes more technologically oriented. Since physical disability is less a factor in productive capability in industrialized countries, this reduction in social status is thought to have been generated by several interrelated factors: the numbers of still able-bodied older workers outstripping the number of available employment opportunities, the decline in self-employment which allows a worker to gradually decrease activity with age, and the continual introduction of new technology requiring special training and education.

Although in certain fields old age is still considered an asset, particularly in the political arena, older people are increasingly being forced into retirement before their productive years are over, causing problems in their psychological adaptations to old age. Retirement is not regarded unfavourably in all instances, but its economic limitations tend to further remove older people from the realm of influence and raise problems in the extended use of leisure time and housing. As a consequence, financial preparation for retirement has become an increased concern for individuals and society. For an essay on retirement, medical care, and other issues affecting the elderly, see John Kenneth Galbraith’s Notes on Aging, a Britannica sidebar by the distinguished economist, ambassador, and public servant.

Familial relationships tend to be the focus of the elderly’s attention. However, as the family structure in industrialized countries has changed in the past 100 years from a unit encompassing several generations living in close proximity to self-contained nuclear families of only parents and young children, older people have become isolated from younger people and each other. Studies have shown that as a person ages he or she prefers to remain in the same locale. However, the tendency for young people in industrialized countries to be highly mobile has forced older people to decide whether to move to keep up with their families or to remain in neighbourhoods which also change, altering their familiar patterns of activity. Although most older people do live within an hour from their closest child, industrialized societies are faced with formulating programs to accommodate increasing numbers of older people who function independently of their families.

A significant factor in the social aspects of old age concerns the values and education of the generation itself. In industrialized countries especially, where changes occur more rapidly than in agrarian societies, a generation born 65 years ago may find that the dominant mores, expectations, definitions of the quality of life, and roles of older people have changed considerably by the time it reaches old age. Formal education, which usually takes place in the early years and forms collective opinions and mores, tends to enhance the difficulties in adapting to old age. However, resistance to change, which is often associated with the elderly, is being shown to be less an inability to change than a trend in older people to regard life with a tolerant attitude. Apparent passivity may actually be a choice based on experience, which has taught older people to perceive certain aspects of life as unchangeable. Adult education programs are beginning to close the generation gap; however, as each successive generation reaches old age, bringing with it its particular biases and preferences, new problems arise requiring new social accommodations.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1729 2023-04-11 14:01:39

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,463

Re: Miscellany

1632) Random Number Generator/Generation

Summary:

Random Number Generator

A random number generator is a hardware device or software algorithm that generates a number that is taken from a limited or unlimited distribution and outputs it. The two main types of random number generators are pseudo random number generators and true random number generators.

Pseudo Random Number Generators

Random number generators are typically software pseudo random number generators. Their outputs are not truly random numbers; instead, they rely on algorithms to mimic the selection of a value and so approximate true randomness. With a pseudo random number generator, the user sets the distribution, or scope from which the random number is selected (e.g. lowest to highest), and the number is instantly presented.
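
In Python, for example, this workflow looks roughly like the following minimal sketch, using the standard library's default pseudo random number generator:

[code]
import random

lowest, highest = 1, 100
print(random.randint(lowest, highest))    # uniform integer between 1 and 100
print(random.uniform(0.0, 1.0))           # uniform float between 0.0 and 1.0
[/code]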

The output values from a pseudo random number generator are adequate for use in most applications, but they should not always be relied on for secure cryptographic implementations. For such uses, a cryptographically secure pseudo random number generator is called for.

True Random Number Generators

A true random number generator — a hardware random number generator (HRNG) or true random number generator (TRNG) — is cryptographically secure and takes into account physical attributes such as atmospheric or thermal conditions. Such tools may also take into account measurement biases. They may also utilize physical coin flipping and dice rolling processes. A TRNG or HRNG is useful for creating seed tokens.
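
As a minimal illustration, Python's standard library exposes the operating system's cryptographically secure source (which, depending on the platform, may mix hardware entropy into a CSPRNG rather than being a pure TRNG); a seed token can be generated like this:

[code]
import secrets

seed_token = secrets.token_hex(32)   # 32 random bytes from the OS source, hex-encoded
print(seed_token)
[/code]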

Example:

“To assure a high degree of arbitrariness in games or even non-mission-critical security, you can use a random number generator to come up with different values since these software tools greatly increase the choice while cutting out most human biases.”

Details

Random number generation is a process by which, often by means of a random number generator (RNG), a sequence of numbers or symbols that cannot be reasonably predicted better than by random chance is generated. This means that the particular outcome sequence will contain some patterns detectable in hindsight but unpredictable to foresight. True random number generators can be hardware random-number generators (HRNGs), wherein each generation is a function of the current value of a physical environment's attribute that is constantly changing in a manner that is practically impossible to model. This would be in contrast to so-called "random number generations" done by pseudorandom number generators (PRNGs), which generate numbers that only look random but are in fact pre-determined—these generations can be reproduced simply by knowing the state of the PRNG.

Various applications of randomness have led to the development of different methods for generating random data. Some of these have existed since ancient times, including well-known examples like the rolling of dice, coin flipping, the shuffling of playing cards, the use of yarrow stalks (for divination) in the I Ching, as well as countless other techniques. Because of the mechanical nature of these techniques, generating large quantities of sufficiently random numbers (important in statistics) required much work and time. Thus, results would sometimes be collected and distributed as random number tables.

Several computational methods for pseudorandom number generation exist. All fall short of the goal of true randomness, although they may meet, with varying success, some of the statistical tests for randomness intended to measure how unpredictable their results are (that is, to what degree their patterns are discernible). This generally makes them unusable for applications such as cryptography. However, carefully designed cryptographically secure pseudorandom number generators (CSPRNGs) also exist, with special features specifically designed for use in cryptography.

Practical applications and uses

Random number generators have applications in gambling, statistical sampling, computer simulation, cryptography, completely randomized design, and other areas where producing an unpredictable result is desirable. In applications having unpredictability as the paramount feature, such as in security applications, hardware generators are generally preferred over pseudorandom algorithms, where feasible.

Pseudorandom number generators are very useful in developing Monte Carlo-method simulations, as debugging is facilitated by the ability to run the same sequence of random numbers again by starting from the same random seed. They are also used in cryptography – so long as the seed is secret. Sender and receiver can generate the same set of numbers automatically to use as keys.
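
A small Python sketch of this reproducibility property, estimating pi by Monte Carlo with a seeded generator (the seed value 42 is arbitrary):

[code]
import random

def estimate_pi(n, seed):
    rng = random.Random(seed)                      # independent, explicitly seeded PRNG
    hits = sum(rng.random()**2 + rng.random()**2 <= 1.0 for _ in range(n))
    return 4 * hits / n

print(estimate_pi(100_000, seed=42))   # the same seed reproduces the same estimate
print(estimate_pi(100_000, seed=42))
[/code]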

The generation of pseudorandom numbers is an important and common task in computer programming. While cryptography and certain numerical algorithms require a very high degree of apparent randomness, many other operations only need a modest amount of unpredictability. Some simple examples might be presenting a user with a "random quote of the day", or determining which way a computer-controlled adversary might move in a computer game. Weaker forms of randomness are used in hash algorithms and in creating amortized searching and sorting algorithms.

Some applications which appear at first sight to be suitable for randomization are in fact not quite so simple. For instance, a system that "randomly" selects music tracks for a background music system must only appear random, and may even have ways to control the selection of music: a true random system would have no restriction on the same item appearing two or three times in succession.

"True" vs. pseudo-random numbers

There are two principal methods used to generate random numbers. The first method measures some physical phenomenon that is expected to be random and then compensates for possible biases in the measurement process. Example sources include measuring atmospheric noise, thermal noise, and other external electromagnetic and quantum phenomena. For example, cosmic background radiation or radioactive decay as measured over short timescales represent sources of natural entropy (as a measure of unpredictability or surprise of the number generation process).

The speed at which entropy can be obtained from natural sources is dependent on the underlying physical phenomena being measured. Thus, sources of naturally occurring "true" entropy are said to be blocking – they are rate-limited until enough entropy is harvested to meet the demand. On some Unix-like systems, including most Linux distributions, the pseudo device file /dev/random will block until sufficient entropy is harvested from the environment. Due to this blocking behavior, large bulk reads from /dev/random, such as filling a hard disk drive with random bits, can often be slow on systems that use this type of entropy source.
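
On a Unix-like system this can be observed directly, as in the sketch below; whether and for how long the read blocks depends on the operating system and kernel version, and on modern Linux kernels /dev/random rarely blocks at all.

[code]
with open("/dev/random", "rb") as source:
    print(source.read(16).hex())   # 16 bytes of entropy, hex-encoded
[/code]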

The second method uses computational algorithms that can produce long sequences of apparently random results, which are in fact completely determined by a shorter initial value, known as a seed value or key. As a result, the entire seemingly random sequence can be reproduced if the seed value is known. This type of random number generator is often called a pseudorandom number generator. This type of generator typically does not rely on sources of naturally occurring entropy, though it may be periodically seeded by natural sources. This generator type is non-blocking, so they are not rate-limited by an external event, making large bulk reads a possibility.

Some systems take a hybrid approach, providing randomness harvested from natural sources when available, and falling back to periodically re-seeded software-based cryptographically secure pseudorandom number generators (CSPRNGs). The fallback occurs when the desired read rate of randomness exceeds the ability of the natural harvesting approach to keep up with the demand. This approach avoids the rate-limited blocking behavior of random number generators based on slower and purely environmental methods.

While a pseudorandom number generator based solely on deterministic logic can never be regarded as a "true" random number source in the purest sense of the word, in practice such generators are generally sufficient even for demanding security-critical applications. Carefully designed and implemented pseudorandom number generators can be certified for security-critical cryptographic purposes, as is the case with the Yarrow algorithm and Fortuna. The former is the basis of the /dev/random source of entropy on FreeBSD, AIX, OS X, NetBSD, and others. OpenBSD uses a pseudorandom number algorithm known as arc4random.

Generation methods

Physical methods

The earliest methods for generating random numbers, such as dice, coin flipping and roulette wheels, are still used today, mainly in games and gambling as they tend to be too slow for most applications in statistics and cryptography.

A physical random number generator can be based on an essentially random atomic or subatomic physical phenomenon whose unpredictability can be traced to the laws of quantum mechanics. Sources of entropy include radioactive decay, thermal noise, shot noise, avalanche noise in Zener diodes, clock drift, the timing of actual movements of a hard disk read-write head, and radio noise. However, physical phenomena and tools used to measure them generally feature asymmetries and systematic biases that make their outcomes not uniformly random. A randomness extractor, such as a cryptographic hash function, can be used to approach a uniform distribution of bits from a non-uniformly random source, though at a lower bit rate.
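
A toy Python sketch of the hash-based extractor idea follows; the biased_samples bytes are invented stand-ins for real sensor output, and the result is only as unpredictable as the entropy actually present in the input.

[code]
import hashlib

# Strongly biased "measurements" (values cluster around 200): low entropy per byte.
biased_samples = bytes([200, 201, 199, 200, 202, 198, 200, 201] * 16)

# Condense the block into 32 "whitened" bytes with a cryptographic hash.
extracted = hashlib.sha256(biased_samples).digest()
print(extracted.hex())
[/code]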

The appearance of wideband photonic entropy sources, such as optical chaos and amplified spontaneous emission noise, has greatly aided the development of physical random number generators. Among them, optical chaos has a high potential to physically produce high-speed random numbers due to its high bandwidth and large amplitude. A prototype of a high-speed, real-time physical random bit generator based on a chaotic laser was built in 2013.

Various imaginative ways of collecting this entropic information have been devised. One technique is to run a hash function against a frame of a video stream from an unpredictable source. Lavarand used this technique with images of a number of lava lamps. HotBits measures radioactive decay with Geiger–Müller tubes, while Random.org uses variations in the amplitude of atmospheric noise recorded with a normal radio.

Another common entropy source is the behavior of human users of the system. While people are not considered good randomness generators upon request, they generate random behavior quite well in the context of playing mixed strategy games. Some security-related computer software requires the user to make a lengthy series of mouse movements or keyboard inputs to create sufficient entropy needed to generate random keys or to initialize pseudorandom number generators.

Computational methods

Most computer-generated random numbers use PRNGs, which are algorithms that can automatically create long runs of numbers with good random properties, but eventually the sequence repeats (or the memory usage grows without bound). These random numbers are fine in many situations but are not as random as numbers generated from electromagnetic atmospheric noise used as a source of entropy. The series of values generated by such algorithms is generally determined by a fixed number called a seed. One of the most common PRNGs is the linear congruential generator, which uses the recurrence X_{n+1} = (a X_n + c) mod m, where X_0 is the seed, a the multiplier, c the increment, and m the modulus, to generate numbers.

The maximum number of numbers the formula can produce is the modulus, m. The recurrence relation can be extended to matrices to have much longer periods and better statistical properties. To avoid certain non-random properties of a single linear congruential generator, several such random number generators with slightly different values of the multiplier coefficient, a, can be used in parallel, with a "master" random number generator that selects from among the several different generators.
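
A minimal Python implementation of such a linear congruential generator, using the well-known "MINSTD" constants (a = 16807, c = 0, m = 2**31 - 1), might look like this:

[code]
def lcg(seed, a=16807, c=0, m=2**31 - 1):
    # Linear congruential generator: X_{n+1} = (a*X_n + c) mod m.
    # With c = 0 the seed must be nonzero, and the period can never exceed m.
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m            # scale into [0, 1)

gen = lcg(seed=12345)
print([round(next(gen), 6) for _ in range(5)])
[/code]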

A simple pen-and-paper method for generating random numbers is the so-called middle-square method suggested by John von Neumann. While simple to implement, its output is of poor quality. It has a very short period and severe weaknesses, such as the output sequence almost always converging to zero. A recent innovation is to combine the middle square with a Weyl sequence. This method produces high quality output through a long period.
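For comparison, here is a toy Python sketch of von Neumann's plain middle-square method (without the Weyl improvement), assuming four-digit values; it is intended only to show how the method works and why its output degenerates, not for any practical use.

def middle_square(seed, width=4):
    # Square the current value, zero-pad to 2*width digits, and keep the
    # middle `width` digits as the next value.
    x = seed
    while True:
        digits = str(x * x).zfill(2 * width)
        start = width // 2
        x = int(digits[start:start + width])
        yield x

gen = middle_square(1234)
print([next(gen) for _ in range(10)])  # the sequence soon cycles or collapses toward zero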

Most computer programming languages include functions or library routines that provide random number generators. They are often designed to provide a random byte or word, or a floating point number uniformly distributed between 0 and 1.

The quality (i.e. randomness) of such library functions varies widely, from completely predictable output to cryptographically secure. The default random number generator in many languages, including Python, Ruby, R, IDL and PHP, is based on the Mersenne Twister algorithm and is not sufficient for cryptographic purposes, as is explicitly stated in the language documentation. Such library functions often have poor statistical properties, and some will repeat patterns after only tens of thousands of trials. They are often initialized using the computer's real-time clock as the seed, since such a clock is typically 64-bit and measured in nanoseconds, far beyond a person's ability to predict. These functions may provide enough randomness for certain tasks (for example video games) but are unsuitable where high-quality randomness is required, such as in cryptographic applications, statistics or numerical analysis.

Much higher quality random number sources are available on most operating systems; for example /dev/random on various BSD flavors, Linux, Mac OS X, IRIX, and Solaris, or CryptGenRandom for Microsoft Windows. Most programming languages, including those mentioned above, provide a means to access these higher quality sources.
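In Python, for instance, the operating system's cryptographically strong source is exposed through os.urandom and the secrets module, so reaching for it instead of the default generator is a one-line change; a small sketch:

import os
import secrets

key = os.urandom(32)               # 32 bytes straight from the OS entropy source
token = secrets.token_hex(16)      # hex string suitable for session tokens
die = secrets.randbelow(6) + 1     # unbiased integer in 1..6 from the same source

print(key.hex(), token, die)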

By humans

Random number generation may also be performed by humans, in the form of collecting various inputs from end users and using them as a randomization source. However, most studies find that human subjects have some degree of non-randomness when attempting to produce a random sequence of e.g. digits or letters. They may alternate too much between choices when compared to a good random generator; thus, this approach is not widely used.

Post-processing and statistical checks

Even given a source of plausible random numbers (perhaps from a quantum mechanically based hardware generator), obtaining numbers which are completely unbiased takes care. In addition, behavior of these generators often changes with temperature, power supply voltage, the age of the device, or other outside interference. And a software bug in a pseudorandom number routine, or a hardware bug in the hardware it runs on, may be similarly difficult to detect.

Generated random numbers are sometimes subjected to statistical tests before use to ensure that the underlying source is still working, and then post-processed to improve their statistical properties. An example would be the TRNG9803 hardware random number generator, which uses an entropy measurement as a hardware test, and then post-processes the random sequence with a shift register stream cipher. It is generally hard to use statistical tests to validate the generated random numbers. Wang and Nicol proposed a distance-based statistical testing technique that is used to identify the weaknesses of several random generators. Li and Wang proposed a method of testing random numbers based on laser chaotic entropy sources using Brownian motion properties.

Other considerations:

Uniform distributions

Most random number generators natively work with integers or individual bits, so an extra step is required to arrive at the "canonical" uniform distribution between 0 and 1. The implementation is not as trivial as dividing the integer by its maximum possible value. Specifically:

* The integer used in the transformation must provide enough bits for the intended precision.
* Floating-point math itself provides more precision the closer a number is to zero. This extra precision is usually not used, due to the sheer number of random bits that would be required.
* Rounding error in division may bias the result. At worst, a supposedly excluded bound may be drawn contrary to expectations based on real-number math.

The mainstream algorithm, used by OpenJDK, Rust, and NumPy, is described in a proposal for C++'s STL. It does not use the extra precision and suffers from bias only in the last bit due to round-to-even. Other numeric concerns are warranted when shifting this "canonical" uniform distribution to a different range. A proposed method for the Swift programming language claims to use the full precision everywhere.
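A sketch of that mainstream conversion in Python: keep the top 53 bits of a 64-bit draw (53 bits being the width of a double's significand) and scale by 2**-53. This mirrors the approach described above but is not the verbatim code of any particular library.

import secrets

def uniform01(u64: int) -> float:
    # Top 53 bits scaled by 2**-53 give a float in [0, 1); 1.0 is never produced
    # because the largest possible value is (2**53 - 1) / 2**53.
    return (u64 >> 11) * 2.0**-53

u = int.from_bytes(secrets.token_bytes(8), "big")
print(uniform01(u))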

Uniformly distributed integers are commonly used in algorithms such as the Fisher–Yates shuffle. Again, a naive implementation may induce a modulo bias into the result, so more involved algorithms must be used. A method that nearly never performs division was described in 2018 by Daniel Lemire, with the current state-of-the-art being the arithmetic encoding-inspired 2021 "optimal algorithm" by Stephen Canon of Apple Inc.[22]
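The simplest way to avoid modulo bias is plain rejection, sketched below together with a Fisher–Yates shuffle that uses it; this is deliberately the naive-but-correct approach, not Lemire's nearly division-free method or Canon's optimal algorithm mentioned above.

import secrets

def randbelow(n: int) -> int:
    # Draw n.bit_length() random bits and retry until the value falls in [0, n),
    # so every residue is equally likely (no modulo bias).
    k = n.bit_length()
    while True:
        r = secrets.randbits(k)
        if r < n:
            return r

def fisher_yates(items: list) -> list:
    # Classic in-place shuffle driven by the unbiased bounded draw above.
    for i in range(len(items) - 1, 0, -1):
        j = randbelow(i + 1)
        items[i], items[j] = items[j], items[i]
    return items

print(fisher_yates(list(range(10))))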

Most 0 to 1 RNGs include 0 but exclude 1, while others include or exclude both.

Other distributions

Given a source of uniform random numbers, there are a couple of methods to create a new random source that corresponds to a probability density function. One method, called the inversion method, involves integrating up to an area greater than or equal to the random number (which should be generated between 0 and 1 for proper distributions). A second method, called the acceptance-rejection method, involves choosing an x and y value and testing whether the function of x is greater than the y value. If it is, the x value is accepted. Otherwise, the x value is rejected and the algorithm tries again.
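A minimal sketch of the acceptance-rejection method for a bounded density on an interval; the triangular density f(x) = 2x on [0, 1] used in the example is an arbitrary illustrative choice, not something prescribed by the text.

import random

def accept_reject(pdf, lo, hi, pdf_max):
    # Propose x uniformly on [lo, hi] and a height y uniformly on [0, pdf_max];
    # accept x whenever the point (x, y) falls under the curve of pdf.
    while True:
        x = lo + (hi - lo) * random.random()
        y = pdf_max * random.random()
        if y <= pdf(x):
            return x

samples = [accept_reject(lambda x: 2 * x, 0.0, 1.0, 2.0) for _ in range(5)]
print(samples)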

For example, to generate a pair of statistically independent standard normally distributed random numbers (x, y), one may first generate the polar coordinates (r, θ), where r² follows a chi-squared distribution with 2 degrees of freedom and θ ~ Uniform(0, 2π) (see Box–Muller transform).
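That polar construction translates directly into code; a short Python sketch using only the standard library:

import math
import random

def box_muller():
    # r**2 = -2*ln(u1) is chi-squared with 2 degrees of freedom, and
    # theta = 2*pi*u2 is uniform on (0, 2*pi); converting (r, theta) back to
    # Cartesian coordinates yields two independent standard normal variates.
    u1 = 1.0 - random.random()   # shift to (0, 1] so log(0) cannot occur
    u2 = random.random()
    r = math.sqrt(-2.0 * math.log(u1))
    theta = 2.0 * math.pi * u2
    return r * math.cos(theta), r * math.sin(theta)

print(box_muller())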

Whitening

The outputs of multiple independent RNGs can be combined (for example, using a bit-wise XOR operation) to provide a combined RNG at least as good as the best RNG used. This is referred to as software whitening.
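A sketch of software whitening in Python: XOR a block from a fast computational generator with a block from the operating system's entropy source, so the combined output is at least as unpredictable as the stronger input (the two blocks are assumed to be the same length).

import os
import random

def xor_combine(a: bytes, b: bytes) -> bytes:
    # Bitwise XOR of two equally long byte strings.
    return bytes(x ^ y for x, y in zip(a, b))

fast_block = random.randbytes(16)   # computational (pseudorandom) source
os_block = os.urandom(16)           # OS-provided entropy source
print(xor_combine(fast_block, os_block).hex())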

Computational and hardware random number generators are sometimes combined to reflect the benefits of both kinds. Computational random number generators can typically generate pseudorandom numbers much faster than physical generators, while physical generators can generate "true randomness."

Low-discrepancy sequences as an alternative

Some computations making use of a random number generator can be summarized as the computation of a total or average value, such as the computation of integrals by the Monte Carlo method. For such problems, it may be possible to find a more accurate solution by the use of so-called low-discrepancy sequences, also called quasirandom numbers. Such sequences have a definite pattern that fills in gaps evenly, qualitatively speaking; a truly random sequence may, and usually does, leave larger gaps.
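The van der Corput sequence is one of the simplest low-discrepancy sequences and makes the idea concrete; a short Python sketch:

def van_der_corput(n: int, base: int = 2) -> float:
    # Reflect the base-`base` digits of n about the radix point: the result
    # fills the unit interval evenly rather than randomly.
    q, denom = 0.0, 1.0
    while n:
        n, digit = divmod(n, base)
        denom *= base
        q += digit / denom
    return q

print([van_der_corput(i) for i in range(1, 9)])  # 0.5, 0.25, 0.75, 0.125, 0.625, ...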

Activities and demonstrations

The following sites make available random number samples:

* The SOCR resource pages contain a number of hands-on interactive activities and demonstrations of random number generation using Java applets.
* The Quantum Optics Group at the ANU generates random numbers sourced from the quantum vacuum. Samples of random numbers are available at their quantum random number generator research page.
* Random.org makes available random numbers that are sourced from the randomness of atmospheric noise.

The Quantum Random Bit Generator Service at the Ruđer Bošković Institute harvests randomness from the quantum process of photonic emission in semiconductors. They supply a variety of ways of fetching the data, including libraries for several programming languages.

The Group at the Taiyuan University of Technology generates random numbers sourced from a chaotic laser. Samples of random numbers are available at their Physical Random Number Generator Service.

Backdoors

Since much cryptography depends on a cryptographically secure random number generator for key and cryptographic nonce generation, if a random number generator can be made predictable, it can be used as a backdoor by an attacker to break the encryption.

The NSA is reported to have inserted a backdoor into the NIST-certified cryptographically secure pseudorandom number generator Dual EC DRBG. If, for example, an SSL connection is created using this random number generator, then according to Matthew Green it would allow the NSA to determine the state of the generator and thereby eventually read all data sent over the SSL connection. Even though it was apparent that Dual_EC_DRBG was a very poor and possibly backdoored pseudorandom number generator long before the NSA backdoor was confirmed in 2013, it had seen significant usage in practice until then, for example by the prominent security company RSA Security. There have subsequently been accusations that RSA Security knowingly inserted an NSA backdoor into its products, possibly as part of the Bullrun program. RSA has denied knowingly inserting a backdoor into its products.

It has also been theorized that hardware RNGs could be secretly modified to have less entropy than stated, which would make encryption using the hardware RNG susceptible to attack. One published method works by modifying the dopant mask of the chip, which would be undetectable by optical reverse-engineering. For example, for random number generation in Linux, it is seen as unacceptable to use Intel's RDRAND hardware RNG without mixing its output with other sources of entropy, in order to counteract any backdoors in the hardware RNG, especially after the revelation of the NSA Bullrun program.

In 2010, a U.S. lottery draw was rigged by the information security director of the Multi-State Lottery Association (MUSL), who surreptitiously installed backdoor malware on the MUSL's secure RNG computer during routine maintenance. Through these hacks he won a total of $16,500,000 by correctly predicting the winning numbers a few times a year.

Address space layout randomization (ASLR), a mitigation against rowhammer and related attacks on the physical hardware of memory chips, has been found to be inadequate as of early 2017 by VUSec. The random number algorithm, if based on a shift register implemented in hardware, is predictable at sufficiently large values of p and can be reverse engineered with enough processing power (brute force). This also indirectly means that malware using this method can run on both GPUs and CPUs if coded to do so, even using the GPU to break ASLR on the CPU itself.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1730 2023-04-12 13:50:55

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,463

Re: Miscellany

1633) Sedative

Gist

A drug or medicine that makes you feel calm or want to sleep.

Summary

Sedatives, or central nervous system depressants, are a group of drugs that slow brain activity. People use these drugs to help them calm down, feel more relaxed, and get better sleep.

There has been a recent increase in sedative prescriptions. Doctors prescribe sedatives to treat conditions such as:

* anxiety disorders
* sleep disorders
* seizures
* tension
* panic disorders
* alcohol withdrawal syndrome

Sedatives are drugs that people commonly misuse. Misusing sedatives and prolonging their use may lead to dependency and eventual withdrawal symptoms.

This article examines the different types of sedatives available and their possible uses. It also looks at the potential risks associated with using them and some alternative options.

Sedatives have numerous clinical uses. For example, they can induce sedation before surgical procedures, and this can range from mild sedation to general anesthesia.

Doctors also give sedatives and analgesics to individuals to reduce anxiety and provide pain relief before and after procedures.

Obstetric anesthesiologists may also give sedatives to people experiencing distress or restlessness during labor.

Because of their ability to relieve physical stress and anxiety and promote relaxation, doctors may also prescribe sedatives to people with insomnia, anxiety disorders, and muscle spasms.

People with bipolar disorder, post-traumatic stress disorder, and seizures may also benefit from prescription sedatives.

How sedatives affect the body

Sedatives act by increasing the activity of the brain chemical gamma-aminobutyric acid (GABA). This can slow down brain activity in general.

The inhibition of brain activity causes a person to become more relaxed, drowsy, and calm. Sedatives also allow GABA to have a stronger inhibitory effect on the brain.

Details

A sedative or tranquilliser is a substance that induces sedation by reducing irritability or excitement. They are CNS (central nervous system) depressants and interact with brain activity causing its deceleration. Various kinds of sedatives can be distinguished, but the majority of them affect the neurotransmitter gamma-aminobutyric acid (GABA). In spite of the fact that each sedative acts in its own way, most produce relaxing effects by increasing GABA activity.

This group is related to hypnotics. The term sedative describes drugs that serve to calm or relieve anxiety, whereas the term hypnotic describes drugs whose main purpose is to initiate, sustain, or lengthen sleep. Because these two functions frequently overlap, and because drugs in this class generally produce dose-dependent effects (ranging from anxiolysis to loss of consciousness) they are often referred to collectively as sedative-hypnotic drugs.

Sedatives can be used to produce an overly-calming effect (alcohol being the most common sedating drug). In the event of an overdose or if combined with another sedative, many of these drugs can cause deep unconsciousness and even death.

Terminology

There is some overlap between the terms "sedative" and "hypnotic".

Advances in pharmacology have permitted more specific targeting of receptors, and greater selectivity of agents, which necessitates greater precision when describing these agents and their effects:

Anxiolytic refers specifically to the effect upon anxiety. (However, some benzodiazepines can be all three: sedatives, hypnotics, and anxiolytics).

Tranquilizer can refer to anxiolytics or antipsychotics.

Soporific and sleeping pill are near-synonyms for hypnotics.

The term "chemical cosh"

The term "chemical cosh" (a club) is sometimes used popularly for a strong sedative, particularly for:

* widespread dispensation of antipsychotic drugs in residential care to make people with dementia easier to manage.
* use of methylphenidate to calm children with attention deficit hyperactivity disorder, though paradoxically this drug is known to be a stimulant.

Therapeutic use

Doctors and veterinarians often administer sedatives to patients in order to dull the patient's anxiety related to painful or anxiety-provoking procedures. Although sedatives do not relieve pain in themselves, they can be a useful adjunct to analgesics in preparing patients for surgery, and are commonly given to patients before they are anaesthetized, or before other highly uncomfortable and invasive procedures like cardiac catheterization, colonoscopy or MRI.

Risks:

Sedative dependence

Some sedatives can cause psychological and physical dependence when taken regularly over a period of time, even at therapeutic doses. Dependent users may get withdrawal symptoms ranging from restlessness and insomnia to convulsions and death. When users become psychologically dependent, they feel as if they need the drug to function, although physical dependence does not necessarily occur, particularly with a short course of use. In both types of dependence, finding and using the sedative becomes the focus of life. Both physical and psychological dependence can be treated with therapy.

Misuse

Many sedatives can be misused, but barbiturates and benzodiazepines are responsible for most of the problems with sedative use due to their widespread recreational or non-medical use. People who have difficulty dealing with stress, anxiety or sleeplessness may overuse or become dependent on sedatives. Some heroin users may take them either to supplement their drug or to substitute for it. Stimulant users may take sedatives to calm excessive jitteriness. Others take sedatives recreationally to relax and forget their worries. Barbiturate overdose is a factor in nearly one-third of all reported drug-related deaths. These include suicides and accidental drug poisonings. Accidental deaths sometimes occur when a drowsy, confused user repeats doses, or when sedatives are taken with alcohol.

A study from the United States found that in 2011, sedatives and hypnotics were a leading source of adverse drug events (ADEs) seen in the hospital setting: Approximately 2.8% of all ADEs present on admission and 4.4% of ADEs that originated during a hospital stay were caused by a sedative or hypnotic drug. A second study noted that a total of 70,982 sedative exposures were reported to U.S. poison control centers in 1998, of which 2310 (3.2%) resulted in major toxicity and 89 (0.1%) resulted in death. About half of all the people admitted to emergency rooms in the U.S. as a result of nonmedical use of sedatives have a legitimate prescription for the drug, but have taken an excessive dose or combined it with alcohol or other drugs.

There are also serious paradoxical reactions that may occur in conjunction with the use of sedatives that lead to unexpected results in some individuals. Malcolm Lader at the Institute of Psychiatry in London estimates the incidence of these adverse reactions at about 5%, even in short-term use of the drugs. The paradoxical reactions may consist of depression, with or without suicidal tendencies, phobias, aggressiveness, violent behavior and symptoms sometimes misdiagnosed as psychosis.

Dangers of combining sedatives and alcohol

Sedatives and alcohol are sometimes combined recreationally or carelessly. Since alcohol is a strong depressant that slows brain function and depresses respiration, the two substances compound each other's actions and this combination can prove fatal.

Worsening of psychiatric symptoms

The long-term use of benzodiazepines may have a similar effect on the brain as alcohol, and are also implicated in depression, anxiety, posttraumatic stress disorder (PTSD), mania, psychosis, sleep disorders, sexual dysfunction, delirium, and neurocognitive disorders (including benzodiazepine-induced persisting dementia which persists even after the medications are stopped). As with alcohol, the effects of benzodiazepine on neurochemistry, such as decreased levels of serotonin and norepinephrine, are believed to be responsible for their effects on mood and anxiety. Additionally, benzodiazepines can indirectly cause or worsen other psychiatric symptoms (e.g., mood, anxiety, psychosis, irritability) by worsening sleep (i.e., benzodiazepine-induced sleep disorder). Like alcohol, benzodiazepines are commonly used to treat insomnia in the short-term (both prescribed and self-medicated), but worsen sleep in the long-term. While benzodiazepines can put people to sleep, they disrupt sleep architecture: decreasing sleep time, delaying time to REM (rapid eye movement) sleep, and decreasing deep slow-wave sleep (the most restorative part of sleep for both energy and mood).

Dementia

Sedatives and hypnotics should be avoided in people with dementia, according to the medication appropriateness tool for co‐morbid health conditions in dementia criteria. The use of these medications can further impede cognitive function for people with dementia, who are also more sensitive to side effects of medications.

Amnesia

Sedatives can sometimes leave the patient with long-term or short-term amnesia. Lorazepam is one such pharmacological agent that can cause anterograde amnesia. Intensive care unit patients who receive higher doses over longer periods, typically via IV drip, are more likely to experience such side effects. Additionally, prolonged use of tranquilizers has been associated with obsessive-compulsive symptoms: the person becomes unsure whether a scheduled activity has been performed and may repeat the same task over and over to compensate for continual doubt. Difficulty remembering previously familiar names can also make the memory loss apparent.

Disinhibition and crime

Sedatives — most commonly alcohol, but also GHB, flunitrazepam (Rohypnol) and, to a lesser extent, temazepam (Restoril) and midazolam (Versed) — have been reported to be used as date-rape drugs (also called a Mickey), administered to unsuspecting patrons in bars or guests at parties to reduce the intended victims' defenses. These drugs are also used for robbing people.

Statistical overviews suggest that the use of sedative-spiked drinks for robbing people is actually much more common than their use for sexual assault. Cases of criminals taking Rohypnol themselves before they commit crimes have also been reported, as the loss of inhibitions from the drug may increase their confidence to commit the offence, and the amnesia produced by the drug makes it difficult for police to interrogate them if they are caught.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1731 2023-04-13 01:44:40

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,463

Re: Miscellany

1634) Water heating

Summary

Water heating is a heat transfer process that uses an energy source to heat water above its initial temperature. Typical domestic uses of hot water include cooking, cleaning, bathing, and space heating. In industry, hot water and water heated to steam have many uses.

Domestically, water is traditionally heated in vessels known as water heaters, kettles, cauldrons, pots, or coppers. These metal vessels that heat a batch of water do not produce a continual supply of heated water at a preset temperature. Rarely, hot water occurs naturally, usually from natural hot springs. The temperature varies with the consumption rate, becoming cooler as flow increases.

Appliances that provide a continual supply of hot water are called water heaters, hot water heaters, hot water tanks, boilers, heat exchangers, geysers (Southern Africa and the Arab world), or calorifiers. These names depend on region, and whether they heat potable or non-potable water, are in domestic or industrial use, and their energy source. In domestic installations, potable water heated for uses other than space heating is also called domestic hot water (DHW).

Fossil fuels (natural gas, liquefied petroleum gas, oil), or solid fuels are commonly used for heating water. These may be consumed directly or may produce electricity that, in turn, heats water. Electricity to heat water may also come from any other electrical source, such as nuclear power or renewable energy. Alternative energy such as solar energy, heat pumps, hot water heat recycling, and geothermal heating can also heat water, often in combination with backup systems powered by fossil fuels or electricity.

Densely populated urban areas of some countries provide district heating of hot water. This is especially the case in Scandinavia, Finland and Poland. District heating systems supply energy for water heating and space heating from combined heat and power (CHP) plants such as incinerators, central heat pumps, waste heat from industries, geothermal heating, and central solar heating. Actual heating of tap water is performed in heat exchangers at the consumers' premises. Generally the consumer has no in-building backup system as redundancy is usually significant on the district heating supply side.

Today in the United States, domestic hot water used in homes is most commonly heated with natural gas, electric resistance, or a heat pump. Electric heat pump water heaters are significantly more efficient than electric resistance water heaters, but also more expensive to purchase. Some energy utilities offer their customers funding to help offset the higher first cost of energy efficient water heaters.

Details

Heating is a process and system of raising the temperature of an enclosed space for the primary purpose of ensuring the comfort of the occupants. By regulating the ambient temperature, heating also serves to maintain a building's structural, mechanical, and electrical systems.

Historical development

The earliest method of providing interior heating was an open fire. Such a source, along with related methods such as fireplaces, cast-iron stoves, and modern space heaters fueled by gas or electricity, is known as direct heating because the conversion of energy into heat takes place at the site to be heated. A more common form of heating in modern times is known as central, or indirect, heating. It consists of the conversion of energy to heat at a source outside of, apart from, or located within the site or sites to be heated; the resulting heat is conveyed to the site through a fluid medium such as air, water, or steam.

Except for the ancient Greeks and Romans, most cultures relied upon direct-heating methods. Wood was the earliest fuel used, though in places where only moderate warmth was needed, such as China, Japan, and the Mediterranean, charcoal (made from wood) was used because it produced much less smoke. The flue, or chimney, which was first a simple aperture in the centre of the roof and later rose directly from the fireplace, had appeared in Europe by the 13th century and effectively eliminated the fire’s smoke and fumes from the living space. Enclosed stoves appear to have been used first by the Chinese about 600 BC and eventually spread through Russia into northern Europe and from there to the Americas, where Benjamin Franklin in 1744 invented an improved design known as the Franklin stove. Stoves are far less wasteful of heat than fireplaces because the heat of the fire is absorbed by the stove walls, which heat the air in the room, rather than passing up the chimney in the form of hot combustion gases.

Central heating appears to have been invented in ancient Greece, but it was the Romans who became the supreme heating engineers of the ancient world with their hypocaust system. In many Roman buildings, mosaic tile floors were supported by columns below, which created air spaces, or ducts. At a site central to all the rooms to be heated, charcoal, brushwood, and, in Britain, coal were burned, and the hot gases traveled beneath the floors, warming them in the process. The hypocaust system disappeared with the decline of the Roman Empire, however, and central heating was not reintroduced until some 1,500 years later.

Central heating was adopted for use again in the early 19th century when the Industrial Revolution caused an increase in the size of buildings for industry, residential use, and services. The use of steam as a source of power offered a new way to heat factories and mills, with the steam conveyed in pipes. Coal-fired boilers delivered hot steam to rooms by means of standing radiators. Steam heating long predominated in the North American continent because of its very cold winters. The advantages of hot water, which has a lower surface temperature and milder general effect than steam, began to be recognized about 1830. Twentieth-century central-heating systems generally use warm air or hot water for heat conveyance. Ducted warm air has supplanted steam in most newly built American homes and offices, but in Great Britain and much of the European continent, hot water succeeded steam as the favoured method of heating; ducted warm air has never been popular there. Most other countries have adopted either the American or European preference in heating methods.

Central-heating systems and fuels

The essential components of a central-heating system are an appliance in which fuel may be burned to generate heat; a medium conveyed in pipes or ducts for transferring the heat to the spaces to be heated; and an emitting apparatus in those spaces for releasing the heat either by convection or radiation or both. Forced-air distribution moves heated air into the space by a system of ducts and fans that produce pressure differentials. Radiant heating, by contrast, involves the direct transmission of heat from an emitter to the walls, ceiling, or floor of an enclosed space independent of the air temperature between them; the emitted heat sets up a convection cycle throughout the space, producing a uniformly warmed temperature within it.

Air temperature and the effects of solar radiation, relative humidity, and convection all influence the design of a heating system. An equally important consideration is the amount of physical activity that is anticipated in a particular setting. In a work atmosphere in which strenuous activity is the norm, the human body gives off more heat. In compensation, the air temperature is kept lower in order to allow the extra body heat to dissipate. An upper temperature limit of 24° C (75° F) is appropriate for sedentary workers and domestic living rooms, while a lower temperature limit of 13° C (55° F) is appropriate for persons doing heavy manual work.

In the combustion of fuel, carbon and hydrogen react with atmospheric oxygen to produce heat, which is transferred from the combustion chamber to a medium consisting of either air or water. The equipment is so arranged that the heated medium is constantly removed and replaced by a cooler supply—i.e., by circulation. If air is the medium, the equipment is called a furnace, and if water is the medium, a boiler or water heater. The term “boiler” more correctly refers to a vessel in which steam is produced, and “water heater” to one in which water is heated and circulated below its boiling point.

Natural gas and fuel oil are the chief fuels used to produce heat in boilers and furnaces. They require no labour except for occasional cleaning, and they are handled by completely automatic burners, which may be thermostatically controlled. Unlike their predecessors, coal and coke, they leave no residual ash for disposal after use. Natural gas requires no storage whatsoever, while oil is pumped into storage tanks that may be located at some distance from the heating equipment. The growth of natural-gas heating has been closely related to the increased availability of gas from networks of underground pipelines, the reliability of underground delivery, and the cleanliness of gas combustion. This growth is also linked to the popularity of warm-air heating systems, to which gas fuel is particularly adaptable and which accounts for most of the natural gas consumed in residences. Gas is easier to burn and control than oil, the user needs no storage tank and pays for the fuel after he has used it, and fuel delivery is not dependent on the vagaries of motorized transport. Gas burners are generally simpler than those required for oil and have few moving parts. Because burning gas produces a noxious exhaust, gas heaters must be vented to the outside. In areas outside the reach of natural-gas pipelines, liquefied petroleum gas (propane or butane) is delivered in special tank trucks and stored under pressure in the home until ready for use in the same manner as natural gas. Oil and gas fuels owe much of their convenience to the automatic operations of their heating plant. This automation rests primarily on the thermostat, a device that, when the temperature in a space drops to a predetermined point, will activate the furnace or boiler until the demand for heat is satisfied. Automatic heating plants are so thoroughly protected by thermostats that nearly every conceivable circumstance that could be dangerous is anticipated and controlled.

Warm-air heating

Because of its low density, air carries less heat for shorter distances than do hot water or steam. The use of air as the primary heat conveyor is nevertheless the rule in American homes and offices, though there has been a growing preference for hot-water systems, which have been used in European countries for some time. The heat of the furnace is transferred to the air in ducts, which rise to rooms above where the hot air is emitted through registers. The warm air from a furnace, being lighter than the cooler air around it, can be carried by gravity in ducts to the rooms, and until about 1930 this was the usual method employed. But a gravity system requires ducts of rather large diameter (20–36 cm [8–14 inches]) in order to reduce air friction, and this resulted in the basement's being filled with ductwork. Moreover, rooms distant from the furnace tended to be underheated, owing to the small pressure difference between the heated supply air and cooler air returning to the furnace. These difficulties were solved by the use of motor-driven fans, which can force the heated air through small, compact, rectangular ducts to the most distant rooms in a building. The heated air is introduced into individual rooms through registers, grilles, or diffusers of various types, including arrangements resembling baseboards along walls. Air currents through open doors and return air vents help distribute the heat evenly. The warm air, after giving up its heat to the room, is returned to the furnace. The entire system is controlled by thermostats that sample temperatures and then activate the gas burner and the blowers that circulate the warm air through ducts. An advantage of forced warm-air heating is that the air can be passed through filters and cleaned as it circulates through the system. And if the ductwork is properly sized, the addition of a cooling coil connected to suitable refrigeration machinery easily converts the system to a year-round air-conditioning system.

Air also works in conjunction with other systems. When the primary heated medium is steam or hot water, forced air propelled by fans distributes heat by convection (air movement). Even the common steam radiator depends more on convection than on radiation for heat emission.

Hot-water heating

Water is especially favoured for central-heating systems because its high density allows it to hold more heat and because its temperature can be regulated more easily. A hot-water heating system consists of the boiler and a system of pipes connected to radiators, piping, or other heat emitters located in rooms to be heated. The pipes, usually of steel or copper, feed hot water to radiators or convectors, which give up their heat to the room. The water, now cooled, is then returned to the boiler for reheating. Two important requirements of a hot-water system are (1) provision to allow for the expansion of the water in the system, which fills the boiler, heat emitters, and piping, and (2) means for allowing air to escape by a manually or automatically operated valve. Early hot-water systems, like warm-air systems, operated by gravity: the cooler, denser water dropped back to the boiler, forcing the lighter heated water to rise to the radiators. Neither the gravity warm-air nor gravity hot-water system could be used to heat rooms below the furnace or boiler. Consequently, motor-driven pumps are now used to drive hot water through the pipes, making it possible to locate the boiler at any elevation in relation to the heat emitters. As with warm air, smaller pipes can be used when the fluid is pumped than with gravity operation.

Steam heating

Steam systems are those in which steam is generated, usually at less than 35 kilopascals (5 pounds per square inch) in the boiler, and the steam is led to the radiators through steel or copper pipes. The steam gives up its heat to the radiator and the radiator to the room, and the cooling of the steam condenses it to water. The condensate is returned to the boiler either by gravity or by a pump. The air valve on each radiator is necessary to allow air to escape; otherwise it would prevent steam from entering the radiator. In this system, both the steam supply and the condensate return are conveyed by the same pipe. More sophisticated systems use a two-pipe distribution system, keeping the steam supply and the condensate return as two separate streams. Steam’s chief advantage, its high heat-carrying capacity, is also the source of its disadvantages. The high temperature (about 102° C [215° F]) of the steam inside the system makes it hard to control and requires frequent adjustments in its rate of input to the rooms. To perform most efficiently, steam systems require more apparatus than do hot-water or warm-air systems, and the radiators used are bulky and unattractive. As a result, warm air and hot water have generally replaced steam in the heating of homes built from the 1930s and ’40s.

Electric heat

Electricity can also be used in central heating. Though generally more expensive than fossil fuels, its relatively high cost can be offset by the use of electric current when normal demand decreases, either at night or in the wintertime—i.e., when lighting, power, and air-conditioning demands are low and there is excess power capacity in regional or local electrical grids. The most common method of converting electricity to heat is by resistors, which become hot when an electric current is sent through them and meets resistance. The current is automatically activated by thermostats in the rooms to be heated. Resistors can be used to heat circulating air or water, or, in the form of baseboard convectors, they can directly heat the air along the walls of an individual room, establishing convective currents.

Heat pump

Another method for heating with electricity involves the use of the heat pump. Every refrigeration machine is technically a heat pump, pumping heat from an area of lower temperature (normally the space to be cooled or refrigerated) to an area of higher temperature (normally, the outdoors). The refrigeration machine may be used to pump heat, in winter, from the outdoor air, or groundwater, or any other source of low-temperature heat, and deliver this heat at higher temperature to a space to be heated. Usually, the heat pump is designed to function as an air conditioner in summer, then to reverse and serve as a heat pump in winter.

A heat pump's operations can be explained using the following example. The typical window-mounted air-conditioning unit has a heat-rejection unit (condenser) mounted outside. This unit discharges the heat removed by the indoor coil (evaporator) to the outside air. Therefore the evaporator subtracts heat from the residence and transfers it to the refrigerant gas, which is pumped to the outside condenser, where by means of a fan the heat is dissipated in the air outside. This cycle can be inverted: heat is subtracted from the outside air and is transferred via the refrigerant gas to the indoor coil (evaporator) and discharged into a residence's ductwork by means of the evaporator fan. This is a basic heat-pump system. Where winter climates reach freezing temperatures, however, the system is limited by the freezing of the condenser (outdoor coil); thus, heat pumps work best in mild climates with fairly warm winter temperatures. The complexity of their machinery also makes them uneconomical in many contexts.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1732 2023-04-13 12:52:19

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,463

Re: Miscellany

1635) Taxonomy

Gist

Taxonomy is the science of naming, describing and classifying organisms and includes all plants, animals and microorganisms of the world.

Summary

Taxonomy is in a broad sense the science of classification, but more strictly the classification of living and extinct organisms—i.e., biological classification. The term is derived from the Greek taxis (“arrangement”) and nomos (“law”). Taxonomy is, therefore, the methodology and principles of systematic botany and zoology and sets up arrangements of the kinds of plants and animals in hierarchies of superior and subordinate groups. Among biologists the Linnaean system of binomial nomenclature, created by Swedish naturalist Carolus Linnaeus in the 1750s, is internationally accepted.

Popularly, classifications of living organisms arise according to need and are often superficial. Anglo-Saxon terms such as worm and fish have been used to refer, respectively, to any creeping thing—snake, earthworm, intestinal parasite, or dragon—and to any swimming or aquatic thing. Although the term fish is common to the names shellfish, crayfish, and starfish, there are more anatomical differences between a shellfish and a starfish than there are between a bony fish and a man. Vernacular names vary widely. The American robin (Turdus migratorius), for example, is not the English robin (Erithacus rubecula), and the mountain ash (Sorbus) has only a superficial resemblance to a true ash.

Biologists, however, have attempted to view all living organisms with equal thoroughness and thus have devised a formal classification. A formal classification provides the basis for a relatively uniform and internationally understood nomenclature, thereby simplifying cross-referencing and retrieval of information.

The usage of the terms taxonomy and systematics with regard to biological classification varies greatly. American evolutionist Ernst Mayr has stated that “taxonomy is the theory and practice of classifying organisms” and “systematics is the science of the diversity of organisms”; the latter in such a sense, therefore, has considerable interrelations with evolution, ecology, genetics, behaviour, and comparative physiology that taxonomy need not have.

Details

Taxonomy is the practice and science of categorization or classification.

A taxonomy (or taxonomical classification) is a scheme of classification, especially a hierarchical classification, in which things are organized into groups or types. Among other things, a taxonomy can be used to organize and index knowledge (stored as documents, articles, videos, etc.), such as in the form of a library classification system, or a search engine taxonomy, so that users can more easily find the information they are searching for. Many taxonomies are hierarchies (and thus, have an intrinsic tree structure), but not all are.

Originally, taxonomy referred only to the categorisation of organisms or a particular categorisation of organisms. In a wider, more general sense, it may refer to a categorisation of things or concepts, as well as to the principles underlying such a categorisation. Taxonomy organizes taxonomic units known as "taxa" (singular "taxon").

Taxonomy is different from meronomy, which deals with the categorisation of parts of a whole.

Applications

Wikipedia categories form a taxonomy, which can be extracted by automatic means. As of 2009, it has been shown that a manually-constructed taxonomy, such as that of computational lexicons like WordNet, can be used to improve and restructure the Wikipedia category taxonomy.

In a broader sense, taxonomy also applies to relationship schemes other than parent-child hierarchies, such as network structures. Taxonomies may then include a single child with multi-parents, for example, "Car" might appear with both parents "Vehicle" and "Steel Mechanisms"; to some however, this merely means that 'car' is a part of several different taxonomies. A taxonomy might also simply be organization of kinds of things into groups, or an alphabetical list; here, however, the term vocabulary is more appropriate. In current usage within knowledge management, taxonomies are considered narrower than ontologies since ontologies apply a larger variety of relation types.

Mathematically, a hierarchical taxonomy is a tree structure of classifications for a given set of objects. It is also named containment hierarchy. At the top of this structure is a single classification, the root node, that applies to all objects. Nodes below this root are more specific classifications that apply to subsets of the total set of classified objects. The progress of reasoning proceeds from the general to the more specific.
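As a small illustration of that containment hierarchy, the Python sketch below builds a toy taxonomy as a tree of nodes and prints it with indentation; the class name and the example labels are purely illustrative and do not come from any real classification scheme.

from dataclasses import dataclass, field

@dataclass
class Taxon:
    # One node of a hierarchical taxonomy: a name plus zero or more children.
    name: str
    children: list = field(default_factory=list)

    def add(self, child):
        self.children.append(child)
        return child

def print_tree(node, depth=0):
    # Walk from the general root down to the more specific leaves.
    print("  " * depth + node.name)
    for child in node.children:
        print_tree(child, depth + 1)

root = Taxon("Thing")                 # the root classification applies to everything
vehicle = root.add(Taxon("Vehicle"))  # more specific subsets sit below the root
vehicle.add(Taxon("Car"))
vehicle.add(Taxon("Bicycle"))
print_tree(root)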

By contrast, in the context of legal terminology, an open-ended contextual taxonomy is employed—a taxonomy holding only with respect to a specific context. In scenarios taken from the legal domain, a formal account of the open-texture of legal terms is modeled, which suggests varying notions of the "core" and "penumbra" of the meanings of a concept. The progress of reasoning proceeds from the specific to the more general.

Additional Information

What is Taxonomy? Taxonomy is the branch of biology that studies the naming, arranging, classifying, and describing organisms into groups and levels.

Defining Taxonomy

What do we mean by taxonomy? Scientifically, taxonomy is the classification of organisms, both living and extinct, together with the naming and arrangement of organisms into higher groups. In practice, taxonomy involves studying organisms such as animals, plants, microorganisms, and humans in order to place them in categories for further study and identification. For instance, humans and whales appear unrelated from many perspectives; however, both are mammals and are therefore taxonomically related.

Different definitions of taxonomy

The definitions of taxonomy in biology vary from one source to another, as different authors take different points of view. Here are some compiled definitions of taxonomy.

Taxonomy involves seven different types of processes: description, naming, recognition, comparison, and classification of taxa, genetic variation, identifying specimens, and defining taxa in the ecosystem. (Enghoff & Seberg, 2006).
Taxonomy is the practice of making groups of organisms (individuals) into species and arranging those species into larger groups and assigning names to the groups, to produce a classification. (Judd et al., 2007)

Taxonomy is the field of science, and a unit of systematics, that encompasses identification, description, classification, and nomenclature (Simpson, 2010).

Taxonomy is an analysis of an individual’s characteristics to classify them. (Lawrence, 2005).

Taxonomy is the study of the classification of living beings and their formation of species and vice versa (Walker, 1988).

Taxonomy in biology is the arrangement of (living) organisms into classification. (Kirk, et al. 2008).

Depending on the definition, taxonomy is considered either a sub-branch of systematics or a synonym of that term. Biological nomenclature, likewise, is regarded either as part of taxonomy or as a unit of systematics. To remove this confusion, consider the definition of systematics given below:

Systematics encompasses the identification and nomenclature of organisms, their classification based on natural relatedness, and the analysis of variation and evolution among taxa.

Taxonomy (biology definition): The science of finding, describing, classifying, and naming organisms, including the studying of the relationships between taxa and the principles underlying such a classification. Etymology: from Greek taxis, meaning “arrangement”, “order”.

Overview of taxonomy biology

In biology, taxonomy is defined as the classification of biological organisms. Organisms are first grouped into taxa (singular: taxon), which are then given a taxonomic rank. These groups can in turn be collected into higher-ranked supergroups, producing a taxonomic hierarchy.

Who are taxonomists?

Taxonomists are biologists who analyze the relationships among organisms and aggregate them into groups. For instance, an insect taxonomist studies the relationships between different types of flies in order to group them into categories.

History of Taxonomy

Scientific taxonomy (the classification of organisms) arose primarily in the 18th century. Earlier work consisted mainly of descriptions of plants for agricultural and medicinal purposes. Moreover, early taxonomy was based on artificial systems, that is, on arbitrary criteria, including Linnaeus's system. Later systems, called natural systems, had a more scientific basis, though they still reflected pre-evolutionary thinking. Charles Darwin's work "On the Origin of Species" gave a more solid basis for the classification of organisms, and from 1883 onward classifications based on evolutionary relationships led to phyletic systems.

Before the Linnaean era

The pre-Linnaean era includes the work of earlier taxonomists dating back to the Egyptian period, around 1500 BC.

Early Egyptian taxonomists

The practice of classifying and naming surrounding organisms is as old as human language itself. An early example of taxonomy survives in Egyptian wall paintings of medicinal plants made by early Egyptian taxonomists.

Aristotle

During his stay on the island of Lesbos, Aristotle classified organisms for the first time. He identified organisms and grouped them into two major categories, plants and animals. He classified animals on the basis of attributes and parts such as the number of legs, whether they laid eggs or gave live birth, and whether they were warm-blooded or had no blood. Several of the animal groups proposed by Aristotle, such as Anhaima (animals with no blood, now referred to as invertebrates), Enhaima (animals with blood, now called vertebrates), sharks, and cetaceans, are still used to this day.

His student, Theophrastus, continued the classification process and mentioned 500 plants (also their uses) in his book, Historia Plantarum. Some plant groups, such as Cornus, Crocus, and Narcissus, can be traced back to Theophrastus’s findings.

Medieval thinkers

In the Middle Ages, medieval thinkers approached the classification of organisms on a philosophical basis. As in Aristotle's time, the lack of microscopes meant that plants and fungi could not be classified in any detail. The classification schemes used in this era were Aristotelian, including the Scala Naturae, a "ladder of nature" used to order organisms into a single series, from which the medieval concept of the Great Chain of Being arose. Prominent medieval scholars such as Thomas Aquinas, Demetrios, and Procopius treated classification as a philosophical exercise rather than as logical or traditional taxonomy.

Andrea Cesalpino

Andrea Cesalpino, an Italian physician, is often regarded as the first taxonomist. He described up to 1500 species of plants in his magnum opus, "De Plantis". Among the plant groups he recognized, the large families now known as Asteraceae and Brassicaceae are still in use today.

John Ray

In the 17th century, John Ray took taxonomy to the next level and covered more important taxonomic aspects. His biggest accomplishment was describing more than 18,000 plant species in his book "Methodus Plantarum Nova", producing a more complex classification in which taxa were based on many characteristics.

Joseph Pitton de Tournefort

Joseph Pitton de Tournefort defined and classified 9000 species in 698 genera in his work Institutiones Rei Herbariae, 1700.

Carl Linnaeus

Carl Linnaeus (also known as Carl von Linné or Carolus Linnæus) proposed the classification scheme known as the Linnaean system. His works, such as Systema Naturae and Species Plantarum, were the most famous in the field and revolutionized taxonomy. Among other things, he introduced a binomial naming system that organized and standardized the naming of animal and plant species.

The system uses ranked classification categories, including class, order, genus, and species. His books allow animals and plants to be identified (for instance, a flowering plant can be identified from the parts of its flower). Although his work dates from the 18th century, it remains in use in the present century, and modern taxonomists regard the Linnaean system as the starting point of biological taxonomy. Taxonomic names published before his work are considered pre-Linnaean, and even some of the names Linnaeus himself proposed are treated as pre-Linnaean.

Different Taxonomists works on classification:

Andrea Caesalpino (1519-1603 AD)  :  Divided plants into 15 groups
John Ray (1627-1705 AD)  :  Published work on plants classification
Augustus Rivinus (1652-1723 AD)  :  Added taxa of Order
Tournefort (1656-1708 AD)  :  Introduced taxa of Class
Linnaeus (1707-1778 AD)  :  Grouped species based on physical characteristics

Classification Systems

Biologists needed a system that would capture all this information about the similarities and differences among organisms. In the earliest classification systems, organisms were divided into two kingdoms: plants and animals. After developing the two-kingdom and three-kingdom classification systems, taxonomists came to agree on a five-kingdom classification, and in recent years a six-kingdom classification has also been adopted. The only difference between the five-kingdom and six-kingdom systems is that the kingdom Monera is further divided into two kingdoms.

Two-kingdom classification system

In the beginning, organisms were classified into only two kingdoms: Animalia and Plantae. Under this system, Kingdom Plantae included organisms capable of preparing their own food from simple inorganic materials, known as autotrophs. In contrast, organisms that cannot prepare their own food from simple inorganic material and depend on autotrophs for food were named heterotrophs and placed in Kingdom Animalia.

In this classification system, taxonomists placed bacteria, fungi, and algae in Kingdom Plantae. As research advanced, taxonomists contested this classification system and found it unworkable. Euglena, for instance, has both plant-like features (e.g. the presence of chlorophyll for photosynthesis) and animal-like features (e.g. a heterotrophic mode of nutrition at night and no cell wall). So where should such an organism be classified? Another classification system was clearly needed.

Three-kingdom classification system

A taxonomist named Ernst Haeckel addressed this problem in 1866. He created a third kingdom, Protista, to accommodate organisms like Euglena, and he also placed bacteria in this third kingdom. The fungi, however, remained in Kingdom Plantae, a placement many biologists did not accept. Thus, the three-kingdom classification system was still insufficient to classify organisms clearly. Moreover, the differences between prokaryotes and eukaryotes were not considered in this scheme.

Fungi resemble plants in many ways but cannot prepare their own food, so they are not autotrophs; instead, they obtain food by absorption. Their cell walls contain chitin rather than cellulose, and they are characteristically heterotrophic.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Online

#1733 2023-04-14 00:07:50

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,463

Re: Miscellany

1636) Dyslexia

Gist

Overview. Dyslexia is a learning disorder that involves difficulty reading due to problems identifying speech sounds and learning how they relate to letters and words (decoding). Also called a reading disability, dyslexia is a result of individual differences in areas of the brain that process language.

Summary

Dyslexia, also known until the 1960s as word blindness, is a disorder characterized by reading below the expected level for one's age. Different people are affected to different degrees. Problems may include difficulties in spelling words, reading quickly, writing words, "sounding out" words in the head, pronouncing words when reading aloud and understanding what one reads. Often these difficulties are first noticed at school. The difficulties are involuntary, and people with this disorder have a normal desire to learn. People with dyslexia have higher rates of attention deficit hyperactivity disorder (ADHD), developmental language disorders, and difficulties with numbers.

Dyslexia is believed to be caused by the interaction of genetic and environmental factors. Some cases run in families. Dyslexia that develops due to a traumatic brain injury, stroke, or dementia is sometimes called "acquired dyslexia" or alexia. The underlying mechanisms of dyslexia result from differences within the brain's language processing. Dyslexia is diagnosed through a series of tests of memory, vision, spelling, and reading skills. Dyslexia is separate from reading difficulties caused by hearing or vision problems or by insufficient teaching or opportunity to learn.

Treatment involves adjusting teaching methods to meet the person's needs. While not curing the underlying problem, it may decrease the degree or impact of symptoms. Treatments targeting vision are not effective. Dyslexia is the most common learning disability and occurs in all areas of the world. It affects 3–7% of the population; however, up to 20% of the general population may have some degree of symptoms. While dyslexia is more often diagnosed in boys, this is partly explained by a self-fulfilling referral bias among teachers and professionals. It has even been suggested that the condition affects men and women equally. Some believe that dyslexia is best considered as a different way of learning, with both benefits and downsides.

Details

Overview

Dyslexia is a learning disorder that involves difficulty reading due to problems identifying speech sounds and learning how they relate to letters and words (decoding). Also called a reading disability, dyslexia is a result of individual differences in areas of the brain that process language.

Dyslexia is not due to problems with intelligence, hearing or vision. Most children with dyslexia can succeed in school with tutoring or a specialized education program. Emotional support also plays an important role.

Though there's no cure for dyslexia, early assessment and intervention result in the best outcome. Sometimes dyslexia goes undiagnosed for years and isn't recognized until adulthood, but it's never too late to seek help.

Symptoms

Signs of dyslexia can be difficult to recognize before your child enters school, but some early clues may indicate a problem. Once your child reaches school age, your child's teacher may be the first to notice a problem. Severity varies, but the condition often becomes apparent as a child starts learning to read.

Before school

Signs that a young child may be at risk of dyslexia include:

* Late talking
* Learning new words slowly
* Problems forming words correctly, such as reversing sounds in words or confusing words that sound alike
* Problems remembering or naming letters, numbers and colors
* Difficulty learning nursery rhymes or playing rhyming games

School age

Once your child is in school, dyslexia symptoms may become more apparent, including:

* Reading well below the expected level for age
* Problems processing and understanding what is heard
* Difficulty finding the right word or forming answers to questions
* Problems remembering the sequence of things
* Difficulty seeing (and occasionally hearing) similarities and differences in letters and words
* Inability to sound out the pronunciation of an unfamiliar word
* Difficulty spelling
* Spending an unusually long time completing tasks that involve reading or writing
* Avoiding activities that involve reading
Teens and adults

Dyslexia signs in teens and adults are a lot like those in children. Some common dyslexia symptoms in teens and adults include:

* Difficulty reading, including reading aloud
* Slow and labor-intensive reading and writing
* Problems spelling
* Avoiding activities that involve reading
* Mispronouncing names or words, or problems retrieving words
* Spending an unusually long time completing tasks that involve reading or writing
* Difficulty summarizing a story
* Trouble learning a foreign language
* Difficulty doing math word problems

When to see a doctor

Though most children are ready to learn reading by kindergarten or first grade, children with dyslexia often have trouble learning to read by that time. Talk with your health care provider if your child's reading level is below what's expected for your child's age or if you notice other signs of dyslexia.

When dyslexia goes undiagnosed and untreated, childhood reading difficulties continue into adulthood.

Causes

Dyslexia results from individual differences in the parts of the brain that enable reading. It tends to run in families. Dyslexia appears to be linked to certain genes that affect how the brain processes reading and language.

Risk factors

A family history of dyslexia or other reading or learning disabilities increases the risk of having dyslexia.

Complications

Dyslexia can lead to several problems, including:

* Trouble learning. Because reading is a skill basic to most other school subjects, a child with dyslexia is at a disadvantage in most classes and may have trouble keeping up with peers.
* Social problems. Left untreated, dyslexia may lead to low self-esteem, behavior problems, anxiety, aggression, and withdrawal from friends, parents and teachers.
* Problems as adults. The inability to read and comprehend can prevent children from reaching their potential as they grow up. This can have negative long-term educational, social and economic impacts.
* Children who have dyslexia are at increased risk of having attention-deficit/hyperactivity disorder (ADHD), and vice versa. ADHD can cause difficulty keeping attention. It can also cause hyperactivity and impulsive behavior, which can make dyslexia harder to treat.

Additional Information

Dyslexia is an inability or pronounced difficulty in learning to read or spell, despite otherwise normal intellectual function. Dyslexia is a chronic neurological disorder that inhibits a person’s ability to recognize and process graphic symbols, particularly those pertaining to language. Primary symptoms include extremely poor reading skills with no apparent cause, a tendency to read and write words and letters in reversed sequences, similar reversals of words and letters in the person’s speech, and illegible handwriting.

Dyslexia is three times more common in boys than in girls and usually becomes evident in the early school years. The disorder tends to run in families. Only a minority of dyslexics remain nonreaders into adulthood, but many continue to read and spell poorly throughout their lives. Dyslexics frequently perform above average on nonverbal tests of intelligence, however. Dyslexia is best treated by a sustained course of proper instruction in reading. The cause of the disorder is unknown; dyslexia is usually diagnosed in children or adults who have reading difficulties for which there is no apparent explanation.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Online

#1734 2023-04-14 13:29:44

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,463

Re: Miscellany

1637) The Giving Pledge

Summary

The Giving Pledge is a campaign, founded by Bill Gates and Warren Buffett, to encourage wealthy people to contribute a majority of their wealth to philanthropic causes. As of June 2022, the pledge has 236 signatories from 28 countries. Most of the signatories of the pledge are billionaires, and as of 2016, their pledges are estimated to total US$600 billion.

Description

The organization's stated goal is to inspire the wealthy people of the world to give at least half of their net worth to philanthropy, either during their lifetime or upon their death. The pledge is a public gesture of an intention to give, not a legal contract. On The Giving Pledge's website, each individual or couple writes a letter explaining why they chose to give.

History

In June 2010, the Giving Pledge campaign was formally announced and Bill Gates and Warren Buffett began recruiting members. As of August 2010, the aggregate wealth of the first 40 pledgers was $125 billion. As of April 2011, 69 billionaires had joined the campaign and given a pledge, and by the following year, The Huffington Post reported that a total of 81 billionaires had pledged. By May 2017, 158 individuals and/or couples were listed as pledgers.

Details:

About the Giving Pledge

The Giving Pledge is a movement of philanthropists who commit to give the majority of their wealth to charitable causes, either during their lifetimes or in their wills.

In August 2010, 40 of America’s wealthiest people made a commitment to give the majority of their wealth to address some of society’s most pressing problems. Created by Warren Buffett, Melinda French Gates, and Bill Gates, the Giving Pledge came to life following a series of conversations with philanthropists about how they could set a new standard of generosity among the ultra-wealthy. While originally focused on the United States, the Giving Pledge quickly saw interest from philanthropists around the world.

“This is about building on a wonderful tradition of philanthropy that will ultimately help the world become a much better place.”
– Bill Gates

The Giving Pledge is a simple concept: an open invitation for billionaires, or those who would be if not for their giving, to publicly commit to give the majority of their wealth to philanthropy either during their lifetimes or in their wills. It is inspired by the example set by millions of people at all income levels who give generously – and often at great personal sacrifice – to make the world better. Envisioned as a multi-generational effort, the Giving Pledge aims over time to help shift the social norms of philanthropy among the world’s wealthiest and inspire people to give more, establish their giving plans sooner, and give in smarter ways. Signatories fund a diverse range of issues of their choosing. Those who join the Giving Pledge are encouraged to write a letter explaining their decision to engage deeply and publicly in philanthropy and describing the causes that motivate them.

Joining the Giving Pledge is more than a one-time event. It means becoming part of an energized community of some of the world’s most engaged philanthropists to discuss challenges, successes, and failures, and to share ideas to get smarter about giving. Signatories are united by a shared commitment to learning and giving. The Giving Pledge team provides opportunities – both specifically for signatories, and for families and staff – to gather throughout the year to learn from experts and from one another how to best leverage their philanthropy to address some of the world’s biggest challenges.

Additional Information

Warren Buffett

“Were we to use more than 1% of my claim checks (Berkshire Hathaway stock certificates) on ourselves, neither our happiness nor our well-being would be enhanced. In contrast, that remaining 99% can have a huge effect on the health and welfare of others.”

My Philanthropic Pledge

In 2006, I made a commitment to gradually give all of my Berkshire Hathaway stock to philanthropic foundations. I couldn’t be happier with that decision.

Now, Bill and Melinda Gates and I are asking hundreds of rich Americans to pledge at least 50% of their wealth to charity. So I think it is fitting that I reiterate my intentions and explain the thinking that lies behind them.

First, my pledge: More than 99% of my wealth will go to philanthropy during my lifetime or at death. Measured by dollars, this commitment is large. In a comparative sense, though, many individuals give more to others every day.

Millions of people who regularly contribute to churches, schools, and other organizations thereby relinquish the use of funds that would otherwise benefit their own families. The dollars these people drop into a collection plate or give to United Way mean forgone movies, dinners out, or other personal pleasures. In contrast, my family and I will give up nothing we need or want by fulfilling this 99% pledge.

Moreover, this pledge does not leave me contributing the most precious asset, which is time. Many people, including—I’m proud to say—my three children, give extensively of their own time and talents to help others. Gifts of this kind often prove far more valuable than money. A struggling child, befriended and nurtured by a caring mentor, receives a gift whose value far exceeds what can be bestowed by a check. My sister, Doris, extends significant person-to-person help daily. I’ve done little of this.

What I can do, however, is to take a pile of Berkshire Hathaway stock certificates—“claim checks” that when converted to cash can command far-ranging resources—and commit them to benefit others who, through the luck of the draw, have received the short straws in life. To date about 20% of my shares have been distributed (including shares given by my late wife, Susan Buffett). I will continue to annually distribute about 4% of the shares I retain. At the latest, the proceeds from all of my Berkshire shares will be expended for philanthropic purposes by 10 years after my estate is settled. Nothing will go to endowments; I want the money spent on current needs.

This pledge will leave my lifestyle untouched and that of my children as well. They have already received significant sums for their personal use and will receive more in the future. They live comfortable and productive lives. And I will continue to live in a manner that gives me everything that I could possibly want in life.

Some material things make my life more enjoyable; many, however, would not. I like having an expensive private plane, but owning a half-dozen homes would be a burden. Too often, a vast collection of possessions ends up possessing its owner. The asset I most value, aside from health, is interesting, diverse, and long-standing friends.

My wealth has come from a combination of living in America, some lucky genes, and compound interest. Both my children and I won what I call the ovarian lottery. (For starters, the odds against my 1930 birth taking place in the U.S. were at least 30 to 1. My being male and white also removed huge obstacles that a majority of Americans then faced.)

My luck was accentuated by my living in a market system that sometimes produces distorted results, though overall it serves our country well. I’ve worked in an economy that rewards someone who saves the lives of others on a battlefield with a medal, rewards a great teacher with thank-you notes from parents, but rewards those who can detect the mispricing of securities with sums reaching into the billions. In short, fate’s distribution of long straws is wildly capricious.

The reaction of my family and me to our extraordinary good fortune is not guilt, but rather gratitude. Were we to use more than 1% of my claim checks on ourselves, neither our happiness nor our well-being would be enhanced. In contrast, that remaining 99% can have a huge effect on the health and welfare of others. That reality sets an obvious course for me and my family: Keep all we can conceivably need and distribute the rest to society, for its needs. My pledge starts us down that course.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Online

#1735 2023-04-14 20:51:36

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,463

Re: Miscellany

1638) Tachycardia

Gist

Tachycardia (tak-ih-KAHR-dee-uh) is the medical term for a heart rate over 100 beats a minute. Many types of irregular heart rhythms (arrhythmias) can cause tachycardia. A fast heart rate isn't always a concern. For instance, the heart rate typically rises during exercise or as a response to stress.

Summary

Tachycardia, also called tachyarrhythmia, is a heart rate that exceeds the normal resting rate. In general, a resting heart rate over 100 beats per minute is accepted as tachycardia in adults. Heart rates above the resting rate may be normal (such as with exercise) or abnormal (such as with electrical problems within the heart).

Complications

Tachycardia can lead to fainting.

When the rate of blood flow becomes too rapid, or when fast-flowing blood passes over damaged endothelium, friction within the vessels increases, resulting in turbulence and other disturbances. According to Virchow's triad, this is one of the three conditions that can lead to thrombosis (i.e., blood clots within vessels).

Details

Tachycardia is a condition that makes your heart beat more than 100 times per minute. There are three types of it:

* Supraventricular. This happens when the electrical signals in the organ's upper chambers misfire and cause the heart rate to speed up. It beats so fast that it can’t fill with blood before it contracts. That reduces blood flow to the rest of your body.

* Ventricular. This is a rapid heart rate that starts in your heart's lower chambers. It happens when the electrical signals in these chambers fire the wrong way. Again, the heart beats so fast that it can’t fill with blood or pump it through the rest of your body.

* Sinus tachycardia. This happens when your heart’s natural pacemaker sends out electrical signals faster than normal. Your ticker beats fast, but it beats the way it should.

What Causes It?

Any number of things.

Strenuous exercise, a fever, fear, stress, anxiety, certain medications, and street drugs can lead to sinus tachycardia. It can also be triggered by anemia, an overactive thyroid, or damage from a heart attack or heart failure.

Supraventricular tachycardia is most likely to affect people who smoke, drink too much alcohol, or have a lot of caffeine. In some cases it’s linked to heart attacks. It’s more common in women and children.

The ventricular type is associated with abnormal electrical pathways which are present at birth (long QT), structural problems of the heart such as a cardiomyopathy or coronary disease, medications, or electrolyte imbalance. Sometimes, the reason is unclear.

Symptoms

No matter which type of tachycardia you have, you may feel:

* Dizziness
* Lightheadedness
* Shortness of breath
* Chest pain
* Heart palpitations

In extreme cases, you could become unconscious or go into cardiac arrest.

But sometimes, a super-fast heart rate causes no symptoms at all.

Tests

These may include:

* Electrocardiogram (ECG or EKG). This records the electrical activity in your heart and helps your doctor search for things that don’t look normal. You may have to wear a Holter monitor, a portable machine that records your ECG signals over 24 hours.

* Exercise stress test. Your doctor will have you walk on a treadmill while they monitor your heart activity.

* Magnetic source imaging. This measures the heart muscle’s magnetic fields and looks for weaknesses.



Treatment

Your doctor will decide what’s best after they get your test results.

If you have sinus tachycardia, they’ll help you pinpoint the cause and suggest things to lower your heart rate. These might include lifestyle changes like easing stress or taking medicine to lower a fever.

If you have supraventricular tachycardia, your doctor may recommend that you drink less caffeine or alcohol, get more sleep, or quit smoking.

Treatments for ventricular tachycardia may include medication to reset the heart’s electrical signals or ablation, a procedure that destroys the abnormal heart tissue that is leading to the condition. Your doctor might also use a defibrillator to disrupt rapid heart rhythms.

A rapid heart rate doesn’t always need treatment. But sometimes it can be life-threatening. So play it safe -- let your doctor know right away if you have any type of irregular heartbeat.

Additional Information

Overview

Tachycardia (tak-ih-KAHR-dee-uh) is the medical term for a heart rate over 100 beats a minute. Many types of irregular heart rhythms (arrhythmias) can cause tachycardia.

A fast heart rate isn't always a concern. For instance, the heart rate typically rises during exercise or as a response to stress.

Tachycardia may not cause any symptoms or complications. But if left untreated, some forms of tachycardia can lead to serious health problems, including heart failure, stroke or sudden cardiac death.

Treatment for tachycardia may include specific maneuvers, medication, cardioversion or surgery to control a rapid heartbeat.

Types

There are many different types of tachycardia. Sinus tachycardia refers to a typical increase in the heart rate often caused by exercise or stress.

Other types of tachycardia are grouped according to the part of the heart responsible for the fast heart rate and the cause. Common types of tachycardia caused by irregular heart rhythms (arrhythmias) include:

* Atrial fibrillation (A-fib). This is the most common type of tachycardia. Chaotic, irregular electrical signals in the upper chambers of the heart (atria) cause a fast heartbeat. A-fib may be temporary, but some episodes won't end unless treated.

* Atrial flutter. Atrial flutter is similar to A-fib, but heartbeats are more organized. Episodes of atrial flutter may go away themselves or may require treatment. People who have atrial flutter also often have atrial fibrillation at other times.

* Ventricular tachycardia. This type of arrhythmia starts in the lower heart chambers (ventricles). The rapid heart rate doesn't allow the ventricles to fill and squeeze (contract) to pump enough blood to the body. Ventricular tachycardia episodes may be brief and last only a couple of seconds without causing harm. But episodes lasting more than a few seconds can be life-threatening.

* Supraventricular tachycardia (SVT). Supraventricular tachycardia is a broad term that includes arrhythmias that start above the ventricles. Supraventricular tachycardia causes episodes of a pounding heartbeat (palpitations) that begin and end abruptly.

* Ventricular fibrillation. Rapid, chaotic electrical signals cause the ventricles to quiver instead of contracting in a coordinated way. This serious problem can lead to death if the heart rhythm isn't restored within minutes. Most people who have ventricular fibrillation have an underlying heart disease or have experienced serious trauma, such as being struck by lightning.

Symptoms

When the heart beats too fast, it may not pump enough blood to the rest of the body. As a result, the organs and tissues may not get enough oxygen.

In general, tachycardia may lead to the following signs and symptoms:

* Sensation of a racing, pounding heartbeat or flopping in the chest (palpitations)
* Chest pain
* Fainting (syncope)
* Lightheadedness
* Rapid pulse rate
* Shortness of breath

Some people with tachycardia have no symptoms. The condition may be discovered when a physical exam or heart tests are done for another reason.

When to see a doctor

A number of things can cause a rapid heart rate (tachycardia). If you feel like your heart is beating too fast, make an appointment to see a health care provider.

Seek immediate medical help if you have shortness of breath, weakness, dizziness, lightheadedness, fainting or near fainting, and chest pain or discomfort.

A type of tachycardia called ventricular fibrillation can cause blood pressure to drop dramatically. Collapse can occur within seconds. Soon the affected person's breathing and pulse will stop. If this occurs, do the following:

Call 911 or the emergency number in your area.

If you or someone nearby is well trained in CPR, start CPR. CPR can help maintain blood flow to the organs until an electrical shock (defibrillation) can be given.

If you're not trained in CPR or worried about giving rescue breaths, then provide hands-only CPR. Push hard and fast on the center of the chest at a rate of 100 to 120 compressions a minute until paramedics arrive. You don't need to do rescue breathing.

If an automated external defibrillator (AED) is available nearby, have someone get the device for you, and then follow the instructions. An AED is a portable defibrillation device that can deliver a shock to reset the heart rhythm. No training is required to use the device. The AED will tell you what to do. It's programmed to give a shock only when appropriate.

Causes

Tachycardia is an increased heart rate for any reason. It can be a usual rise in heart rate caused by exercise or a stress response (sinus tachycardia). Sinus tachycardia is considered a symptom, not a disease.

Tachycardia can also be caused by an irregular heart rhythm (arrhythmia).

Things that may lead to tachycardia include:

* Fever
* Heavy alcohol use or alcohol withdrawal
* High levels of caffeine
* High or low blood pressure
* Imbalance of substances in the blood called electrolytes — such as potassium, sodium, calcium and magnesium
* Medication side effects
* Overactive thyroid (hyperthyroidism)
* Reduced volume of red blood cells (anemia), often caused by bleeding
* Smoking
* Use of illegal drugs, including stimulants such as cocaine or methamphetamine

Sometimes the exact cause of tachycardia can't be determined.

How does the heart beat?

To understand the cause of tachycardia, it may be helpful to know how the heart typically works.

The heart is made of four chambers — two upper chambers (atria) and two lower chambers (ventricles).

The heart's rhythm is controlled by a natural pacemaker (the sinus node) in the right upper chamber (atrium). The sinus node sends electrical signals that normally start each heartbeat. These electrical signals move across the atria, causing the heart muscles to squeeze (contract) and pump blood into the ventricles.

Next, the signals arrive at a cluster of cells called the AV node, where they slow down. This slight delay allows the ventricles to fill with blood. When the electrical signals reach the ventricles, the chambers contract and pump blood to the lungs or to the rest of the body.

In a typical heart, this heart signaling process usually goes smoothly, resulting in a resting heart rate of 60 to 100 beats a minute.

Risk factors

In general, growing older or having a family history of certain heart rhythm problems (arrhythmias) may increase the risk of arrhythmias that commonly cause tachycardia.

Lifestyle changes or medical treatment for related heart or other health conditions may decrease the risk of tachycardia.

Complications

Complications of tachycardia depend on:

* The type of tachycardia
* How fast the heart is beating
* How long the rapid heart rate lasts
* If there are other heart conditions

Some people with tachycardia have an increased risk of developing a blood clot that could cause a stroke (risk is highest with atrial fibrillation) or heart attack. Your health care provider may prescribe a blood-thinning medication to help lower your risk.

Other potential complications of tachycardia include:

* Frequent fainting or unconsciousness
* Inability of the heart to pump enough blood (heart failure)
* Sudden death, usually only associated with ventricular tachycardia or ventricular fibrillation

Prevention

The best ways to prevent tachycardia are to maintain a healthy heart and prevent heart disease. If you already have heart disease, monitor it and follow your treatment plan. Be sure you understand your treatment plan, and take all medications as prescribed.

Lifestyle changes to reduce the risk of heart disease may help prevent heart arrhythmias that can cause tachycardia. Take the following steps:

* Eat a healthy diet. Choose a diet rich in whole grains, lean meat, low-fat dairy, and fruits and vegetables. Limit salt, sugar, alcohol, and saturated fat and trans fats.

* Exercise regularly. Try to exercise for at least 30 minutes on most days.

* Maintain a healthy weight. Being overweight increases the risk of developing heart disease.

* Keep blood pressure and cholesterol levels under control. Make lifestyle changes and take medications as prescribed to control high blood pressure (hypertension) or high cholesterol.

* Stop smoking. If you smoke and can't quit on your own, talk to your health care provider about strategies or programs to help break the smoking habit.

* Drink in moderation. If you choose to drink alcohol, do so in moderation. For healthy adults, that means up to one drink a day for women and up to two drinks a day for men. For some health conditions, it's recommended that you completely avoid alcohol. Ask your health care provider for advice specific to your condition.

* Don't use illegal drugs or stimulants, such as cocaine. Talk to your health care provider about an appropriate program for you if you need help ending illegal drug use.

* Use medications with caution. Some cold and cough medications contain stimulants that may trigger a rapid heartbeat. Ask your health care provider which medications you need to avoid.

* Limit caffeine. If you drink caffeinated beverages, do so in moderation (no more than one to two beverages daily).

* Manage stress. Find ways to help reduce emotional stress. Getting more exercise, practicing mindfulness and connecting with others in support groups are some ways to reduce stress.

* Go to scheduled checkups. Have regular physical exams and report any changes in your heartbeat to your health care provider. If your symptoms change or get worse or you develop new ones, tell your health care provider immediately.

(The SA node, also known as the sinus node, is a crescent-shaped cluster of myocytes divided by connective tissue, spreading over a few square millimeters. It is located at the junction of the crista terminalis in the upper wall of the right atrium and the opening of the superior vena cava.

The sinoatrial node (also known as the sinuatrial node, SA node or sinus node) is an oval shaped region of special cardiac muscle in the upper back wall of the right atrium made up of cells known as pacemaker cells. The sinus node is approximately 15 mm long, 3 mm wide, and 1 mm thick, located directly below and to the side of the superior vena cava.

These cells can produce an electrical impulse known as a cardiac action potential that travels through the electrical conduction system of the heart, causing it to contract. In a healthy heart, the SA node continuously produces action potentials, setting the rhythm of the heart (sinus rhythm), and so is known as the heart's natural pacemaker. The rate of action potentials produced (and therefore the heart rate) is influenced by the nerves that supply it.)



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Online

#1736 2023-04-15 01:52:46

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,463

Re: Miscellany

1639) Tetanus vaccine

Gist

Babies and children younger than 7 years old receive DTaP or DT, while older children and adults receive Tdap and Td. CDC recommends tetanus vaccination for all babies and children, preteens and teens, and adults. Talk with your or your child's doctor if you have questions about tetanus vaccines.

Details

Tetanus vaccine, also known as tetanus toxoid (TT), is a toxoid vaccine used to prevent tetanus. During childhood, five doses are recommended, with a sixth given during adolescence.

After three doses, almost everyone is initially immune, but additional doses every ten years are recommended to maintain immunity. A booster shot should be given within 48 hours of an injury to people whose immunization is out of date.

Confirming that pregnant women are up to date on tetanus immunization during each pregnancy can prevent both maternal and neonatal tetanus. The vaccine is very safe, including during pregnancy and in those with HIV/AIDS.

Redness and pain at the site of injection occur in between 25% and 85% of people. Fever, feeling tired, and minor muscle pain occurs in less than 10% of people. Severe allergic reactions occur in less than one in 100,000 people.

A number of vaccine combinations include the tetanus vaccine, such as DTaP and Tdap, which contain diphtheria, tetanus, and pertussis vaccines, and DT and Td, which contain diphtheria and tetanus vaccines. DTaP and DT are given to children less than seven years old, while Tdap and Td are given to those seven years old and older. The lowercase d and p denote lower strengths of diphtheria and pertussis vaccines.

Tetanus antiserum was developed in 1890, with its protective effects lasting a few weeks. The tetanus toxoid vaccine was developed in 1924, and came into common use for soldiers in World War II. Its use resulted in a 95% decrease in the rate of tetanus. It is on the World Health Organization's List of Essential Medicines.

Medical uses:

Effectiveness

Following vaccination, 95% of people are protected from diphtheria, 80% to 85% from pertussis, and 100% from tetanus. Globally, deaths from tetanus in newborns decreased from 787,000 in 1988 to 58,000 in 2010, and to 34,000 deaths in 2015 (a 96% decrease from 1988).

In the 1940s, before the vaccine, there were about 550 cases of tetanus per year in the United States, which has decreased to about 30 cases per year in the 2000s. Nearly all cases are among those who have never received a vaccine, or adults who have not stayed up to date on their 10-year booster shots.

Pregnancy

Guidelines on prenatal care in the United States specify that women should receive a dose of the Tdap vaccine during each pregnancy, preferably between weeks 27 and 36, to allow antibody transfer to the fetus. All postpartum women who have not previously received the Tdap vaccine are recommended to get it prior to discharge after delivery. It is recommended for pregnant women who have never received the tetanus vaccine (i.e., neither DTP or DTaP, nor DT as a child or Td or TT as an adult) to receive a series of three Td vaccinations starting during pregnancy to ensure protection against maternal and neonatal tetanus. In such cases, Tdap is recommended to be substituted for one dose of Td, again preferably between 27 and 36 weeks of gestation, and then the series completed with Td.

Specific types

The first vaccine is given in infancy. The baby is injected with the DTaP vaccine, which is three inactive toxins in one injection. DTaP protects against diphtheria, pertussis, and tetanus. This vaccine is safer than the previously used DTP. Another option is DT, which is a combination of diphtheria and tetanus vaccines. This is given as an alternative to infants who have conflicts with the DTaP vaccine. Quadrivalent, pentavalent, and hexavalent formulations contain DTaP with one or more of the additional vaccines: inactivated polio virus vaccine (IPV), Haemophilus influenzae type b conjugate, Hepatitis B, with the availability varying in different countries.

For the every ten-year booster Td or Tdap may be used, though Tdap is more expensive.

Schedule

Because DTaP and DT are administered to children less than a year old, the recommended location for injection is the anterolateral thigh muscle. However, these vaccines can be injected into the deltoid muscle if necessary.

The World Health Organization (WHO) recommends six doses in childhood starting at six weeks of age. Four doses of DTaP are to be given in early childhood. The first dose should be around two months of age, the second at four months, the third at six, and the fourth from fifteen to eighteen months of age. There is a recommended fifth dose to be administered to four- to six-year-olds.

Td and Tdap are for older children, adolescents, and adults and can be injected into the deltoid muscle. These are boosters and are recommended every ten years. It is safe to have shorter intervals between a single dose of Tdap and a dose of the Td booster.

Additional doses

Booster shots are important because antibody production by lymphocytes does not remain at a constantly high level. After the introduction of the vaccine, when lymphocyte production is high, the activity of these white blood cells gradually begins to decline. The decline in activity of the T-helper cells means that a booster is needed to keep the white blood cells active.

Td and Tdap are the booster shots given every ten years to maintain immunity for adults nineteen years of age to sixty-five years of age.

Tdap is given as a single, one-time dose that includes the tetanus, diphtheria, and acellular pertussis vaccinations. It should not be administered to those who are under the age of eleven or over the age of sixty-five.

Td is the booster shot given to people over the age of seven and includes the tetanus and diphtheria toxoids. However, Td has less of the diphtheria toxoid, which is why the "d" is lowercase and the "T" is capitalized.

In 2020, the US Centers for Disease Control and Prevention (CDC) Advisory Committee on Immunization Practices (ACIP) recommended that either tetanus and diphtheria toxoids (Td) vaccine or Tdap to be used for the decennial Td booster, tetanus prevention during wound management, and for additional required doses in the catch-up immunization schedule if a person has received at least one Tdap dose.

Side effects

Common side effects of the tetanus vaccine include fever, redness, and swelling with soreness or tenderness around the injection site (one in five people have redness or swelling). Body aches and tiredness have been reported following Tdap. Td / Tdap can cause painful swelling of the entire arm in one of 500 people. Tetanus toxoid containing vaccines (DTaP, DTP, Tdap, Td, DT) may cause brachial neuritis at a rate of one out of every 100,000 to 200,000 doses.

Mechanism of action

The type of vaccination for this disease is called artificial active immunity. This type of immunity is generated when a dead or weakened version of the disease enters the body, causing an immune response which includes the production of antibodies. This is beneficial because it means that if the disease is ever introduced into the body, the immune system will recognize the antigen and produce antibodies more rapidly.

History

The first vaccine for passive immunology was discovered by a group of German scientists under the leadership of Emil von Behring in 1890. The first inactive tetanus toxoid was discovered and produced in 1924. A more effective adsorbed version of the vaccine, created in 1938, was proven to be successful when it was used to prevent tetanus in the military during World War II. DTP (which is the combined vaccine for diphtheria, tetanus, and pertussis) was first used in 1948, and was continued until 1991, when it was replaced with an acellular form of the pertussis vaccine due to safety concerns. Half of those who received the DTP vaccine had redness, swelling, and pain around the injection site, which convinced researchers to find a replacement vaccine.

Two new vaccines were launched in 1992. These combined tetanus and diphtheria with acellular pertussis (TDaP or DTaP), which could be given to adolescents and adults (as opposed to previously when the vaccine was only given to children).



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Online

#1737 2023-04-15 13:36:57

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,463

Re: Miscellany

1640) Brute force attack

Gist

A brute force attack is a hacking method that uses trial and error to crack passwords, login credentials, and encryption keys. It is a simple yet reliable tactic for gaining unauthorized access to individual accounts and organizations' systems and networks.

Summary

What is a brute-force attack?

A brute-force attack is a trial-and-error method used by application programs to decode login information and encryption keys to use them to gain unauthorized access to systems. Using brute force is an exhaustive effort rather than employing intellectual strategies.

Just as a criminal might break into and crack a safe by trying many possible combinations, a brute-force attack of applications tries all possible combinations of legal characters in a sequence. Cybercriminals typically use a brute-force attack to obtain access to a website, account or network. They may then install malware, shut down web applications or conduct data breaches.

A simple brute-force attack commonly uses automated tools to guess all possible passwords until the correct input is identified. This is an old but still effective attack method for cracking common passwords.

How long a brute-force attack lasts can vary. Brute-forcing can break weak passwords in a matter of seconds. Strong passwords can typically take hours or days.

Organizations can use complex password combinations to extend the attack time, buying time to respond to and thwart the cyber attack.

What are the different types of brute-force attacks?

Different types of brute-force attacks exist, such as the following:

* Credential stuffing occurs after a user account has been compromised and the attacker tries the username and password combination across multiple systems.
* A reverse brute-force attack begins with the attacker using a common password -- or already knowing a password -- against multiple usernames or encrypted files to gain network and data access. The hacker will then follow the same algorithm as a typical brute-force attack to find the correct username.
* A dictionary attack is another type of brute-force attack, where all words in a dictionary are tested to find a password. Attackers can augment words with numbers, characters and more to crack longer passwords.

Additional forms of brute-force attack may first try the most commonly used passwords, such as "password," "12345678" (or a similar numerical sequence), and "qwerty," before trying other candidates; a minimal sketch of this kind of wordlist-driven guessing follows.
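
As a toy illustration of the wordlist-style guessing described above, the sketch below hashes each candidate from a small, made-up wordlist and compares it with a target hash. The wordlist, the target password, and the function name are hypothetical choices for this example, not part of any real attack tool, and the hash is deliberately unsalted to keep the demonstration short.

import hashlib

# Hypothetical target: an unsalted SHA-256 hash of an unknown password.
target_hash = hashlib.sha256(b"qwerty").hexdigest()

# A tiny, made-up wordlist; real attacks use lists with millions of entries.
wordlist = ["password", "12345678", "letmein", "dragon", "qwerty"]

def dictionary_attack(target, candidates):
    # Hash each candidate and report the first one that matches the target.
    for word in candidates:
        if hashlib.sha256(word.encode()).hexdigest() == target:
            return word
    return None

print(dictionary_attack(target_hash, wordlist))  # prints: qwerty

Each extra candidate costs one hash computation, which is why defenders rely on slow, salted hash functions and login throttling rather than on the secrecy of the hashing scheme.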

What is the best way to protect against brute-force attacks?

Organizations can strengthen cybersecurity against brute-force attacks by using a combination of strategies, including the following:

* Increasing password complexity. This extends the time required to crack a password. Implement password policy rules, such as minimum passphrase length and compulsory use of special characters.
* Limiting failed login attempts. Protect systems and networks by implementing rules that lock a user out for a specified amount of time after repeat login attempts.
* Encrypting and hashing. 256-bit encryption and password hashing exponentially increase the time and computing power required for a brute-force attack. In password hashing, each password is combined with a random salt before being hashed and stored, so identical passwords produce different hash values (a minimal salted-hashing sketch appears after this list).
* Implementing CAPTCHAs. These prevent the use of brute-force attacking tools, like John the Ripper, while still keeping networks, systems and websites accessible for humans.
* Enacting two-factor authentication. This is a type of multifactor authentication that adds an additional layer of login security by requiring two forms of authentication -- as an example, to sign in to a new Apple device, users need to put in their Apple ID along with a six-digit code that is displayed on another one of their devices previously marked as trusted.

A good way to secure against brute-force attacks is to use all or a combination of the above strategies.
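
The sketch below illustrates the salted-hashing idea from the list above using Python's standard library. The iteration count, salt length, and function names are illustrative assumptions, not a recommended production configuration; the point is that the salt makes identical passwords hash differently and the iteration count makes every guess expensive.

import hashlib
import hmac
import os

ITERATIONS = 600_000          # illustrative work factor; tune for your hardware

def hash_password(password):
    # A fresh random salt per password means identical passwords store differently.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)   # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("12345678", salt, digest))                      # False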

How can brute-force attack tools improve cybersecurity?

Brute-force attack tools are sometimes used to test network security. Some common ones are the following:

* Aircrack-ng, available for Windows, Linux, iOS and Android, uses a collection of widely used passwords to attack wireless networks.
* Hashcat can be used to strength-test passwords on Windows, Linux and macOS against brute-force and rule-based attacks.
* L0phtCrack is used to test Windows systems for vulnerability to rainbow table attacks. The tool is no longer supported; as of summer 2021, its new owners were exploring open sourcing it, among other options for the software.
* John the Ripper is a free, open source tool for implementing brute-force and dictionary attacks. It is often used by organizations to detect weak passwords and improve network security.

What are examples of brute-force attacks?

* In 2009, attackers targeted Yahoo accounts using automated password-cracking scripts against a Yahoo web services-based authentication application thought to be used by internet service providers and third-party web applications.
* In 2015, threat actors breached nearly 20,000 accounts by making millions of automated brute-force attempts to access Dunkin's mobile app rewards program for DD Perks.
* In 2017, cybersecurity criminals used brute-force attacks to access the U.K. and Scottish Parliament internal networks.
* In 2018, brute-force attackers cracked passwords and sensitive information of millions of Cathay Pacific airline passengers.
* In 2018, it became known that a Firefox bug had exposed the browser's master password to brute-force attacks because of weak Secure Hash Algorithm 1 (SHA-1) hashing, an issue left unfixed for almost nine years.
* In 2021, the National Security Agency warned of brute-force password attacks being launched from a specially crafted Kubernetes cluster by a unit within Russia's foreign intelligence agency.

Details

In cryptography, a brute-force attack consists of an attacker submitting many passwords or passphrases with the hope of eventually guessing correctly. The attacker systematically checks all possible passwords and passphrases until the correct one is found. Alternatively, the attacker can attempt to guess the key which is typically created from the password using a key derivation function. This is known as an exhaustive key search.

A brute-force attack is a cryptanalytic attack that can, in theory, be used to attempt to decrypt any encrypted data (except for data encrypted in an information-theoretically secure manner). Such an attack might be used when it is not possible to take advantage of other weaknesses in an encryption system (if any exist) that would make the task easier.

When password-guessing, this method is very fast when used to check all short passwords, but for longer passwords other methods such as the dictionary attack are used because a brute-force search takes too long. Longer passwords, passphrases and keys have more possible values, making them exponentially more difficult to crack than shorter ones.

Brute-force attacks can be made less effective by obfuscating the data to be encoded, making it more difficult for an attacker to recognize when the code has been cracked, or by making the attacker do more work to test each guess. One of the measures of the strength of an encryption system is how long it would theoretically take an attacker to mount a successful brute-force attack against it.

Brute-force attacks are an application of brute-force search, the general problem-solving technique of enumerating all candidates and checking each one. The word 'hammering' is sometimes used to describe a brute-force attack, with 'anti-hammering' for countermeasures.

Basic concept

Brute-force attacks work by calculating every possible combination that could make up a password and testing it to see if it is the correct password. As the password's length increases, the amount of time, on average, to find the correct password increases exponentially.
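
A minimal sketch of this exhaustive search, assuming a toy lowercase-only target chosen purely for illustration: every combination of letters is generated in order of increasing length and compared with the target, and the attempt counter shows how quickly the work grows with each extra character.

import itertools
import string

def brute_force(target, alphabet=string.ascii_lowercase, max_len=4):
    # Try every combination of `alphabet` up to `max_len` characters long.
    attempts = 0
    for length in range(1, max_len + 1):
        for combo in itertools.product(alphabet, repeat=length):
            attempts += 1
            if "".join(combo) == target:
                return "".join(combo), attempts
    return None, attempts

# 26 + 26^2 + 26^3 + 26^4 = 475,254 candidates at most for four lowercase
# letters; each additional character multiplies the search space by 26.
print(brute_force("code"))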

Theoretical limits

The resources required for a brute-force attack grow exponentially with increasing key size, not linearly. Although U.S. export regulations historically restricted key lengths to 56-bit symmetric keys (e.g. Data Encryption Standard), these restrictions are no longer in place, so modern symmetric algorithms typically use computationally stronger 128- to 256-bit keys.

There is a physical argument that a 128-bit symmetric key is computationally secure against brute-force attack. The Landauer limit implied by the laws of physics sets a lower limit on the energy required to perform a computation of kT · ln 2 per bit erased in a computation, where T is the temperature of the computing device in kelvins, k is the Boltzmann constant, and the natural logarithm of 2 is about 0.693 (0.6931471805599453). No irreversible computing device can use less energy than this, even in principle. Thus, simply flipping through the possible values for a 128-bit symmetric key (ignoring doing the actual computing to check each one) would, theoretically, require 2^128 − 1 bit flips on a conventional processor. If it is assumed that the calculation occurs near room temperature (≈300 K), the Von Neumann-Landauer limit can be applied to estimate the energy required as ≈ 10^18 joules, which is equivalent to consuming 30 gigawatts of power for one year. This is equal to 30×10^9 W × 365×24×3600 s = 9.46×10^17 J or 262.7 TWh (about 0.1% of the yearly world energy production). The full actual computation – checking each key to see if a solution has been found – would consume many times this amount. Furthermore, this is simply the energy requirement for cycling through the key space; the actual time it takes to flip each bit is not considered, which is certainly greater than 0.
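
A quick back-of-the-envelope check of those figures, with room temperature and the Boltzmann constant as the only inputs (all values approximate):

import math

k = 1.380649e-23                    # Boltzmann constant, J/K
T = 300                             # assumed device temperature, kelvins
per_bit = k * T * math.log(2)       # Landauer bound: ~2.87e-21 J per bit flip

total_energy = per_bit * (2**128 - 1)     # ~9.8e17 J, on the order of 10^18 J
gigawatt_year = 30e9 * 365 * 24 * 3600    # 30 GW for one year: ~9.46e17 J

print(f"{total_energy:.2e} J vs {gigawatt_year:.2e} J")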

However, this argument assumes that the register values are changed using conventional set and clear operations which inevitably generate entropy. It has been shown that computational hardware can be designed not to encounter this theoretical obstruction, though no such computers are known to have been constructed.

As commercial successors of governmental ASIC solutions have become available, also known as custom hardware attacks, two emerging technologies have proven their capability in the brute-force attack of certain ciphers. One is modern graphics processing unit (GPU) technology, the other is field-programmable gate array (FPGA) technology. GPUs benefit from their wide availability and price-performance advantage, FPGAs from their energy efficiency per cryptographic operation. Both technologies bring the benefits of parallel processing to brute-force attacks: GPUs offer some hundreds of processing units and FPGAs some thousands, making them much better suited to cracking passwords than conventional processors. Various publications in the field of cryptanalysis have demonstrated the energy efficiency of today's FPGA technology; for example, the COPACOBANA FPGA cluster computer consumes the same energy as a single PC (600 W) but performs like 2,500 PCs for certain algorithms. A number of firms provide hardware-based FPGA cryptographic analysis solutions, from a single FPGA PCI Express card up to dedicated FPGA computers. WPA and WPA2 encryption have successfully been brute-force attacked with the workload reduced by a factor of 50 compared to conventional CPUs using GPUs, and by some hundreds using FPGAs.

Advanced Encryption Standard (AES) permits the use of 256-bit keys. Breaking a symmetric 256-bit key by brute force requires 2^128 times more computational power than a 128-bit key. One of the fastest supercomputers in 2019 has a speed of 100 petaFLOPS, which could theoretically check 100 million million (10^14) AES keys per second (assuming 1000 operations per check), but would still require 3.67×10^55 years to exhaust the 256-bit key space.
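
The 3.67×10^55-year figure follows directly from dividing the key space by the assumed checking rate, as this short calculation (using the same assumptions as the paragraph above) shows:

key_space = 2**256                 # number of possible 256-bit keys, ~1.16e77
rate = 1e14                        # assumed keys checked per second (10^14)
seconds_per_year = 365 * 24 * 3600

years = key_space / rate / seconds_per_year
print(f"{years:.2e} years")        # ~3.67e+55 years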

An underlying assumption of a brute-force attack is that the complete key space was used to generate keys, something that relies on an effective random number generator, and that there are no defects in the algorithm or its implementation. For example, a number of systems that were originally thought to be impossible to crack by brute force have nevertheless been cracked because the key space to search through was found to be much smaller than originally thought, because of a lack of entropy in their pseudorandom number generators. These include Netscape's implementation of SSL (famously cracked by Ian Goldberg and David Wagner in 1995) and a Debian/Ubuntu edition of OpenSSL discovered in 2008 to be flawed. A similar lack of implemented entropy led to the breaking of Enigma's code.

Credential recycling

Credential recycling refers to the hacking practice of re-using username and password combinations gathered in previous brute-force attacks. A special form of credential recycling is pass the hash, where unsalted hashed credentials are stolen and re-used without first being brute forced.

Unbreakable codes

Certain types of encryption, by their mathematical properties, cannot be defeated by brute force. An example of this is one-time pad cryptography, where every cleartext bit has a corresponding key from a truly random sequence of key bits. A 140 character one-time-pad-encoded string subjected to a brute-force attack would eventually reveal every 140 character string possible, including the correct answer – but of all the answers given, there would be no way of knowing which was the correct one. Defeating such a system, as was done by the Venona project, generally relies not on pure cryptography, but upon mistakes in its implementation: the key pads not being truly random, intercepted keypads, operators making mistakes – or other errors.
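
A small demonstration of why brute force gains nothing against a one-time pad (the messages and variable names below are invented for illustration): for any ciphertext, every plaintext of the same length corresponds to some key, so a "successful" guess is indistinguishable from a wrong one.

import os

def xor_bytes(a, b):
    # XOR two equal-length byte strings; used for both encryption and decryption.
    return bytes(x ^ y for x, y in zip(a, b))

plaintext = b"ATTACK AT DAWN"
key = os.urandom(len(plaintext))          # truly random, used only once
ciphertext = xor_bytes(plaintext, key)

# An attacker can construct a key that "decrypts" the ciphertext to any other
# message of the same length, so no decryption can be singled out as correct.
decoy = b"RETREAT AT TEA"
decoy_key = xor_bytes(ciphertext, decoy)

print(xor_bytes(ciphertext, key))         # b'ATTACK AT DAWN'
print(xor_bytes(ciphertext, decoy_key))   # b'RETREAT AT TEA'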

Countermeasures

In the case of an offline attack, where the attacker has gained access to the encrypted material, key combinations can be tried without risk of discovery or interference. In the case of online attacks, database and directory administrators can deploy countermeasures such as limiting the number of attempts that a password can be tried, introducing time delays between successive attempts, increasing the answer's complexity (e.g., requiring a CAPTCHA answer or employing multi-factor authentication), and/or locking accounts out after unsuccessful login attempts. Website administrators may prevent a particular IP address from trying more than a predetermined number of password attempts against any account on the site.
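
As a sketch of the account-lockout countermeasure just described (the thresholds, in-memory storage, and function names are illustrative assumptions, not a complete authentication system):

import time

MAX_FAILURES = 5                  # illustrative threshold
LOCKOUT_SECONDS = 15 * 60         # illustrative lockout window
_failures = {}                    # username -> (failure_count, last_failure_time)

def attempt_allowed(username):
    # Refuse further attempts while the account is locked out.
    count, last = _failures.get(username, (0, 0.0))
    if count >= MAX_FAILURES and time.time() - last < LOCKOUT_SECONDS:
        return False
    return True

def record_failure(username):
    count, _ = _failures.get(username, (0, 0.0))
    _failures[username] = (count + 1, time.time())

def record_success(username):
    _failures.pop(username, None)  # reset the counter after a correct login

Per-IP throttling, time delays, and CAPTCHAs work the same way conceptually: they cap how many guesses an online attacker can make per unit time.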

Reverse brute-force attack

In a reverse brute-force attack, a single (usually common) password is tested against multiple usernames or encrypted files. The process may be repeated for a select few passwords. In such a strategy, the attacker is not targeting a specific user.

In 2021, hackers gained access to T-Mobile testing environments and then used brute-force attacks and other means to hack into other IT servers, including those that contained customer data.

in-post-01-guest-networks-simplified.png


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Online

#1738 2023-04-16 19:34:26

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,463

Re: Miscellany

1641) Antihistamine

Gist

Antihistamines are medicines often used to relieve symptoms of allergies, such as hay fever, hives, conjunctivitis and reactions to insect bites or stings. They're also sometimes used to prevent motion sickness and as a short-term treatment for insomnia.

Summary

Antihistamines are drugs which treat allergic rhinitis, common cold, influenza, and other allergies. Typically, people take antihistamines as an inexpensive, generic (not patented) drug that can be bought without a prescription and provides relief from nasal congestion, sneezing, or hives caused by pollen, dust mites, or animal allergy with few side effects. Antihistamines are usually for short-term treatment. Chronic allergies increase the risk of health problems which antihistamines might not treat, including asthma, sinusitis, and lower respiratory tract infection. Consultation of a medical professional is recommended for those who intend to take antihistamines for longer-term use.

Although people typically use the word "antihistamine" to describe drugs for treating allergies, doctors and scientists use the term to describe a class of drug that opposes the activity of histamine receptors in the body. In this sense of the word, antihistamines are subclassified according to the histamine receptor that they act upon. The two largest classes of antihistamines are H1-antihistamines and H2-antihistamines.

H1-antihistamines work by binding to histamine H1 receptors in mast cells, smooth muscle, and endothelium in the body as well as in the tuberomammillary nucleus in the brain. Antihistamines that target the histamine H1-receptor are used to treat allergic reactions in the nose (e.g., itching, runny nose, and sneezing). In addition, they may be used to treat insomnia, motion sickness, or vertigo caused by problems with the inner ear. H2-antihistamines bind to histamine H2 receptors in the upper gastrointestinal tract, primarily in the stomach. Antihistamines that target the histamine H2-receptor are used to treat gastric acid conditions (e.g., peptic ulcers and acid reflux). Other antihistamines also target H3 receptors and H4 receptors.

Histamine receptors exhibit constitutive activity, so antihistamines can function as either a neutral receptor antagonist or an inverse agonist at histamine receptors. Only a few currently marketed H1-antihistamines are known to function as inverse agonists.

Medical uses

Histamine makes blood vessels more permeable (vascular permeability), causing fluid to escape from capillaries into tissues, which leads to the classic symptoms of an allergic reaction — a runny nose and watery eyes. Histamine also promotes angiogenesis.

Antihistamines suppress the histamine-induced wheal response (swelling) and flare response (vasodilation) by blocking the binding of histamine to its receptors or reducing histamine receptor activity on nerves, vascular smooth muscle, glandular cells, endothelium, and mast cells. Antihistamines can also help correct Eustachian Tube dysfunction, thereby helping correct problems such as muffled hearing, fullness in the ear and even tinnitus.

Itching, sneezing, and inflammatory responses are suppressed by antihistamines that act on H1-receptors. In 2014, antihistamines such as desloratadine were found to be effective to complement standardized treatment of acne due to their anti-inflammatory properties and their ability to suppress sebum production.

Types

H1-antihistamines refer to compounds that inhibit the activity of the H1 receptor. Since the H1 receptor exhibits constitutive activity, H1-antihistamines can be either neutral receptor antagonists or inverse agonists. Normally, histamine binds to the H1 receptor and heightens the receptor's activity; the receptor antagonists work by binding to the receptor and blocking the activation of the receptor by histamine; by comparison, the inverse agonists bind to the receptor and both block the binding of histamine, and reduce its constitutive activity, an effect which is opposite to histamine's. Most antihistamines are inverse agonists at the H1 receptor, but it was previously thought that they were antagonists.

Clinically, H1-antihistamines are used to treat allergic reactions and mast cell-related disorders. Sedation is a common side effect of H1-antihistamines that readily cross the blood–brain barrier; some of these drugs, such as diphenhydramine and doxylamine, may therefore be used to treat insomnia. H1-antihistamines can also reduce inflammation, since the expression of NF-κB, the transcription factor that regulates inflammatory processes, is promoted by both the receptor's constitutive activity and agonist (i.e., histamine) binding at the H1 receptor.

A combination of these effects, and in some cases metabolic ones as well, leads to most first-generation antihistamines having analgesic-sparing (potentiating) effects on opioid analgesics and, to some extent, on non-opioid ones as well. The most common antihistamines utilized for this purpose include hydroxyzine, promethazine (enzyme induction especially helps with codeine and similar prodrug opioids), phenyltoloxamine, orphenadrine, and tripelennamine; some may also have intrinsic analgesic properties of their own, orphenadrine being an example.

Second-generation antihistamines cross the blood–brain barrier to a much lesser extent than first-generation antihistamines. They minimize sedative effects due to their focused action on peripheral histamine receptors. However, at high doses second-generation antihistamines begin to act on the central nervous system and can thus induce drowsiness when ingested in larger quantities. Additionally, some second-generation antihistamines, notably cetirizine, can interact with CNS psychoactive drugs such as bupropion and benzodiazepines.

Details

By blocking the effects of histamines, this class of drugs can treat allergies and many other ailments.

An antihistamine is a type of medicine used to treat common allergy symptoms, such as sneezing, watery eyes, hives, and a runny nose.

According to the American College of Allergy, Asthma, & Immunology, nasal allergies affect about 50 million people in the United States.

Certain antihistamines are also sometimes used to treat motion sickness, nausea, vomiting, dizziness, cough, sleep problems, anxiety, and Parkinson's disease.

The drugs work by blocking the effects of histamine, a substance in the body that can cause allergy symptoms.

Antihistamines come in different forms, such as capsules, tablets, liquids, eye drops, injections, and nasal sprays.

They can be purchased over the counter (OTC) or obtained with a prescription.

Some antihistamines are taken daily, while others are used only when symptoms occur.

Types of Antihistamines

Some common antihistamines include:

* Allegra (fexofenadine)
* Astelin and Astepro (azelastine) nasal sprays
* Atarax and Vistaril (hydroxyzine)
* Benadryl (diphenhydramine)
* Chlor-Trimeton (chlorpheniramine)
* Clarinex (desloratadine)
* Claritin and Alavert (loratadine)
* Cyproheptadine
* Dimetane (brompheniramine)
* Emadine (emedastine) eye drops
* Livostin (levocabastine) eye drops
* Optivar (azelastine) eye drops
* Palgic (carbinoxamine)
* Xyzal (levocetirizine)
* Tavist (clemastine)
* Zyrtec (cetirizine)

Antihistamine Side Effects

Common side effects of antihistamines include:

* Drowsiness or sleepiness
* Dizziness
* Dry mouth, nose, or throat
* Increased appetite and weight gain
* Upset stomach
* Thickening of mucus
* Changes in vision
* Feeling nervous, excited, or irritable

Antihistamine Precautions

Before taking an antihistamine, tell your doctor about all medical conditions you have, especially:

* Diabetes
* An overactive thyroid (hyperthyroidism)
* Heart disease
* High blood pressure
* Glaucoma
* Epilepsy (seizure disorder)
* An enlarged prostate or trouble urinating

Don't drive or perform activities that require alertness until you know how the antihistamine you're taking affects you.

Follow the instructions on your prescription or package label carefully when taking an antihistamine. Don't take more of the medicine than is recommended.

Tell your doctor about all prescription, non-prescription, illegal, recreational, herbal, nutritional, or dietary drugs you're taking before starting on an antihistamine.

You may need to avoid grapefruit and grapefruit juice while taking an antihistamine, as they can affect how these drugs work in your body. Talk to your doctor if this is a concern.

Antihistamines and Alcohol

Alcohol may worsen certain side effects of antihistamines.

Avoid drinking alcohol while taking these medicines.

Antihistamines and Pregnancy

Tell your doctor if you're pregnant, or might become pregnant, while using an antihistamine.

You'll have to discuss the risks and benefits of taking the medicine during pregnancy.

Also, talk to your healthcare provider before taking antihistamines if you're breastfeeding.


#1739 2023-04-17 14:26:14

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,463

Re: Miscellany

1642) Bank

Summary

A bank is a financial institution that accepts deposits from the public and creates a demand deposit while simultaneously making loans. Lending activities can be directly performed by the bank or indirectly through capital markets.

Because banks play an important role in financial stability and the economy of a country, most jurisdictions exercise a high degree of regulation over banks. Most countries have institutionalized a system known as fractional-reserve banking, under which banks hold liquid assets equal to only a portion of their current liabilities. In addition to other regulations intended to ensure liquidity, banks are generally subject to minimum capital requirements based on an international set of capital standards, the Basel Accords.

Banking in its modern sense evolved in the fourteenth century in the prosperous cities of Renaissance Italy but in many ways functioned as a continuation of ideas and concepts of credit and lending that had their roots in the ancient world. In the history of banking, a number of banking dynasties – notably, the Medicis, the Fuggers, the Welsers, the Berenbergs, and the Rothschilds – have played a central role over many centuries. The oldest existing retail bank is Banca Monte dei Paschi di Siena (founded in 1472), while the oldest existing merchant bank is Berenberg Bank (founded in 1590).

Details

A bank is an institution that deals in money and its substitutes and provides other money-related services. In its role as a financial intermediary, a bank accepts deposits and makes loans. It derives a profit from the difference between the costs (including interest payments) of attracting and servicing deposits and the income it receives through interest charged to borrowers or earned through securities. Many banks provide related services such as financial management and products such as mutual funds and credit cards. Some bank liabilities also serve as money—that is, as generally accepted means of payment and exchange.

Principles of banking

The central practice of banking consists of borrowing and lending. As in other businesses, operations must be based on capital, but banks employ comparatively little of their own capital in relation to the total volume of their transactions. Instead banks use the funds obtained through deposits and, as a precaution, maintain capital and reserve accounts to protect against losses on their loans and investments and to provide for unanticipated cash withdrawals. Genuine banks are distinguished from other kinds of financial intermediaries by the readily transferable or “spendable” nature of at least some of their liabilities (also known as IOUs), which allows those liabilities to serve as means of exchange—that is, as money.

Types of banks

The principal types of banks in the modern industrial world are commercial banks, which are typically private-sector profit-oriented firms, and central banks, which are public-sector institutions. Commercial banks accept deposits from the general public and make various kinds of loans (including commercial, consumer, and real-estate loans) to individuals and businesses and, in some instances, to governments. Central banks, in contrast, deal mainly with their sponsoring national governments, with commercial banks, and with each other. Besides accepting deposits from and extending credit to these clients, central banks also issue paper currency and are responsible for regulating commercial banks and national money stocks.

The term commercial bank covers institutions ranging from small neighbourhood banks to huge metropolitan institutions or multinational organizations with hundreds of branches. Although U.S. banking regulations limited the development of nationwide bank chains through most of the 20th century, legislation in 1994 easing these limitations led American commercial banks to organize along the lines of their European counterparts, which typically operated offices and bank branches in many regions.

In the United States a distinction exists between commercial banks and so-called thrift institutions, which include savings and loan associations (S&Ls), credit unions, and savings banks. Like commercial banks, thrift institutions accept deposits and fund loans, but unlike commercial banks, thrifts have traditionally focused on residential mortgage lending rather than commercial lending. The growth of a separate thrift industry in the United States was largely fostered by regulations unique to that country; these banks therefore lack a counterpart elsewhere in the world. Moreover, their influence has waned: the pervasive deregulation of American commercial banks, which originated in the wake of S&L failures during the late 1980s, weakened the competitiveness of such banks and left the future of the U.S. thrift industry in doubt.

While these and other institutions are often called banks, they do not perform all the banking functions described above and are best classified as financial intermediaries. Institutions that fall into this category include savings banks, investment banks (which deal primarily with large business clients and are mainly concerned with underwriting and distributing new issues of corporate bonds and equity shares), trust companies, finance companies (which specialize in making risky loans and do not accept deposits), insurance companies, mutual fund companies, and home-loan banks or savings and loan associations. One particular type of commercial bank, the merchant bank (known as an investment bank in the United States), engages in investment banking activities such as advising on mergers and acquisitions. In some countries, including Germany, Switzerland, France, and Italy, so-called universal banks supply both traditional (or “narrow”) commercial banking services and various nonbank financial services such as securities underwriting and insurance. Elsewhere, regulations, long-established custom, or a combination of both have limited the extent to which commercial banks have taken part in the provision of nonbank financial services.

Bank money

The development of trade and commerce drove the need for readily exchangeable forms of money. The concept of bank money originated with the Amsterdamsche Wisselbank (the Bank of Amsterdam), which was established in 1609 during Amsterdam’s ascent as the largest and most prosperous city in Europe. As an exchange bank, it permitted individuals to bring money or bullion for deposit and to withdraw the money or the worth of the bullion. The original ordinance that established the bank further required that all bills of 600 gulden or upward should be paid through the bank—in other words, by the transfer of deposits or credits at the bank. These transfers later came to be known as “bank money.” The charge for making the transfers represented the bank’s sole source of income.

In contrast to the earliest forms of money, which were commodity moneys based on items such as seashells, tobacco, and precious-metal coin, practically all contemporary money takes the form of bank money, which consists of checks or drafts that function as commercial or central bank IOUs. Commercial bank money consists mainly of deposit balances that can be transferred either by means of paper orders (e.g., checks) or electronically (e.g., debit cards, wire transfers, and Internet payments). Some electronic-payment systems are equipped to handle transactions in a number of currencies.

Circulating “banknotes,” yet another kind of commercial bank money, are direct claims against the issuing institution (rather than claims to any specific depositor’s account balance). They function as promissory notes issued by a bank and are payable to a bearer on demand without interest, which makes them roughly equivalent to money. Although their use was widespread before the 20th century, banknotes have been replaced largely by transferable bank deposits. In the early 21st century only a handful of commercial banks, including ones located in Northern Ireland, Scotland, and Hong Kong, issued banknotes. For the most part, contemporary paper currency consists of fiat money (from the medieval Latin term meaning “let it be done”), which is issued by central banks or other public monetary authorities.

All past and present forms of commercial bank money share the characteristic of being redeemable (that is, freely convertible at a fixed rate) in some underlying base money, such as fiat money (as is the case in contemporary banking) or a commodity money such as gold or silver coin. Bank customers are effectively guaranteed the right to seek unlimited redemptions of commercial bank money on demand (that is, without delay); any commercial bank refusing to honour the obligation to redeem its bank money is typically deemed insolvent. The same rule applies to the routine redemption requests that a bank makes, on behalf of its clients, upon another bank—as when a check drawn upon Bank A is presented to Bank B for collection.

While commercial banks remain the most important sources of convenient substitutes for base money, they are no longer exclusive suppliers of money substitutes. Money-market mutual funds and credit unions offer widely used money substitutes by permitting the persons who own shares in them to write checks from their accounts. (Money-market funds and credit unions differ from commercial banks in that they are owned by and lend only to their own depositors.) Another money substitute, traveler’s checks, resembles old-fashioned banknotes to some degree, but they must be endorsed by their users and can be used for a single transaction only, after which they are redeemed and retired.

For all the efficiencies that bank money brings to financial transactions and the marketplace, a heavy reliance upon it—and upon spendable bank deposits in particular—can expose economies to banking crises. This is because banks hold only fractional reserves of basic money, and any concerted redemption of a bank’s deposits—which could occur if the bank is suspected of insolvency—can cause it to fail. On a larger scale, any concerted redemption of a country’s bank deposits (assuming the withdrawn funds are not simply redeposited in other banks) can altogether destroy an economy’s banking system, depriving it of needed means of exchange as well as of business and consumer credit. Perhaps the most notorious example of this was the U.S. banking crisis of the early 1930s (see Banking panics and monetary contraction); a more recent example was the Asian currency crisis that originated in Thailand in 1997.

Bank loans

Bank loans, which are available to businesses of all types and sizes, represent one of the most important sources of commercial funding throughout the industrialized world. Key sources of funding for corporations include loans, stock and bond issues, and income. In the United States, for example, the funding that business enterprises obtain from banks is roughly twice the amount they receive by marketing their own bonds, and funding from bank loans is far greater still than what companies acquire by issuing shares of stock. In Germany and Japan bank loans represent an even larger share of total business funding. Smaller and more specialized sources of funding include venture capital firms and hedge funds.

Although all banks make loans, their lending practices differ, depending on the areas in which they specialize. Commercial loans, which can cover time frames ranging from a few weeks to a decade or more, are made to all kinds of businesses and represent a very important part of commercial banking worldwide. Some commercial banks devote an even greater share of their lending to real-estate financing (through mortgages and home-equity loans) or to direct consumer loans (such as personal and automobile loans). Others specialize in particular areas, such as agricultural loans or construction loans. As a general business practice, most banks do not restrict themselves to lending but acquire and hold other assets, such as government and corporate securities and foreign exchange (that is, cash or securities denominated in foreign currency units).

Historical development:

Early banking

Some authorities, relying upon a broad definition of banking that equates it with any sort of intermediation activity, trace banking as far back as ancient Mesopotamia, where temples, royal palaces, and some private houses served as storage facilities for valuable commodities such as grain, the ownership of which could be transferred by means of written receipts. There are records of loans by the temples of Babylon as early as 2000 BCE; temples were considered especially safe depositories because, as they were sacred places watched over by gods, their contents were believed to be protected from theft. Companies of traders in ancient times provided banking services that were connected with the buying and selling of goods.

Many of these early “protobanks” dealt primarily in coin and bullion, much of their business being money changing and the supplying of foreign and domestic coin of the correct weight and fineness. Full-fledged banks did not emerge until medieval times, with the formation of organizations specializing in the depositing and lending of money and the creation of generally spendable IOUs that could serve in place of coins or other commodity moneys. In Europe so-called “merchant bankers” paralleled the development of banking by offering, for a consideration, to assist merchants in making distant payments, using bills of exchange instead of actual coin. The merchant banking business arose from the fact that many merchants traded internationally, holding assets at different points along trade routes. For a certain consideration, a merchant stood prepared to accept instructions to pay money to a named party through one of his agents elsewhere; the amount of the bill of exchange would be debited by his agent to the account of the merchant banker, who would also hope to make an additional profit from exchanging one currency against another. Because there was a possibility of loss, any profit or gain was not subject to the medieval ban on usury. There were, moreover, techniques for concealing a loan by making foreign exchange available at a distance but deferring payment for it so that the interest charge could be camouflaged as a fluctuation in the exchange rate.

The earliest genuine European banks, in contrast, dealt neither in goods nor in bills of exchange but in gold and silver coins and bullion, and they emerged in response to the risks involved in storing and transporting precious metal moneys and, often, in response to the deplorable quality of available coins, which created a demand for more reliable and uniform substitutes.

In continental Europe dealers in foreign coin, or “money changers,” were among the first to offer basic banking services, while in London money “scriveners” and goldsmiths played a similar role. Money scriveners were notaries who found themselves well positioned for bringing borrowers and lenders together, while goldsmiths began their transition to banking by keeping money and valuables in safe custody for their customers. Goldsmiths also dealt in bullion and foreign exchange, acquiring and sorting coin for profit. As a means of attracting coin for sorting, they were prepared to pay a rate of interest, and it was largely in this way that they eventually began to outcompete money scriveners as deposit bankers.

Specialization

Banks in Europe from the 16th century onward could be divided into two classes: exchange banks and banks of deposit. The latter were banks that, besides receiving deposits, made loans and thus associated themselves with the trade and industries of a country. The exchange banks included, in former years, institutions such as the Bank of Hamburg and the Bank of Amsterdam. These were established to deal with foreign exchange and to facilitate trade with other countries. The others—founded at very different dates—were established as, or early became, banks of deposit, such as the Bank of England, the Bank of Venice, the Bank of Sweden, the Bank of France, the Bank of Germany, and others. Important as exchange banks were in their day, the period of their activity had generally passed by the last half of the 19th century.

In one particularly notable respect, the business carried on by the exchange banks differed from banking as generally understood at the time. Exchange banks were established for the primary purpose of turning the values with which they were entrusted into bank money—that is, into a currency that merchants accepted immediately, with no need to test the value of the coin or the bullion given to them. The value the banks provided was equal to the value they received, with the only difference being the small amount charged to their customers for performing such transactions. No exchange bank had capital of its own, nor did it require any for the performance of its business.

In every case deposit banking at first involved little more than the receipt of coins for safekeeping or warehousing, for which service depositors were required to pay a fee. By early modern times this warehousing function had given way in most cases to genuine intermediation, with deposits becoming debt, as opposed to bailment (delivery in trust) contracts, and depositors sharing in bank interest earnings instead of paying fees. (See bailment.) Concurrent with this change was the development of bank money, which had begun with transfers of deposit credits by means of oral and later written instructions to bankers and also with the endorsement and assignment of written deposit receipts; each transaction presupposed legal acknowledgement of the fungible (interchangeable) status of deposited coins. Over time, deposit transfers by means of written instructions led directly to modern checks.

The development of banknotes

Although the Bank of England is usually credited with being the source of the Western world’s first widely circulated banknotes, the Stockholms Banco (Bank of Stockholm, founded in 1656 and the predecessor of the contemporary Bank of Sweden) is known to have issued banknotes several decades before the Bank of England’s establishment in 1694, and some authorities claim that notes issued by the Casa di San Giorgio (Bank of Genoa, established in 1407), although payable only to specific persons, were made to circulate by means of repeated endorsements. In Asia paper money has a still longer history, its first documented use having been in China during the 9th century, when “flying money,” a sort of draft or bill of exchange developed by merchants, was gradually transformed into government-issued fiat money. The 12th-century Tatar war caused the government to abuse this new financial instrument, and China thereby earned credit not merely for the world’s first paper money but also for the world’s first known episode of hyperinflation. Several more such episodes caused the Chinese government to cease issuing paper currency, leaving the matter to private bankers. By the late 19th century, China had developed a unique and, according to many accounts, successful bank money system, consisting of paper notes issued by unregulated local banks and redeemable in copper coin. Yet the system was undermined in the early 20th century, first by demands made by the government upon the banks and ultimately by the decision to centralize and nationalize China’s paper currency system.

The development of bank money increased bankers’ ability to extend credit by limiting occasions when their clients would feel the need to withdraw currency. The increasingly widespread use of bank money eventually allowed bankers to exploit the law of large numbers, whereby withdrawals would be offset by new deposits. Market competition, however, prevented banks from extending credit beyond reasonable means, and each bank set aside cash reserves, not merely to cover occasional coin withdrawals but also to settle interbank accounts. Bankers generally found it to be in their interest to receive, on deposit, checks drawn upon or notes issued by rivals in good standing; it became a standard practice for such notes or checks to be cleared (that is, returned to their sources) on a routine (usually daily) basis, where the net amounts due would be settled in coin or bullion. Starting in the late 18th century, bankers found that they could further economize on cash reserves by setting up clearinghouses in major cities to manage nonlocal bank money clearings and settlements, as doing so allowed further advantage to be taken of opportunities for “netting out” offsetting items, that is, offsetting gross credits with gross debits, leaving net dues alone to be settled with specie (coin money). Clearinghouses were the precursors to contemporary institutions such as clearing banks, automated clearinghouses, and the Bank for International Settlements. Other financial innovations, such as the development of bailment and bank money, created efficiencies in transactions that complemented the process of industrialization. In fact, many economists, starting with the Scottish philosopher Adam Smith, have attributed to banks a crucial role in promoting industrialization.
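
The "netting out" that clearinghouses performed can be shown with a tiny worked example; the two banks and the figures below are hypothetical.

# Gross claims accumulated during the day between two hypothetical banks.
claims_a_on_b = 120_000   # checks drawn on Bank B and deposited at Bank A
claims_b_on_a = 95_000    # checks drawn on Bank A and deposited at Bank B

# Netting offsets gross credits against gross debits, so only the net
# balance has to be settled in specie or other base money.
net_due_to_a = claims_a_on_b - claims_b_on_a      # 25,000 owed by Bank B
gross_settlement = claims_a_on_b + claims_b_on_a  # 215,000 without netting
print(net_due_to_a, gross_settlement)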

Commercial banks

Operations and management

The essential business of banking involves granting bank deposit credits or issuing IOUs in exchange for deposits (which are claims to base money, such as coins or fiat paper money); banks then use the base money—or that part of it not needed as cash reserves—to purchase other IOUs with the goal of earning a profit on that investment. The business may be most readily understood by considering the elements of a simplified bank balance sheet, where a bank’s available resources—its “assets”—are reckoned alongside its obligations, or “liabilities.”

Assets

Bank assets consist mainly of various kinds of loans and marketable securities and of reserves of base money, which may be held either as actual central bank notes and coins or in the form of a credit (deposit) balance at the central bank. The bank’s main liabilities are its capital (including cash reserves and, often, subordinated debt) and deposits. The latter may be from domestic or foreign sources (corporations and firms, private individuals, other banks, and even governments). They may be repayable on demand (sight deposits or current accounts) or after a period of time (time, term, or fixed deposits and, occasionally, savings deposits). The bank’s assets include cash; investments or securities; loans and advances made to customers of all kinds, though primarily to corporations (including term loans and mortgages); and, finally, the bank’s premises, furniture, and fittings.

The difference between the fair market value of a bank’s assets and the book value of its outstanding liabilities represents the bank’s net worth. A bank lacking positive net worth is said to be “insolvent,” and it generally cannot remain open unless it is kept afloat by means of central bank support. At all times a bank must maintain cash balances to pay its depositors upon demand. It must also keep a proportion of its assets in forms that can readily be converted into cash. Only in this way can confidence in the banking system be maintained.
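
A stylized numerical sketch of such a balance sheet (all figures invented for illustration) makes the definition concrete: net worth is total assets less total liabilities, and insolvency corresponds to that difference falling to zero or below.

# Hypothetical, highly simplified balance sheet in millions of currency units.
assets = {"cash_reserves": 50, "securities": 200, "loans": 700, "premises": 50}
liabilities = {"demand_deposits": 600, "time_deposits": 300, "subordinated_debt": 40}

total_assets = sum(assets.values())            # 1000
total_liabilities = sum(liabilities.values())  # 940
net_worth = total_assets - total_liabilities   # 60 (the shareholders' stake)
insolvent = net_worth <= 0                     # False in this example

cash_to_deposit_ratio = assets["cash_reserves"] / (
    liabilities["demand_deposits"] + liabilities["time_deposits"]
)
print(net_worth, insolvent, round(cash_to_deposit_ratio, 3))  # 60 False 0.056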

The main resource of a modern bank is borrowed money (that is, deposits), which the bank loans out as profitably as is prudent. Banks also hold cash reserves for interbank settlements as well as to provide depositors with cash on demand, thereby maintaining a “safe” ratio of cash to deposits. The safe cash-to-assets ratio may be established by convention or by statute. If a minimum cash ratio is required by law, a portion of a bank’s assets is in effect frozen and not available to meet sudden demands for cash from the bank’s customers (though the requirement can be enforced in such a way as to allow banks to dip into required reserves on occasion—e.g., by substituting “lagged” for “contemporaneous” reserve accounting). To provide more flexibility, required ratios are frequently based on the average of cash holdings over a specified period, such as a week or a month.

Unless a bank held cash equivalent to 100 percent of its demand deposits, it could not meet the claims of depositors were they all to exercise in full and at the same time their right to demand cash. If that were a common phenomenon, deposit banking could not survive. For the most part, however, the public is prepared to leave its surplus funds on deposit with banks, confident that money will be available when needed. But there may be times when unexpected demands for cash exceed what might reasonably have been anticipated; therefore, a bank must not only hold part of its assets in cash but also must keep a proportion of the remainder in assets that can be quickly converted into cash without significant loss.

Asset management

A bank may mobilize its assets in several ways. It may demand repayment of loans, immediately or at short notice; it may sell securities; or it may borrow from the central bank, using paper representing investments or loans as security. Banks do not precipitately call in loans or sell marketable assets, because this would disrupt the delicate debtor-creditor relationship and lessen confidence, which probably would result in a run on the banks. Banks therefore maintain cash reserves and other liquid assets at a certain level or have access to a “lender of last resort,” such as a central bank. In a number of countries, commercial banks have at times been required to maintain a minimum liquid assets ratio. Among the assets of commercial banks, investments are less liquid than money-market assets. By maintaining an appropriate spread of maturities (through a combination of long-term and short-term investments), however, it is possible to ensure that a proportion of a bank’s investments will regularly approach redemption. This produces a steady flow of liquidity and thereby constitutes a secondary liquid assets reserve.

Yet this necessity—to convert a significant portion of its liabilities into cash on demand—forces banks to “borrow short and lend long.” Because most bank loans have definite maturity dates, banks must exchange IOUs that may be redeemed at any time for IOUs that will not come due until some definite future date. That makes even the most solvent banks subject to liquidity risk—that is, the risk of not having enough cash (base money) on hand to meet demands for immediate payment.

Banks manage this liquidity risk in a number of ways. One approach, known as asset management, concentrates on adjusting the composition of the bank’s assets—its portfolio of loans, securities, and cash. This approach exerts little control over the bank’s liabilities and overall size, both of which depend on the number of customers who deposit savings in the bank. In general, bank managers build a portfolio of assets capable of earning the greatest interest revenue possible while keeping risks within acceptable bounds. Bankers must also set aside cash reserves sufficient to meet routine demands (including the demand for reserves to meet minimum statutory requirements) while devoting remaining funds mainly to short-term commercial loans. The presence of many short-term loans among a bank’s assets means that some bank loans are always coming due, making it possible for a bank to meet exceptional cash withdrawals or settlement dues by refraining from renewing or replacing some maturing loans.

The practice among early bankers of focusing on short-term commercial loans, which was understandable given the assets they had to choose from, eventually became the basis for a fallacious theory known as the “real bills doctrine,” according to which there could be no risk of banks overextending themselves or generating inflation as long as they stuck to short-term lending, especially if they limited themselves to discounting commercial bills or promissory notes supposedly representing “real” goods in various stages of production. The real bills doctrine erred in treating both the total value of outstanding commercial bills and the proportion of such bills presented to banks for discounting as being values independent of banking policy (and independent of bank discount and interest rates in particular). According to the real bills doctrine, if such rates are set low enough, the volume of loans and discounts will increase while the outstanding quantity of bank money will expand; in turn, this expansion may cause the general price level to rise. As prices rise, the nominal stock of “real bills” will tend to grow as well. Inflation might therefore continue forever despite strict adherence by banks to the real bills rule.

Although the real bills doctrine continues to command a small following among some contemporary economists, by the late 19th century most bankers had abandoned the practice of limiting themselves to short-term commercial loans, preferring instead to mix such loans with higher-yielding long-term investments. This change stemmed in part from increased transparency and greater efficiency in the market for long-term securities. These improvements have made it easy for an individual bank to find buyers for such securities whenever it seeks to exchange them for cash. Banks also have made greater use of money-market assets such as treasury bills, which combine short maturities with ready marketability and are a favoured form of collateral for central bank loans.

Commercial banks in some countries, including Germany, also make long-term loans to industry (also known as commercial loans) despite the fact that such loans are neither self-liquidating (capable of generating cash) nor readily marketable. These banks must ensure their liquidity by maintaining relatively high levels of capital (including conservatively valued shares in the enterprises they are helping to fund) and by relying more heavily on longer-term borrowings (including time deposits as well as the issuance of bonds or unsecured debt, such as debentures). In other countries, including Japan and the United States, long-term corporate financing is handled primarily by financial institutions that specialize in commercial loans and securities underwriting rather than by banks.

Liability and risk management

The traditional asset-management approach to banking is based on the assumption that a bank’s liabilities are both relatively stable and unmarketable. Historically, each bank relied on a market for its deposit IOUs that was influenced by the bank’s location, meaning that any changes in the extent of the market (and hence in the total amount of resources available to fund the bank’s loans and investments) were beyond a bank’s immediate control. In the 1960s and ’70s, however, this assumption was abandoned. The change occurred first in the United States, where rising interest rates, together with regulations limiting the interest rates banks could pay, made it increasingly difficult for banks to attract and maintain deposits. Consequently, bankers devised a variety of alternative devices for acquiring funds, including repurchase agreements, which involve the selling of securities on the condition that buyers agree to repurchase them at a stated date in the future, and negotiable certificates of deposit (CDs), which can be traded in a secondary market. Having discovered new ways to acquire funds, banks no longer waited for funds to arrive through the normal course of business. The new approaches enabled banks to manage the liability as well as the asset side of their balance sheets. Such active purchasing and selling of funds by banks, known as liability management, allows bankers to exploit profitable lending opportunities without being limited by a lack of funds for loans. Once liability management became an established practice in the United States, it quickly spread to Canada and the United Kingdom and eventually to banking systems worldwide.

A more recent approach to bank management synthesizes the asset- and liability-management approaches. Known as risk management, this approach essentially treats banks as bundles of risks; the primary challenge for bank managers is to establish acceptable degrees of risk exposure. This means bank managers must calculate a reasonably reliable measure of their bank’s overall exposure to various risks and then adjust the bank’s portfolio to achieve both an acceptable overall risk level and the greatest shareholder value consistent with that level.

Contemporary banks face a wide variety of risks. In addition to liquidity risk, these include credit risk (the risk that borrowers will fail to repay their loans on schedule), interest-rate risk (the risk that market interest rates will rise relative to rates being earned on outstanding long-term loans), market risk (the risk of suffering losses in connection with asset and liability trading), foreign-exchange risk (the risk of a foreign currency in which loans have been made being devalued during the loans’ duration), and sovereign risk (the risk that a government will default on its debt). The risk-management approach differs from earlier approaches to bank management in advocating not simply the avoidance of risk but the optimization of it—a strategy that is accomplished by mixing and matching various risky assets, including investment instruments traditionally shunned by bankers, such as forward and futures contracts, options, and other so-called “derivatives” (securities whose value derives from that of other, underlying assets). Despite the level of risk associated with them, derivatives can be used to hedge losses on other risky assets. For example, a bank manager may wish to protect his bank against a possible fall in the value of its bond holdings if interest rates rise during the following three months. In this case he can enter into a three-month forward contract—that is, agree to sell the bonds for delivery in three months’ time—or, alternatively, take a short position—a promise to sell a particular amount at a specific price—in bond futures. If interest rates do happen to rise during that period, profits from the forward contract or short futures position should completely offset the loss in the capital value of the bonds. The goal is not to change the expected portfolio return but rather to reduce the variance of the return, thereby keeping the actual return closer to its expected value.
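
A back-of-the-envelope version of this hedge, using entirely made-up numbers and a crude duration approximation of the bond price change, might look like this:

# Illustrative numbers only: a bond position hedged with a short futures position.
bond_value = 10_000_000   # current market value of the bond holdings
duration = 5.0            # approximate price sensitivity to rates (in years)
rate_rise = 0.01          # rates rise by one percentage point over the quarter

# Approximate loss on the bonds: price falls by roughly duration * rate change.
bond_loss = bond_value * duration * rate_rise   # 500,000

# A short futures (or forward) position of matching size gains roughly the
# same amount when rates rise, offsetting the loss on the bonds.
futures_gain = bond_value * duration * rate_rise
net_change = futures_gain - bond_loss           # approximately zero
print(bond_loss, futures_gain, net_change)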

The risk-management approach relies upon techniques, such as value at risk, or VAR (which measures the maximum likely loss on a portfolio during the next 100 days or so), that quantify overall risk exposure. One shortcoming of such risk measures is that they generally fail to consider high-impact low-probability events, such as the bombing of the Central Bank of Sri Lanka in 1996 or the September 11 attacks in 2001. Another is that poorly selected or poorly monitored hedge investments can become significant liabilities in themselves, as occurred when the U.S. bank JPMorgan Chase lost more than $3 billion in trades of credit-based derivatives in 2012. For these reasons, traditional bank management tools, including reliance upon bank capital, must continue to play a role in risk management.
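
A historical-simulation estimate of value at risk can be sketched in a few lines of Python. The return series here is simulated, and the 99 percent confidence level and one-day horizon are assumptions made for the example; a bank would use its actual profit-and-loss history and its own parameters.

import numpy as np

def historical_var(returns, confidence=0.99):
    """Loss threshold (as a positive fraction of portfolio value) exceeded
    only (1 - confidence) of the time in the historical sample."""
    cutoff = np.quantile(np.asarray(returns), 1 - confidence)
    return -cutoff

# Simulated daily portfolio returns, for illustration only.
rng = np.random.default_rng(0)
daily_returns = rng.normal(loc=0.0005, scale=0.01, size=1000)
print(f"99% one-day VaR: {historical_var(daily_returns):.2%} of portfolio value")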

The role of bank capital

Because even the best risk-management techniques cannot guarantee against losses, banks cannot rely on deposits alone to fund their investments. Funding also comes from share owners’ equity, which means that bank managers must concern themselves with the value of the bank’s equity capital as well as the composition of the bank’s assets and liabilities. A bank’s shareholders, however, are residual claimants, meaning that they may share in the bank’s profits but are also the first to bear any losses stemming from bad loans or failed investments. When the value of a bank’s assets declines, shareholders bear the loss, at least up to the point at which their shares become worthless, while depositors stand to suffer only if losses mount high enough to exhaust the bank’s equity, rendering the bank insolvent. In that case, the bank may be closed and its assets liquidated, with depositors (and, after them, if anything remains, other creditors) receiving prorated shares of the proceeds. Where bank deposits are not insured or otherwise guaranteed by government authorities, bank equity capital serves as depositors’ principal source of security against bank losses.

Deposit guarantees, whether explicit (as with deposit insurance) or implicit (as when government authorities are expected to bail out failing banks), can have the unintended consequence of reducing a bank’s equity capital, for which such guarantees are a substitute. Regulators have in turn attempted to compensate for this effect by regulating bank capital. For example, the first (1988) and second (2004) Basel Accords (Basel I and Basel II), which were implemented within the European Union and, to a limited extent, in the United States, established minimum capital requirements for different banks based on formulas that attempted to account for the risks to which each is exposed. Thus, Basel I established an 8 percent capital-to-asset ratio target, with bank assets weighted according to the risk of loss; weights ranged from zero (for top-rated government securities) to one (for some corporate bonds). Following the global financial crisis of 2008–09, a new agreement, known as Basel III (2010), increased capital requirements and imposed other safeguards in rules that would be implemented gradually through early 2019.
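
A purely illustrative calculation in the spirit of the Basel I formula (the asset book and the risk weights below are invented, and the real rules are far more detailed) shows how risk weighting turns into a minimum capital figure:

# Hypothetical asset book with Basel I-style risk weights, in millions.
exposures = [
    ("government_bonds", 300, 0.0),       # top-rated sovereign debt: 0% weight
    ("residential_mortgages", 400, 0.5),  # housing loans: 50% weight
    ("corporate_loans", 300, 1.0),        # unsecured corporate credit: 100% weight
]

risk_weighted_assets = sum(amount * weight for _, amount, weight in exposures)
required_capital = 0.08 * risk_weighted_assets  # the 8 percent Basel I target

print(risk_weighted_assets, required_capital)   # 500.0 40.0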


#1740 2023-04-18 13:49:46

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,463

Re: Miscellany

1643) Angina Pectoris

Gist

Angina (an-JIE-nuh or AN-juh-nuh) is a type of chest pain caused by reduced blood flow to the heart. Angina is a symptom of coronary artery disease. Angina is also called angina pectoris. Angina pain is often described as squeezing, pressure, heaviness, tightness or pain in the chest.

Summary

Angina, also known as angina pectoris, is chest pain or pressure, usually caused by insufficient blood flow to the heart muscle (myocardium). It is most commonly a symptom of coronary artery disease.

Angina is typically the result of obstruction or spasm of the arteries that supply blood to the heart muscle. The main mechanism of coronary artery obstruction is atherosclerosis as part of coronary artery disease. Other causes of angina include abnormal heart rhythms, heart failure and, less commonly, anemia. The term derives from the Latin angere ("to strangle") and pectus ("chest"), and can therefore be translated as "a strangling feeling in the chest".

There is a weak relationship between the severity of angina and the degree of oxygen deprivation in the heart muscle; the severity of angina does not always match the degree of oxygen deprivation to the heart or the risk of a myocardial infarction (heart attack). Some people may experience severe pain even though there is little risk of a heart attack. Others may have a heart attack and experience little or no pain. In some cases, angina can be quite severe. Worsening angina attacks, sudden-onset angina at rest, and angina lasting more than 15 minutes are symptoms of unstable angina (usually grouped with similar conditions as the acute coronary syndrome). As these may precede a heart attack, they require urgent medical attention and are, in general, treated similarly to myocardial infarction.

In the early 20th century, severe angina was seen as a sign of impending death. However, modern medical therapies have improved the outlook substantially. Middle-age patients who experience moderate to severe angina (grading by classes II, III, and IV) have a five-year survival rate of approximately 92%.

Details

Overview

Angina (an-JIE-nuh or AN-juh-nuh) is a type of chest pain caused by reduced blood flow to the heart. Angina is a symptom of coronary artery disease.

Angina is also called angina pectoris.

Angina pain is often described as squeezing, pressure, heaviness, tightness or pain in the chest. It may feel like a heavy weight lying on the chest. Angina may be a new pain that needs to be checked by a health care provider, or recurring pain that goes away with treatment.

Although angina is relatively common, it can still be hard to distinguish from other types of chest pain, such as the discomfort of indigestion. If you have unexplained chest pain, seek medical help right away.

Types

There are different types of angina. The type depends on the cause and on whether rest or medication relieves the symptoms.

* Stable angina. Stable angina is the most common form of angina. It usually happens during activity (exertion) and goes away with rest or angina medication. For example, pain that comes on when you're walking uphill or in the cold weather may be angina.

Stable angina pain is predictable and usually similar to previous episodes of chest pain. The chest pain typically lasts a short time, perhaps five minutes or less.

* Unstable angina (a medical emergency). Unstable angina is unpredictable and occurs at rest. Or the angina pain is worsening and occurs with less physical effort. It's typically severe and lasts longer than stable angina, maybe 20 minutes or longer. The pain doesn't go away with rest or the usual angina medications. If the blood flow doesn't improve, the heart is starved of oxygen and a heart attack occurs. Unstable angina is dangerous and requires emergency treatment.

* Variant angina (Prinzmetal angina). Variant angina, also called Prinzmetal angina, isn't due to coronary artery disease. It's caused by a spasm in the heart's arteries that temporarily reduces blood flow. Severe chest pain is the main symptom of variant angina. It most often occurs in cycles, typically at rest and overnight. The pain may be relieved by angina medication.

* Refractory angina. Angina episodes are frequent despite a combination of medications and lifestyle changes.

Symptoms
 
Angina symptoms include chest pain and discomfort. The chest pain or discomfort may feel like:

* Burning
* Fullness
* Pressure
* Squeezing

Pain may also be felt in the arms, neck, jaw, shoulder or back.

Other symptoms of angina include:

* Dizziness
* Fatigue
* Nausea
* Shortness of breath
* Sweating

The severity, duration and type of angina can vary. New or different symptoms may signal a more dangerous form of angina (unstable angina) or a heart attack.

Any new or worsening angina symptoms need to be evaluated immediately by a health care provider who can determine whether you have stable or unstable angina.

Angina in women

Symptoms of angina in women can be different from the classic angina symptoms. These differences may lead to delays in seeking treatment. For example, chest pain is a common symptom in women with angina, but it may not be the only symptom or the most prevalent symptom for women. Women may also have symptoms such as:

* Discomfort in the neck, jaw, teeth or back
* Nausea
* Shortness of breath
* Stabbing pain instead of chest pressure
* Stomach (abdominal) pain

When to see a doctor

If your chest pain lasts longer than a few minutes and doesn't go away when you rest or take your angina medications, it may be a sign you're having a heart attack. Call 911 or emergency medical help. Only drive yourself to the hospital if there is no other transportation option.

If chest discomfort is a new symptom for you, it's important to see your health care provider to determine the cause and to get proper treatment. If you've been diagnosed with stable angina and it gets worse or changes, seek medical help immediately.

Causes

Angina is caused by reduced blood flow to the heart muscle. Blood carries oxygen, which the heart muscle needs to survive. When the heart muscle isn't getting enough oxygen, it causes a condition called ischemia.

The most common cause of reduced blood flow to the heart muscle is coronary artery disease (CAD). The heart (coronary) arteries can become narrowed by fatty deposits called plaques. This is called atherosclerosis.

If plaques in a blood vessel rupture or a blood clot forms, it can quickly block or reduce flow through a narrowed artery. This can suddenly and severely decrease blood flow to the heart muscle.

During times of low oxygen demand — when resting, for example — the heart muscle may still be able to work on the reduced amount of blood flow without triggering angina symptoms. But when the demand for oxygen goes up, such as when exercising, angina can result.

Risk factors

The following things may increase the risk of angina:

* Increasing age. Angina is most common in adults age 60 and older.
* Family history of heart disease. Tell your health care provider if your mother, father or any siblings have or had heart disease or a heart attack.
* Tobacco use. Smoking, chewing tobacco and long-term exposure to secondhand smoke can damage the lining of the arteries, allowing deposits of cholesterol to collect and block blood flow.
* Diabetes. Diabetes increases the risk of coronary artery disease, which leads to angina and heart attacks by speeding up atherosclerosis and increasing cholesterol levels.
* High blood pressure. Over time, high blood pressure damages arteries by accelerating hardening of the arteries.
* High cholesterol or triglycerides. Too much bad cholesterol — low-density lipoprotein (LDL) — in the blood can cause arteries to narrow. A high LDL increases the risk of angina and heart attacks. A high level of triglycerides in the blood also is unhealthy.
* Other health conditions. Chronic kidney disease, peripheral artery disease, metabolic syndrome or a history of stroke increases the risk of angina.
* Not enough exercise. An inactive lifestyle contributes to high cholesterol, high blood pressure, type 2 diabetes and obesity. Talk to your health care provider about the type and amount of exercise that's best for you.
* Obesity. Obesity is a risk factor for heart disease, which can cause angina. Being overweight makes the heart work harder to supply blood to the body.
* Emotional stress. Too much stress and anger can raise blood pressure. Surges of hormones produced during stress can narrow the arteries and worsen angina.
* Medications. Drugs that tighten blood vessels, such as some migraine drugs, may trigger Prinzmetal's angina.
* Drug misuse. Cocaine and other stimulants can cause blood vessel spasms and trigger angina.
* Cold temperatures. Exposure to cold temperatures can trigger Prinzmetal angina.

Complications

The chest pain that occurs with angina can make doing some activities, such as walking, uncomfortable. However, the most dangerous complication is a heart attack.

Warning signs and symptoms of a heart attack include:

* Pressure, fullness or a squeezing pain in the center of the chest that lasts for more than a few minutes
* Pain extending beyond the chest to the shoulder, arm, back, or even to the teeth and jaw
* Fainting
* Impending sense of doom
* Increasing episodes of chest pain
* Nausea and vomiting
* Continued pain in the upper belly area (abdomen)
* Shortness of breath
* Sweating

If you have any of these symptoms, seek emergency medical attention immediately.

Prevention

You can help prevent angina by following the same lifestyle changes that are used to treat angina. These include:

* Not smoking.
* Eating a healthy diet.
* Avoiding or limiting alcohol.
* Exercising regularly.
* Maintaining a healthy weight.
* Managing other health conditions related to heart disease.
* Reducing stress.
* Getting recommended vaccines to avoid heart complications.

Additional Information

Angina pectoris is pain or discomfort in the chest, usually caused by the inability of diseased coronary arteries to deliver sufficient oxygen-laden blood to the heart muscle. When insufficient blood reaches the heart, waste products accumulate in the heart muscle and irritate local nerve endings, causing a deep sensation of heaviness, squeezing, or burning that is most prominent behind or beneath the breastbone and over the heart and stomach. In some instances, the sensation may radiate into the shoulders, the neck, the jaw, or the arms on one or both sides of the body. A feeling of constriction or suffocation often accompanies the discomfort, though there is seldom actual difficulty in breathing. Symptoms usually subside within five minutes. In acute cases (e.g., unstable angina or acute coronary syndrome), the skin becomes pale and the pulse is weak. Although symptoms may be mild in some cases, the peculiar qualities of angina pectoris may induce anxiety.

Attacks of angina can be precipitated by walking or more strenuous exertion; by anger, fear, or other stressful emotional states; by exercise after a large meal; or by exposure to cold or wind. As coronary heart disease worsens, attacks are apt to recur with less exertion or none at all. Angina pectoris is rare in persons under middle age and tends to be more common in men than in women. Men and women sometimes experience different symptoms; women, for example, may experience nausea or vomiting, feel sharp pain rather than pressure in the chest, or have symptoms of increased duration. Differences in the characteristics of angina are attributed to differences in the underlying conditions that precipitate angina; for example, coronary artery disease frequently is associated with angina in men, whereas coronary microvascular disease is a common cause of angina in women.

An anginal attack can be relieved by rest or by taking nitroglycerin or other drugs that relax (and thus dilate) the blood vessels. The frequency of attacks can be lessened by the avoidance of emotional stress and by shifting to exercise that is less vigorous. In cases where the narrowing of the coronary arteries appears serious enough to cause a heart attack (myocardial infarction), procedures must be used to widen the passages within the arteries or to surgically bypass the blocked segments with healthy vessels taken from another portion of the body.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Online

#1741 2023-04-19 13:26:43

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,463

Re: Miscellany

1644) Polestar

Summary

Polestar, also spelled pole star, also called (Northern Hemisphere) North Star, is the brightest star that appears nearest to either celestial pole at any particular time. Owing to the precession of the equinoxes, the position of each pole describes a small circle in the sky over a period of 25,772 years. Each of a succession of stars has thus passed near enough to the north celestial pole to serve as the polestar. At present the polestar is Polaris (α Ursae Minoris); Thuban (α Draconis) was closest to the North Pole about 2700 BCE, and the bright star Vega (α Lyrae) will be the star closest to the pole in 14,000 CE. The location of the northern polestar has made it a convenient object for navigators to use in determining latitude and north-south direction in the Northern Hemisphere. There is no bright star near the south celestial pole; the present southern polestar, Polaris Australis (also called σ Octantis), is only of the 5th magnitude and is thus barely visible to the naked eye.
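
As a quick arithmetic aside, the 25,772-year period quoted above corresponds to the familiar average precession rate of roughly 50.3 arcseconds per year, assuming a uniform drift of the pole along its circle. A minimal check in Python (the variable names are just illustrative):

CYCLE_YEARS = 25772  # precession period quoted above

# Average drift of the celestial pole along its precession circle, in arcseconds per year.
rate_arcsec_per_year = 360 * 3600 / CYCLE_YEARS
print(round(rate_arcsec_per_year, 1))  # ~50.3 arcseconds per year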

Details

A pole star or polar star is a star, preferably bright, nearly aligned with the axis of a rotating astronomical body.

Currently, Earth's pole stars are Polaris (Alpha Ursae Minoris), a bright magnitude 2 star aligned approximately with its northern axis that serves as a pre-eminent star in celestial navigation, and a much dimmer magnitude 5.5 star on its southern axis, Polaris Australis (Sigma Octantis).

From around 1700 BC until just after 300 AD, Kochab (Beta Ursae Minoris) and Pherkad (Gamma Ursae Minoris) were twin northern pole stars, though neither was as close to the pole as Polaris is now.

History

In classical antiquity, Beta Ursae Minoris (Kochab) was closer to the celestial north pole than Alpha Ursae Minoris. While there was no naked-eye star close to the pole, the midpoint between Alpha and Beta Ursae Minoris was reasonably close to it, and it appears that the entire constellation of Ursa Minor, known in antiquity as Cynosura (Greek Κυνόσουρα, "dog's tail"), was used by the Phoenicians to indicate the northern direction for navigation. The ancient name of Ursa Minor, anglicized as cynosure, has since become a term for "guiding principle", after the constellation's use in navigation.

Alpha Ursae Minoris (Polaris) was described as ἀειφανής (transliterated as aeiphanes) meaning "always above the horizon", "ever-shining" by Stobaeus in the 5th century, when it was still removed from the celestial pole by about 8°. It was known as scip-steorra ("ship-star") in 10th-century Anglo-Saxon England, reflecting its use in navigation. In the Vishnu Purana, it is personified under the name Dhruva ("immovable, fixed").

The name stella polaris was coined in the Renaissance, even though at that time it was well recognized that it was several degrees away from the celestial pole; Gemma Frisius in the year 1547 determined this distance as 3°8'. An explicit identification of Mary as stella maris with the North Star (Polaris) becomes evident in the title Cynosura seu Mariana Stella Polaris (i.e. "Cynosure, or the Marian Polar Star"), a collection of Marian poetry published by Nicolaus Lucensis (Niccolo Barsotti de Lucca) in 1655.

In 2022 Polaris' mean declination is 89.35 degrees North (at epoch J2000 it was 89.26 degrees North). So it appears due north in the sky to a precision better than one degree, and the angle it makes with respect to the true horizon (after correcting for refraction and other factors) is within a degree of the observer's latitude. The celestial pole will be nearest Polaris in 2100.
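
The statement that Polaris' altitude above the true horizon roughly equals the observer's latitude can be checked with the standard altitude formula sin(h) = sin(lat)·sin(dec) + cos(lat)·cos(dec)·cos(H), where H is the star's hour angle. A minimal sketch in Python; the declination 89.35° is taken from above, while the observer latitude of 45° and the function name are just illustrative assumptions:

import math

def polaris_altitude(lat_deg, hour_angle_deg, dec_deg=89.35):
    # Standard altitude formula: sin(h) = sin(lat)*sin(dec) + cos(lat)*cos(dec)*cos(H)
    lat, dec, ha = map(math.radians, (lat_deg, dec_deg, hour_angle_deg))
    sin_h = math.sin(lat) * math.sin(dec) + math.cos(lat) * math.cos(dec) * math.cos(ha)
    return math.degrees(math.asin(sin_h))

# For an observer at latitude 45 N, Polaris stays within about 0.65 degrees of altitude 45
# over a full daily rotation (before any refraction correction).
for ha in (0, 90, 180, 270):
    print(ha, round(polaris_altitude(45.0, ha), 2))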

Due to the precession of the equinoxes (as well as the stars' proper motions), the role of North Star has passed (and will pass) from one star to another in the remote past (and in the remote future). In 3000 BC, the faint star Thuban in the constellation Draco was the North Star, passing within 0.1° of the celestial pole, the closest approach of any of the visible pole stars. However, at magnitude 3.67 (fourth magnitude) it is only about one-fifth as bright as Polaris, and today it is invisible in light-polluted urban skies.
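
The "one-fifth as bright" comparison follows from the standard relation between apparent magnitudes and brightness: a difference of m magnitudes corresponds to a flux ratio of 10^(0.4m), about 2.512 per magnitude. A small check in Python, taking Polaris' magnitude of about 1.98 from the key facts later in this post (the function name is just illustrative):

def brightness_ratio(m_faint, m_bright):
    # Flux ratio implied by two apparent magnitudes: 10**(0.4 * (m_faint - m_bright))
    return 10 ** (0.4 * (m_faint - m_bright))

# Thuban (magnitude 3.67) versus Polaris (about magnitude 1.98):
print(round(brightness_ratio(3.67, 1.98), 2))  # ~4.74, so Thuban is roughly one-fifth as bright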

During the 1st millennium BC, Beta Ursae Minoris ("Kochab") was the bright star closest to the celestial pole, but it was never close enough to be taken as marking the pole, and the Greek navigator Pytheas in ca. 320 BC described the celestial pole as devoid of stars. In the Roman era, the celestial pole was about equally distant between Polaris and Kochab.

The precession of the equinoxes takes about 25,770 years to complete a cycle. Polaris' mean position (taking account of precession and proper motion) will reach a maximum declination of +89°32'23", which translates to 1657" (or 0.4603°) from the celestial north pole, in February 2102. Its maximum apparent declination (taking account of nutation and aberration) will be +89°32'50.62", which is 1629" (or 0.4526°) from the celestial north pole, on 24 March 2100.
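
The arcsecond figures in the previous paragraph are straightforward unit conversions: the separation from the pole is 90° minus the declination, and one degree is 3600 arcseconds. A quick check in Python (the function name is just illustrative):

def separation_from_pole(deg, arcmin, arcsec):
    # Angular distance between a declination of deg arcmin arcsec and the pole at +90 degrees,
    # returned as (arcseconds, degrees).
    dec_deg = deg + arcmin / 60 + arcsec / 3600
    sep_deg = 90.0 - dec_deg
    return round(sep_deg * 3600, 1), round(sep_deg, 4)

print(separation_from_pole(89, 32, 23))     # (1657.0, 0.4603)
print(separation_from_pole(89, 32, 50.62))  # (1629.4, 0.4526)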

Precession will next point the north celestial pole at stars in the northern constellation Cepheus. The pole will drift to a point roughly equidistant between Polaris and Gamma Cephei ("Errai") by 3000 AD, with Errai reaching its closest alignment with the northern celestial pole around 4200 AD. Iota Cephei and Beta Cephei will stand on either side of the northern celestial pole some time around 5200 AD, before moving to closer alignment with the brighter star Alpha Cephei ("Alderamin") around 7500 AD.

Precession will then point the north celestial pole at stars in the northern constellation Cygnus. Like Beta Ursae Minoris during the 1st millennium BC, the bright star closest to the celestial pole in the 10th millennium AD, first-magnitude Deneb, will be a distant 7° from the pole, never close enough to be taken as marking the pole, while third-magnitude Delta Cygni will be a more helpful pole star, at a distance of 3° from celestial north, around 11,250 AD. Precession will then point the north celestial pole nearer the constellation Lyra, where the second brightest star in the northern celestial hemisphere, Vega, will be a pole star around 14,500 AD, though at a distance of 5° from celestial north.

Precession will eventually point the north celestial pole nearer the stars in the constellation Hercules, pointing towards Tau Herculis around 18,400 AD. The celestial pole will then return to the stars in constellation Draco (Thuban, mentioned above) before returning to the current constellation, Ursa Minor. When Polaris becomes the North Star again around 27,800 AD, due to its proper motion it then will be farther away from the pole than it is now, while in 23,600 BC it was closer to the pole.

Over the course of Earth's 26,000-year axial precession cycle, a series of bright naked-eye stars (apparent magnitude up to about +6; for comparison, a full moon is −12.9) in the northern hemisphere will hold the transitory title of North Star. While other stars might line up with the north celestial pole during the 26,000-year cycle, they do not necessarily meet the naked-eye limit needed to serve as a useful indicator of north to an Earth-based observer, resulting in periods of time during the cycle when there is no clearly defined North Star. There will also be periods during the cycle when bright stars give only an approximate guide to "north", as they may be more than 5° of angular distance removed from direct alignment with the north celestial pole.

The 26,000-year cycle of North Stars (table not shown) starts with the current star and includes stars that will be "near-north" indicators when no North Star exists during the cycle, along with each star's average brightness and closest alignment to the north celestial pole.

Additional Information

A polestar is any star that aligns with Earth's polar axis. In the northern hemisphere, the pole star currently is Polaris, also known as the North Star. Thanks to precession, however, Earth wobbles on its axis, which causes the north pole to aim at different stars over a cycle of about 26,000 years. The next pole star will be Gamma Cephei, one of the leading lights of the constellation Cepheus. Other future pole stars include Vega, the brightest star of Lyra, and Thuban, in Draco, which was used to help align the Great Pyramids of Giza. In the southern hemisphere, the pole is marked by Polaris Australis, also known as Sigma Octantis. It is much fainter than Polaris, however, so it's not as useful a marker.

Polaris is currently the north star or pole star, and it is the brightest star located in the constellation of Ursa Minor, the little celestial bear.

Key Facts & Summary

* Polaris is located at only 433 light-years / 133 parsecs away from the Earth.
* Even though Polaris appears as a single star to the naked eye, it is actually a triple star system.
* This triple star system comprises Polaris Aa, the primary star, and Polaris Ab, and Polaris B.
* The primary star, Polaris Aa, is a yellow supergiant star of spectral type F7Ib.
* The star Polaris Ab is a main-sequence star of spectral type F6V, while Polaris B is a main-sequence star of spectral type F3V.
* Polaris Aa, the primary star, is more massive and several times bigger than our Sun.
* It has around 5.4 solar masses and a whopping 37.5 solar radii.
* Polaris Aa is slightly hotter than our Sun, having surface average temperatures of 6,015 K.
* Polaris Aa is 1,260 times more luminous than our Sun.
* Polaris B is the second biggest star of this system, having 1.39 solar masses and 1.38 solar radii.
* On the other hand, Polaris Ab is the smallest, having 1.26 solar masses and only 1.04 solar radii.
* The hottest of these stars seems to be Polaris B, which has temperatures of around 6,900 K.
* Polaris B is also a speedy spinning star, having a rotational velocity of 110 km / 68.3 mi per second.
* Polaris Ab is three times more luminous than our Sun, while Polaris B is 3.9 times more luminous.
* The Polaris star system has an apparent magnitude of 1.98; however, its brightness varies from 1.86 to 2.13.
* The apparent magnitude of Polaris Ab is 9.2, while Polaris B is at magnitude 8.7.

Polaris Star for Kids

Polaris is a triple star system located in the constellation of Ursa Minor, the little celestial bear. It is the brightest star in the constellation, and it is currently our North Pole Star.

The ancients often used North pole stars for navigational purposes, so these stars were very important, and they still are even to this day. Let’s find out more about Polaris, the north star!

Polaris Characteristics

Though Polaris appears as a single star to the naked eye, it is actually a triple star system composed of a yellow supergiant star and two considerably smaller main-sequence stars.

Polaris is located at around 433 light-years / 133 parsecs away from the Earth. Polaris Aa, the primary star, is slightly hotter than our Sun, having surface average temperatures of 6,015 K; however, it is 1,260 times more luminous than our Sun.
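
As a side check, the two distance figures quoted above are consistent under the standard conversion of 1 parsec ≈ 3.2616 light-years (the constant and function name below are just for illustration):

LY_PER_PARSEC = 3.2616  # light-years in one parsec (standard conversion)

def ly_to_parsec(light_years):
    # Convert a distance in light-years to parsecs.
    return light_years / LY_PER_PARSEC

print(round(ly_to_parsec(433), 1))  # ~132.8, matching the quoted 133 parsecs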

The hottest of these stars seems to be Polaris B, which has temperatures of around 6,900 K. Polaris B is also a speedy spinning star, having a rotational velocity of 110 km / 68.3 mi per second.

Polaris Ab is three times more luminous than our Sun, while Polaris B is 3.9 times more luminous. The Polaris star system has an apparent magnitude of 1.98; however, its brightness varies from 1.86 to 2.13. The apparent magnitude of Polaris Ab is 9.2, while Polaris B is at magnitude 8.7.

Formation

The Polaris star system formed around 70 million years ago from an interstellar medium of gas and dust. Gravity pulled the swirling gas and dust together and resulted in the brightest star in the constellation of Ursa Minor, Polaris, and its two smaller companions. Later on, Polaris would become the north pole star of our Earth.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Online

#1742 2023-04-19 16:42:35

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,463

Re: Miscellany

1645) Kidney stone disease

Gist

Kidney stone disease is a crystal concretion usually formed within the kidneys. It is an increasing urological disorder of human health, affecting about 12% of the world population. It has been associated with an increased risk of end-stage renal failure. The etiology of kidney stones is multifactorial.

Details

Kidney stone disease, also known as nephrolithiasis or urolithiasis, is a crystallopathy where a solid piece of material (kidney stone) develops in the urinary tract. Kidney stones typically form in the kidney and leave the body in the urine stream. A small stone may pass without causing symptoms. If a stone grows to more than 5 millimeters (0.2 inches), it can cause blockage of the ureter, resulting in sharp and severe pain in the lower back or abdomen. A stone may also result in blood in the urine, vomiting, or painful urination. About half of people who have had a kidney stone are likely to have another within ten years.

Most stones form by a combination of genetics and environmental factors. Risk factors include high urine calcium levels, obesity, certain foods, some medications, calcium supplements, hyperparathyroidism, gout and not drinking enough fluids. Stones form in the kidney when minerals in urine are at high concentration. The diagnosis is usually based on symptoms, urine testing, and medical imaging. Blood tests may also be useful. Stones are typically classified by their location: nephrolithiasis (in the kidney), ureterolithiasis (in the ureter), cystolithiasis (in the bladder), or by what they are made of (calcium oxalate, uric acid, struvite, cystine).

In those who have had stones, prevention is by drinking fluids such that more than two liters of urine are produced per day. If this is not effective enough, thiazide diuretic, citrate, or allopurinol may be taken. It is recommended that soft drinks containing phosphoric acid (typically colas) be avoided. When a stone causes no symptoms, no treatment is needed; otherwise, pain control is usually the first measure, using medications such as nonsteroidal anti-inflammatory drugs or opioids. Larger stones may be helped to pass with the medication tamsulosin or may require procedures such as extracorporeal shock wave lithotripsy, ureteroscopy, or percutaneous nephrolithotomy.

Kidney stones have affected humans throughout history with descriptions of surgery to remove them dating from as early as 600 BC. Between 1% and 15% of people globally are affected by kidney stones at some point in their lives. In 2015, 22.1 million cases occurred, resulting in about 16,100 deaths. They have become more common in the Western world since the 1970s. Generally, more men are affected than women. The prevalence and incidence of the disease rises worldwide and continues to be challenging for patients, physicians, and healthcare systems alike. In this context, epidemiological studies are striving to elucidate the worldwide changes in the patterns and the burden of the disease and identify modifiable risk factors that contribute to the development of kidney stones.

Signs and symptoms

The hallmark of a stone that obstructs the ureter or renal pelvis is excruciating, intermittent pain that radiates from the flank to the groin or to the inner thigh. This is due to the transfer of referred pain signals from the lower thoracic splanchnic nerves to the lumbar splanchnic nerves as the stone passes down from the kidney or proximal ureter to the distal ureter. This pain, known as renal colic, is often described as one of the strongest pain sensations known. Renal colic caused by kidney stones is commonly accompanied by urinary urgency, restlessness, hematuria, sweating, nausea, and vomiting. It typically comes in waves lasting 20 to 60 minutes caused by peristaltic contractions of the ureter as it attempts to expel the stone.

The embryological link between the urinary tract, the genital system, and the gastrointestinal tract is the basis of the radiation of pain to the gonads, as well as the nausea and vomiting that are also common in urolithiasis. Postrenal azotemia and hydronephrosis can be observed following the obstruction of urine flow through one or both ureters.

Pain in the lower-left quadrant can sometimes be confused with diverticulitis because the sigmoid colon overlaps the ureter, and the exact location of the pain may be difficult to isolate due to the proximity of these two structures.

Diagnosis

Diagnosis of kidney stones is made on the basis of information obtained from the history, physical examination, urinalysis, and radiographic studies. Clinical diagnosis is usually made on the basis of the location and severity of the pain, which is typically colicky in nature (comes and goes in spasmodic waves). Pain in the back occurs when calculi produce an obstruction in the kidney. Physical examination may reveal fever and tenderness at the costovertebral angle on the affected side.

Imaging studies

Calcium-containing stones are relatively radiodense, and they can often be detected by a traditional radiograph of the abdomen that includes the kidneys, ureters, and bladder (KUB film). KUB radiograph, although useful in monitoring size of stone or passage of stone in stone formers, might not be useful in the acute setting due to low sensitivity. Some 60% of all renal stones are radiopaque. In general, calcium phosphate stones have the greatest density, followed by calcium oxalate and magnesium ammonium phosphate stones. Cystine calculi are only faintly radiodense, while uric acid stones are usually entirely radiolucent.

People younger than 50 years of age who have a history of stones and who present with the symptoms of stones without any concerning signs do not require helical CT scan imaging. A CT scan is also not typically recommended in children.

Otherwise, a noncontrast helical CT scan with 5-millimeter (0.2 in) sections is the diagnostic method of choice to detect kidney stones and confirm the diagnosis of kidney stone disease. Nearly all stones are detectable on CT scans, with the exception of those composed of certain drug residues in the urine, such as from indinavir.

Where a CT scan is unavailable, an intravenous pyelogram may be performed to help confirm the diagnosis of urolithiasis. This involves intravenous injection of a contrast agent followed by a KUB film. Uroliths present in the kidneys, ureters, or bladder may be better defined by the use of this contrast agent. Stones can also be detected by a retrograde pyelogram, where a similar contrast agent is injected directly into the distal ostium of the ureter (where the ureter terminates as it enters the bladder).

Renal ultrasonography can sometimes be useful, because it gives details about the presence of hydronephrosis, suggesting that the stone is blocking the outflow of urine. Radiolucent stones, which do not appear on KUB, may show up on ultrasound imaging studies. Other advantages of renal ultrasonography include its low cost and absence of radiation exposure. Ultrasound imaging is useful for detecting stones in situations where X-rays or CT scans are discouraged, such as in children or pregnant women. Despite these advantages, renal ultrasonography in 2009 was not considered a substitute for noncontrast helical CT scan in the initial diagnostic evaluation of urolithiasis. The main reason for this is that, compared with CT, renal ultrasonography more often fails to detect small stones (especially ureteral stones) and other serious disorders that could be causing the symptoms.

By contrast, a 2014 study suggested that ultrasonography should be used as the initial diagnostic imaging test, with further imaging studies performed at the discretion of the physician on the basis of clinical judgment; using ultrasonography rather than CT as the initial diagnostic test results in less radiation exposure and equally good outcomes.

Additional Information

Kidney stone disease is a crystal concretion usually formed within the kidneys. It is an increasing urological disorder of human health, affecting about 12% of the world population. It has been associated with an increased risk of end-stage renal failure. The etiology of kidney stones is multifactorial. The most common type of kidney stone is calcium oxalate formed at Randall's plaque on the renal papillary surfaces. The mechanism of stone formation is a complex process which results from several physicochemical events including supersaturation, nucleation, growth, aggregation, and retention of urinary stone constituents within tubular cells. These steps are modulated by an imbalance between factors that promote or inhibit urinary crystallization. It is also noted that cellular injury promotes retention of particles on renal papillary surfaces. Currently, there is no satisfactory drug to cure and/or prevent kidney stone recurrences. Thus, further understanding of the pathophysiology of kidney stone formation is an active research area aimed at managing urolithiasis with new drugs.

Kidney stones (renal calculi, urolithiasis, or nephrolithiasis) are hard masses, deposits of minerals and salts that form inside the kidneys.

Kidney stones are usually about the size of a chickpea, but they can also be as small as a grain of sand or as large as a golf ball. Small stones can pass through the urinary tract, but larger ones may require surgery.

Kidney stones can be caused by a variety of factors, including diet, excess body weight, certain medical conditions, and certain supplements and medications. The stones can affect any part of your urinary tract, including the kidneys and bladder.

Kidney stones

If kidney stones are detected early, they may not cause permanent damage. To clear a kidney stone, you may only need to take pain medication and drink plenty of water, depending on your circumstances. Surgery may be required in some cases, such as when stones get lodged in the urinary tract, are associated with a urinary infection, or cause complications.

Symptoms of Kidney Stones

A kidney stone normally does not cause symptoms until it moves around within the kidney or goes through the ureters, which connect the kidneys and bladder. It can block the flow of urine and cause the kidney to enlarge and the ureter to spasm, which can be quite painful. You may then experience the following signs and symptoms:

* Severe, intense pain in the side and back, below the ribcage
* Radiating pain in the lower abdomen and groin
* Pain or a burning sensation when urinating

Other signs and symptoms may include:

* Pink, brown or red urine
* Cloudy or foul-smelling urine
* Vomiting
* Blood in the urine
* Frequent urge to urinate
* Urinating in small amounts
* Nausea
* Fever and chills

As a kidney stone passes through the urinary tract, the pain it causes may vary — for example, migrating to a new spot or rising in intensity.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Online

#1743 2023-04-19 22:42:12

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,463

Re: Miscellany

1646) Ventilator

Gist

Mechanical ventilators are machines that act as bellows to move air in and out of your lungs. Your respiratory therapist and doctor set the ventilator to control how often it pushes air into your lungs and how much air you get. You may be fitted with a mask to get air from the ventilator into your lungs.

Summary

A ventilator is a piece of medical technology that provides mechanical ventilation by moving breathable air into and out of the lungs, to deliver breaths to a patient who is physically unable to breathe, or breathing insufficiently. Ventilators are computerized microprocessor-controlled machines, but patients can also be ventilated with a simple, hand-operated bag valve mask. Ventilators are chiefly used in intensive-care medicine, home care, and emergency medicine (as standalone units) and in anesthesiology (as a component of an anesthesia machine).

Ventilators are sometimes called "respirators", a term commonly used for them in the 1950s (particularly the "Bird respirator"). However, contemporary medical terminology uses the word "respirator" to refer instead to a face-mask that protects wearers against hazardous airborne substances.

Function

In its simplest form, a modern positive pressure ventilator consists of a compressible air reservoir or turbine, air and oxygen supplies, a set of valves and tubes, and a disposable or reusable "patient circuit". The air reservoir is pneumatically compressed several times a minute to deliver room air or, in most cases, an air/oxygen mixture to the patient. If a turbine is used, the turbine pushes air through the ventilator, with a flow valve adjusting pressure to meet patient-specific parameters. When the pressure is released, the patient exhales passively due to the lungs' elasticity, the exhaled air usually being released through a one-way valve within the patient circuit called the patient manifold.

Ventilators may also be equipped with monitoring and alarm systems for patient-related parameters (e.g., pressure, volume, and flow) and ventilator function (e.g., air leakage, power failure, mechanical failure), backup batteries, oxygen tanks, and remote control. The pneumatic system is nowadays often replaced by a computer-controlled turbopump.

Modern ventilators are electronically controlled by a small embedded system to allow exact adaptation of pressure and flow characteristics to an individual patient's needs. Fine-tuned ventilator settings also serve to make ventilation more tolerable and comfortable for the patient. In Canada and the United States, respiratory therapists are responsible for tuning these settings, while biomedical technologists are responsible for the maintenance. In the United Kingdom and Europe the management of the patient's interaction with the ventilator is done by critical care nurses.

The patient circuit usually consists of a set of three durable, yet lightweight plastic tubes, separated by function (e.g. inhaled air, patient pressure, exhaled air). Determined by the type of ventilation needed, the patient-end of the circuit may be either noninvasive or invasive.

Noninvasive methods, such as continuous positive airway pressure (CPAP) and non-invasive ventilation, which are adequate for patients who require a ventilator only while sleeping and resting, mainly employ a nasal mask. Invasive methods require intubation, which for long-term ventilator dependence will normally be a tracheotomy cannula, as this is much more comfortable and practical for long-term care than is larynx or nasal intubation.

Life-critical system

As failure may result in death, mechanical ventilation systems are classified as life-critical systems, and precautions must be taken to ensure that they are highly reliable, including their power supply. Ventilatory failure is the inability to sustain a sufficient rate of CO2 elimination to maintain a stable pH without mechanical assistance, muscle fatigue, or intolerable dyspnea. Mechanical ventilators are therefore carefully designed so that no single point of failure can endanger the patient. They may have manual backup mechanisms to enable hand-driven respiration in the absence of power (such as the mechanical ventilator integrated into an anaesthetic machine). They may also have safety valves, which open to atmosphere in the absence of power to act as an anti-suffocation valve for spontaneous breathing of the patient. Some systems are also equipped with compressed-gas tanks, air compressors or backup batteries to provide ventilation in case of power failure or defective gas supplies, and methods to operate or call for help if their mechanisms or software fail. Power failures, such as during a natural disaster, can create a life-threatening emergency for people using ventilators in a home care setting. Battery power may be sufficient for a brief loss of electricity, but longer power outages may require going to a hospital.

Details

A medical ventilator is a machine that helps your lungs work. It can be a lifesaving machine if you have a condition that makes it hard for you to breathe properly or when you can’t breathe on your own at all.

A ventilator helps to push air in and out of your lungs so your body can get the oxygen it needs. You may wear a fitted mask to help get oxygen from the ventilator into your lungs. Or, if your condition is more serious, a breathing tube may be inserted down your throat to supply your lungs with oxygen.

Ventilators are most often used in hospital settings. A doctor or a respiratory therapist will control how much oxygen is pushed into your lungs by the ventilator.

Other names that a ventilator is known by include:

* respirator
* breathing machine
* mechanical ventilation

Why would you need a ventilator?

Not being able to breathe properly on your own is known as respiratory failure and is a life-threatening emergency.

If your brain, heart, liver, kidneys, and other organs don’t get enough oxygen, they won’t be able to function as they should. A ventilator can help you get the oxygen you need for your organs to function.

Health conditions

Many types of health conditions can cause you to have difficulty breathing, such as:

* acute respiratory distress syndrome (ARDS)
* chronic obstructive pulmonary disease (COPD)
* asthma
* brain injury
* cardiac arrest
* pneumonia
* collapsed lung
* stroke
* coma or loss of consciousness
* drug overdose
* hypercapnic respiratory failure
* lung infection
* myasthenia gravis
* sepsis, an infection in your blood
* upper spinal cord injuries
* premature lung development (in babies)
* Guillain-Barré syndrome
* amyotrophic lateral sclerosis (ALS), commonly known as Lou Gehrig’s disease

Surgery

If you have general anesthesia for a surgical procedure, you may need to be on a ventilator while you’re asleep. This is because some anesthesia drugs can make it difficult for you to breathe properly on your own while you’re in a sleep-like state.

With surgery, you may need to be on a ventilator for a period of time as follows:

* During surgery. A ventilator can temporarily do the breathing for you while you’re under general anesthesia.
* Recovering from surgery. Sometimes, for very complicated surgeries, a patient may need a ventilator to help them breathe for hours or longer after surgery.

How long do you need to be on a ventilator?

The length of time you’ll be on a ventilator depends on the reason you need help breathing.

If you need a ventilator during surgery, you’ll typically only be on a ventilator while you’re in a sleep-like state. This could range from less than an hour to several hours or more.

If you need a ventilator for a health condition, you may need to be on it for hours, days, weeks or longer. It depends on how long it takes for your lungs to get stronger and to be able to function properly on their own.

A ventilator won’t cure an illness. The job of a ventilator is to keep you breathing while your body fights off an infection or illness or recovers from an injury.

How does a ventilator work?

A medical ventilator uses pressure to blow oxygenated air into your airways and to remove carbon dioxide from your body.

Your airway includes your:

* nose
* mouth
* throat (pharynx)
* voice box (larynx)
* windpipe (trachea)
* lung tubes (bronchi)

Oxygen from a ventilator may be pushed into your lungs in one of two ways: with a fitted mask or with a breathing tube.

With a face mask

The use of a face mask to get oxygen into your lungs is known as non-invasive ventilation.

With this type of ventilation, a fitted plastic face mask is placed over both your nose and mouth. A tube will be connected from the face mask to the ventilator, which will push air into your lungs. This method is typically used in cases where breathing issues are less severe.

There are several benefits to this method of ventilation:

* It’s more comfortable than a breathing tube that goes down your throat.
* It doesn’t require sedation.
* It allows you to talk, swallow, and cough.
* It may lower the risk of side effects and complications, such as infection and pneumonia, which are more common with breathing tube ventilation.

With a breathing tube

For more severe cases, you’ll need a breathing tube inserted into your throat and down your windpipe. This is known as invasive ventilation. You’ll usually be sedated before this procedure is done, as it can cause pain and discomfort.

The breathing tube that’s inserted into your windpipe is connected to a ventilator that forces air into your airways so your body will be able to get the oxygen it needs while you heal from your illness or injury.

If you’re on a ventilator for an extended period of time, you may need a tracheostomy. This involves a surgeon making a hole in the front of your neck. A tube will be inserted into your trachea, below your vocal cords, and then connected to a ventilator.

A tracheostomy may also be used to help wean you off a ventilator if you’ve been on it for a long time.

What to expect on a ventilator

Being on a ventilator while you’re conscious can be very uncomfortable, especially if you’re on a ventilator that has a breathing tube down your throat. You can’t talk, eat, or move around while you’re connected to the ventilator.

If you’re on a ventilator with a face mask, you’ll likely be able to talk, swallow, and cough.

Medication

Your doctor may give you medications that help you feel more relaxed and comfortable while you’re on a ventilator. This helps make being on a ventilator less traumatic. Medications that are most often given to people on a ventilator include:

* pain medications
* sedatives
* muscle relaxers
* sleep medications

These drugs often cause drowsiness and confusion. These effects will wear off once you stop taking them. You’ll no longer need medication once you’re done using the ventilator.

How you’re monitored

If you’re on a ventilator, you’ll likely need other medical equipment that monitors how you’re doing overall. You may need monitors for your:

* heart rate
* blood pressure
* respiratory rate (breathing)
* oxygen saturation

You may also need regular chest X-rays or scans.

Additionally, you may need blood tests to check how much oxygen and carbon dioxide are in your blood.

Risks of being on a ventilator

A ventilator can save your life. However, like other treatments, it can cause potential side effects. This is more common if you’re on a ventilator for a longer period of time.

Some of the most common risks associated with being on a ventilator include:

* Infection. This is one of the main risks of being on a ventilator with a breathing tube. Fluid and mucus build-up in your throat and windpipe can allow germs to accumulate on the breathing tube. These germs can then travel into your lungs. This can raise the risk of developing pneumonia. Sinus infections are also common with a breathing tube. You may need antibiotics to treat pneumonia or sinus infections.
* Irritation. The breathing tube can rub against and irritate your throat or lungs. It can also make it hard to cough. Coughing helps to get rid of dust and irritants in your lungs.
* Vocal cord issues. A breathing tube passes through your voice box (larynx), which contains your vocal cords. This is why you can’t speak when you’re using a ventilator. The breathing tube can damage your voice box.
* Pulmonary edema. The air sacs in your lungs can get filled up with fluid.
* Blood clots. Lying in the same position for a long time can increase the risk of blood clots forming.
* Sedation-related delirium. This can be caused by the sedatives and many other medications given to an individual who is on a ventilator with a breathing tube.
* Impairment of nerves and muscles. Lying still for many days, being sedated, and not breathing on your own can result in disorders of your nerves and muscles.
* Fluid overload. This can be caused by continuous infusions, drug toxicity, and renal failure.
* Lung injury. A ventilator can cause lung damage. This can happen for several reasons:

** too much air pressure in the lungs
** air leaks into the space between the lungs and chest wall (pneumothorax)
** oxygen toxicity (too much oxygen in the lungs)

What to expect when taken off a ventilator

If you’ve been on a ventilator for a long time, you may have difficulty breathing on your own once the ventilator isn’t breathing for you.

You may find that you have a sore throat or aching, weak chest muscles when you’re taken off the ventilator. This can happen because the muscles around your chest get weaker while the ventilator is doing the work of breathing for you. The medications you receive while on the ventilator may also contribute to your weakened muscles.

Sometimes it can take days or weeks for your lungs and chest muscles to get back to normal. Your doctor may recommend slowly weaning you off a ventilator. This means you won’t be completely taken off the ventilator. Instead, you’ll be taken off it gradually until your lungs are strong enough to breathe on their own without any help from the ventilator.

If you have pneumonia or another infection from a ventilator, you may still feel unwell after you’re off the ventilator. Tell your doctor if you feel worse or have new symptoms, like a fever.

If you’ve been on a ventilator for an extended period of time, many of the muscles in your body will be a lot weaker than they used to be. It may be hard to move around with ease and to do your usual daily activities. You may require prolonged physical therapy to regain your muscle strength and to be able to get back to your normal day-to-day life.

How to prepare if a loved one is put on a ventilator

If ventilation is being planned for your loved one, there are some steps you can take to help make things more comfortable for them and reduce their risk of complications:

* Be a supportive and calming presence to help ease their fears and discomfort. Being on a ventilator can be scary, and fuss and alarm can make things more uncomfortable and stressful for your loved one.
* Ask all visitors to properly wash their hands and wear face masks.
* Prevent young children or people who may be ill from visiting your loved one.
* Let your loved one rest. Avoid talking to them about topics or issues that may cause them distress.

The takeaway

Ventilators are breathing machines that help keep your lungs working. They can’t treat or fix a health problem. But they can do the breathing work for you while you’re being treated or recovering from an illness or health condition.

Ventilators can be lifesaving and an important part of treatment support for people of all ages, including children and babies.

How long you’re on a ventilator depends on how long you need help breathing or how long it takes for your underlying condition to be treated.

Some people may need a ventilator for only a few hours or less. Others may need it for days, weeks, or longer. You, your doctor, and your family can work together to decide whether using a ventilator is best for you and your health.

Additional Information

Mechanical ventilators have played an important, if controversial, role in the treatment of patients with severe coronavirus disease 2019 (COVID-19)—helping critically ill persons breathe in the near term, but with potentially harmful trade-offs for lung function over the long term. For COVID-19 patients the possibility of long-term harm is only beginning to surface, raising questions about how ventilators work and why they pose a risk to patients.

Mechanical ventilators are automated machines that do the work of breathing for patients who are unable to use their lungs. Ventilators commonly are used when patients are experiencing severe shortness of breath, such as that caused by respiratory infection or by conditions such as chronic obstructive pulmonary disease (COPD). They may also be used in persons with traumatic brain injury or stroke, when the nervous system is no longer able to control breathing.

Ventilators work by delivering oxygen directly to the lungs, and they can also be programmed to pump out carbon dioxide for patients who are unable to exhale on their own. The ventilator delivers oxygen via a tube that is inserted through the patient’s nose or mouth in a procedure known as intubation or that is placed directly into the trachea, or windpipe, in a surgical procedure known as tracheostomy. The opposite end of the tube is connected to a machine (the ventilator) that pumps a mixture of air and oxygen through the tube and into the lungs. The air is warmed and humidified before it goes into the body. The ventilator further plays a vital role in maintaining positive air pressure to help prevent small air sacs (alveoli) in the lungs from collapsing.

Ventilators are set to pump air into the lungs a certain number of times per minute. The patient’s heart rate, respiratory rate, and blood pressure are monitored constantly. Doctors and nurses use this information to assess the patient’s health and to make necessary adjustments to the ventilator. When a patient shows signs of recovery from infection or injury, the doctor may decide to begin the process of ventilator weaning, a trial in which the patient is given a chance to breathe on his or her own but is still connected to the ventilator in case it is needed. Once a patient is weaned from the ventilator, the breathing tube is removed.

Ventilators are not cures for infection, and their use poses serious risks to patients. While on a ventilator, patients are unable to cough and clear potentially infectious agents from their airways. As a result, some patients develop ventilator-associated pneumonia, in which bacteria enter the lungs. Sinus infections can also occur. Other problems include oxygen toxicity and excess air pressure, which can cause significant damage to lung tissue. In addition, the longer a person is on a ventilator, the greater the degree of respiratory muscle atrophy that will occur. This can make it difficult for patients to breathe on their own. Activities like climbing stairs or even walking short distances may become impossible, resulting in long-term disability and reduced quality of life.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Online

#1744 2023-04-20 14:31:24

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,463

Re: Miscellany

1647) Beach

Summary

A beach is a landform alongside a body of water which consists of loose particles. The particles composing a beach are typically made from rock, such as sand, gravel, shingle, pebbles, etc., or biological sources, such as mollusc shells or coralline algae. Sediments settle in different densities and structures, depending on the local wave action and weather, creating different textures, colors and gradients or layers of material.

Though some beaches form on inland freshwater locations such as lakes and rivers, most beaches are in coastal areas where wave or current action deposits and reworks sediments. Erosion and changing of beach geologies happens through natural processes, like wave action and extreme weather events. Where wind conditions are correct, beaches can be backed by coastal dunes which offer protection and regeneration for the beach. However, these natural forces have become more extreme due to climate change, permanently altering beaches at very rapid rates. Some estimates describe as much as 50 percent of the earth's sandy beaches disappearing by 2100 due to climate-change driven sea level rise.

Sandy beaches occupy about one third of global coastlines. These beaches are popular for recreation, playing important economic and cultural roles—often driving local tourism industries. To support these uses, some beaches have man-made infrastructure, such as lifeguard posts, changing rooms, showers, shacks and bars. They may also have hospitality venues (such as resorts, camps, hotels, and restaurants) nearby or housing, both for permanent and seasonal residents.

Human forces have significantly changed beaches globally: direct impacts include bad construction practices on dunes and coastlines, while indirect human impacts include water pollution, plastic pollution and coastal erosion from sea level rise and climate change. Some coastal management practices are designed to preserve or restore natural beach processes, while some beaches are actively restored through practices like beach nourishment.

Wild beaches, also known as undeveloped or undiscovered beaches, are not developed for tourism or recreation. Preserved beaches are important biomes with important roles in aquatic or marine biodiversity, such as breeding grounds for sea turtles or nesting areas for seabirds or penguins. Preserved beaches and their associated dunes also protect inland ecosystems and human infrastructure from extreme weather.

Details

A beach is a narrow, gently sloping strip of land that lies along the edge of an ocean, lake, or river. Materials such as sand, pebbles, rocks, and seashell fragments cover beaches.

Most beach materials are the products of weathering and erosion. Over many years, water and wind wear away at the land. The continual action of waves beating against a rocky cliff, for example, may cause some rocks to come loose. Huge boulders can be worn down to tiny grains of sand.

Beach materials may travel long distances, carried by wind and waves. As the tide comes in, for example, it deposits ocean sediment. This sediment may contain sand, shells, seaweed, even marine organisms like crabs or sea anemones. When the tide goes out, it takes some sediment with it.

Tides and ocean currents can carry sediment a few meters or hundreds of kilometers away. Tides and currents are the main way beaches are created, changed, and even destroyed, as the currents move sediment and debris from one place to another.

Beaches are constantly changing. Tides and weather can alter beaches every day, bringing new materials and taking away others.

Beaches also change seasonally. During the winter, storm winds toss sand into the air. This can sometimes erode beaches and create sandbars. Sandbars are narrow, exposed areas of sand and sediment just off the beach. During the summer, waves retrieve sand from sandbars and build the beach back up again. These seasonal changes cause beaches to be wider and have a gentle slope in the summer, and be narrower and steeper in the winter.

Beach Berms

Every beach has a beach profile. A beach profile describes the landscape of the beach, both above the water and below it. Beaches can be warm, and rich in vegetation such as palm or mangrove trees. Beaches can also be barren desert coastlines. Other beaches are cold and rocky, while beaches in the Arctic and Antarctic are frozen almost all year.

The area above the water, including the intertidal zone, is known as the beach berm. Beach berm can include vegetation, such as trees, shrubs, or grasses. The most familiar characteristic of a beach berm is its type of sand or rock.

Sandy

Most beach sand comes from several different sources. Some sand may be eroded bits of a rocky reef just offshore. Others may be eroded rock from nearby cliffs. Pensacola Beach, in the U.S. state of Florida, for instance, has white, sandy beaches. Some sand is eroded from rocks and minerals in the Gulf of Mexico. Most sand, however, is made of tiny particles of weathered quartz from the Appalachian Mountains, hundreds of kilometers away.

The sandy beaches surrounding Chameis Bay, Namibia, are also full of quartz and seashells. However, the beaches of Chameis Bay contain another type of rock—diamonds. Mining companies have dug mines both on the beach and offshore to excavate these precious stones. Other gems, such as sapphires, emeralds, and garnets, are present on many beaches throughout the world, as tiny grains of sand.

Rocky

Some beach berms are not sandy at all. They are covered with flat pebbles called shingles or rounded rocks known as cobbles. Such beaches are common along the coasts of the British Isles. Hastings Beach, a shingle beach on the southern coast of England, has been a dock for fishing boats for more than a thousand years.

A storm beach is a type of shingle beach that is often hit by heavy storms. Strong waves and winds batter storm beaches into narrow, steep landforms. The shingles on storm beaches are usually small near the water and large at the highest elevation.

Other types of beaches

Some beaches, called barrier beaches, protect the mainland from the battering of ocean waves. These beaches may lie at the heads of islands called barrier islands. Many barrier beaches and barrier islands stretch along the Atlantic and Gulf coasts of the United States. These narrow beaches form barriers between the open ocean and protected harbors, lagoons, and sounds.

Beaches near rivers are often muddy or soft. Soil and sediment from the river is carried to the river’s mouth, sometimes creating a fertile beach. Hoi An, Vietnam, is an ancient town that sits on the estuary of the Thu Bon River and the South China Sea. Hoi An’s soft beaches serve as resort and tourist center.

Beach berms can be many different colors. Coral beaches, common on islands in the Caribbean Sea, are white and powdery. They are made from the eroded exoskeletons of tiny animals called corals. Some coral beaches, such as Harbour Island, Bahamas, actually have pink sand. The corals that created these beaches were pink or red.

On some volcanic islands, beaches are jet-black. The sand on Punaluu Beach, Hawaii, is made of basalt, or lava that flowed into the ocean and instantly cooled. As it cooled, the basalt exploded into thousands of tiny fragments. Some volcanic beaches, such as those on the South Pacific island of Guam, are green. The basalt in these beaches contained a large amount of the mineral olivine.

Threats to Beaches

Coastal Erosion:

The most significant threat to beaches is natural coastal erosion. Coastal erosion is the natural process of the beach moving due to waves, storms, and wind. Beaches that experience consistent coastal erosion are said to be in retreat.

Coastal erosion can be influenced by weather systems. Beaches on the island nation of Tuvalu, in the South Pacific, were retreating very quickly in the 1990s. Meteorologists linked this to the weather system known as the El Nino-Southern Oscillation (ENSO). As ENSO events slowed, Tuvalu’s beaches began to recover.

People respond to coastal erosion in different ways. For years, coastal erosion threatened the Cape Hatteras Lighthouse, on Hatteras Island in the U.S. state of North Carolina. The Cape Hatteras Lighthouse is the tallest lighthouse in the United States. For more than 100 years, it has warned ships of the low-lying sandbars and islands known as the Outer Banks. Coastal erosion made the beach beneath the lighthouse unstable. In 2000, the entire lighthouse was moved 870 meters (2,870 feet) inland.

People also combat coastal erosion with seawalls. These large structures, built of rock, plastic, or concrete, are constructed to prevent sand and other beach material from drifting away. Residents of Sea Gate, a community in Coney Island, New York, for instance, invested in a series of seawalls to protect their homes from powerful storms and waves from the Atlantic Ocean.

However, shifting sand is a natural part of the beach ecosystem. Seawalls may protect one section of beach while leaving another with little sand. Seawalls can also increase the speed at which beaches retreat. When tides and waves hit massive seawalls instead of beaches, they bounce back to the ocean with more energy. This tidal energy causes the sand in front of a seawall to erode much more quickly than it would without the seawall.

Hurricane Sandy was a deadly storm that struck the East Coast of the United States in October 2012. Many of the seawalls of Sea Gate crumbled, and more than 25 homes were lost.

Sea Level Rise:

Beaches are also threatened by sea level rise. Sea levels have been gradually rising for many years, drowning some beaches completely.

New Moore Island, for example, was a small, uninhabited island in the Bay of Bengal. Both India and Bangladesh claimed the island, which was little more than a strip of sandy beach. In March 2010, rising sea levels drowned the island completely. New Moore Island is now a sandbar.

Development

Although the natural forces of wind and water can dramatically change beaches over many years, human activity can speed up the process. Dams, which block river sediment from reaching beaches, can cause beaches to retreat. In some places, large quantities of sand have been removed from beaches for use in making concrete.

Development threatens the natural landscape of beaches. People develop homes and businesses near beaches for many reasons. Beaches are traditional tourist destinations. Places like the U.S. state of Hawaii, the island nation of Tahiti, and the islands of Greece are all economically dependent on tourism. Businesses, such as charter boat facilities, restaurants, and hotels, are built on the beach.

People also enjoy living near beaches. Beachfront property is often very highly valued. “The Hamptons” are exclusive beach communities on the eastern end of Long Island, New York. Homes in the Hamptons are some of the most expensive in the United States.

Development can crowd beaches. As more buildings and other facilities are built, beaches become narrower and narrower. The natural, seasonal movement of beach sediment is disrupted. Communities spend millions of dollars digging, or dredging, sand from one place to another in order to keep the beach the same all year.

Disappearing beaches are bad for coastal facilities. Natural beaches reduce the power of waves, wind, and storm surges. Without these barrier beaches, waves and storm surges crash directly into buildings. In 1992, a storm swept away more than 200 homes in the Hamptons. It cost the government more than $80 million to replace the barrier beach.

On Kauai, one of the islands in Hawaii, more than 70 percent of the beach is eroding, partly because of construction of seawalls and jetties, and from clearing out stream mouths. Geologists say Oahu, another Hawaiian island, has lost 25 percent of its shoreline. Tourism is the state’s main industry, so disappearing beaches are a major concern. The destruction of Hawaii’s beaches could also mean a loss of habitat for many plants and animals, some of which are already endangered.

Beach Pollution

Many beaches, especially in urban areas, are extremely polluted. Waves wash up debris from the ocean, while drainage pipes or rivers deposit waste from inland areas. Some of this waste includes sewage and other toxic chemicals. After strong storms, some beaches are closed because the levels of bacteria, raw sewage, and other toxic chemicals are hazardous to human health. Sometimes, it takes days or even weeks for the toxic waters to wash out to sea.

Beach pollution also includes garbage, such as plastic bags, cans, and other containers from picnics. Medical waste, such as needles and surgical instruments, has even washed up on beaches.

All beach pollution is harmful to wildlife. Birds may choke on small bits of plastic. Marine mammals such as sea lions may become tangled in ropes, twine, or other material. Floating plastic may prevent algae or sea plants from developing. This prevents animals that live in tide pools, such as sea anemones or sea stars, from finding nutrients.

Protecting Beaches

Reducing pollution is an important way to protect beaches. Visitors should never leave trash on the beach or throw it in the ocean.

Beachgoers should also leave wildlife alone—including birds, plants, and seaweed. Taking shells or live animals from the beach destroys the habitat.

People can also protect beaches from excess erosion. Limiting beachfront development can be an important step in protecting the natural landscape of beaches. Along some beaches, areas of vegetation known as “living shorelines” protect the beach ecosystem from erosion and protect the inland area from floods and storm surges.

In some places, machinery is used to dredge sand from the seabed just offshore and return it to the beach. Miami Beach, in the U.S. state of Florida, was restored by this method.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1745 2023-04-21 13:22:41

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,463

Re: Miscellany

1648) Venae cavae

Gist

What is the vena cava? The superior vena cava and inferior vena cava are very large veins that return deoxygenated blood to your heart, which then sends it to your lungs to pick up oxygen. Your inferior vena cava, your body's largest vein, carries oxygen-depleted blood back to your heart from the lower part of your body.

Summary

In anatomy, the venae cavae are two large veins (great vessels) that return deoxygenated blood from the body into the heart. In humans they are the superior vena cava and the inferior vena cava, and both empty into the right atrium. They are located slightly off-center, toward the right side of the body.

The right atrium receives deoxygenated blood through the coronary sinus and two large veins called the venae cavae. The inferior vena cava (or caudal vena cava in some animals) travels up alongside the abdominal aorta with blood from the lower part of the body. It is the largest vein in the human body.

The superior vena cava (or cranial vena cava in animals) is above the heart, and forms from a convergence of the left and right brachiocephalic veins, which contain blood from the head and the arms.

Details

The venae cavae, in air-breathing vertebrates including humans, are the two major venous trunks, the anterior and posterior venae cavae, that deliver oxygen-depleted blood to the right side of the heart. The anterior vena cava, also known as the precava, drains the head end of the body, while the posterior vena cava, or postcava, drains the tail, or rear, end. In humans these veins are respectively called the superior and inferior venae cavae. Whereas many mammals, including humans, have only one anterior vena cava, other animals have two.

Superior vena cava.

Not far below the collarbone and in back of the right side of the breastbone, two large veins, the right and left brachiocephalic, join to form the superior vena cava. The brachiocephalic veins, as their name implies—being formed from the Greek words for “arm” and “head”—carry blood that has been collected from the head and neck and the arms; they also drain blood from much of the upper half of the body, including the upper part of the spine and the upper chest wall. A large vein, the azygos, which receives oxygen-poor blood from the chest wall and the bronchi, opens into the superior vena cava close to the point at which the latter passes through the pericardium, the sac that encloses the heart. The superior vena cava extends down about 7 cm (2.7 inches) before it opens into the right upper chamber—the right atrium of the heart. There is no valve at the heart opening.

Inferior vena cava.

The inferior vena cava is formed by the coming together of the two major veins from the legs, the common iliac veins, at the level of the fifth lumbar vertebra, just below the small of the back. Unlike the superior vena cava, it has a substantial number of tributaries between its point of origin and its terminus at the heart. These include the veins that collect blood from the muscles and coverings of the loins and from the walls of the abdomen, from the reproductive organs, from the kidneys, and from the liver. In its course to the heart the inferior vena cava ascends close to the backbone; passes the liver, in the dorsal surface of which it forms a groove; enters the chest through an opening in the diaphragm; and empties into the right atrium of the heart at a non-valve opening below the point of entry for the superior vena cava.

Additional Information

The vena cava is a large vein that collects blood from either the upper or lower half of your body. It receives blood from several smaller veins. This is blood with the oxygen removed that the vena cava transports to the right side of the heart. Blockage or injury of a vena cava can have serious consequences for your health.

What Is the Vena Cava?

A vena cava (plural: venae cavae) is a large vein that carries blood to the heart. You have two venae cavae: the superior vena cava and the inferior vena cava. Together, these large veins carry deoxygenated (with the oxygen removed) blood from all over the body to the right atrium of the heart. This blood moves to the right ventricle of the heart, which pumps it to the lungs through the pulmonary artery.

These large veins are formed by the merging of smaller veins. After the venae cavae are formed, smaller veins connect with them along their path. The blood from all over the head, arms, and chest is collected by various veins that all contribute to the superior vena cava. Similarly, the blood from the lower limbs, pelvis, and abdominal organs reaches the inferior vena cava.

When you're at rest, your heart pumps five to six liters of blood a minute. If you're exercising hard, this can go up to 35 liters a minute. The venae cavae bring this blood back.

What Does the Vena Cava Do?

The venae cavae are veins, so they have one job — to carry blood from all the tissues and organs of the body to the heart.

The superior vena cava carries blood drained from the parts of the body above the diaphragm — the head, neck, arms, shoulder, and chest.

The inferior vena cava transports blood from your lower limbs, liver, digestive system, kidneys, reproductive system, and other organs and tissues of the body below the diaphragm.

Where Is the Vena Cava Located?

The superior vena cava begins behind the sternum (breast bone) near the right first rib. It travels down and drains into the right atrium at the level of the third rib. It is a short vein about 7 centimeters long and runs close to the right lung in the space between the two lungs.


The inferior vena cava goes up the abdomen on the right side of the spine (vertebral column). After connecting with the hepatic vein, it goes through the diaphragm, the muscle that helps you breathe and separates your chest cavity from your abdomen. In the chest, the inferior vena cava lies on the right side of the space between the lungs. Reaching the heart, it opens into the right atrium.

The two venae cavae are in line with each other vertically. This allows doctors to pass a guidewire or catheter from the superior vena cava through the right atrium and into the inferior vena cava.

Signs Something Could Be Wrong With Your Vena Cava

Any blockage of a vena cava can cause difficulty in blood flow. Blockage of the superior vena cava can cause:

* Swelling of your upper body
* Breathlessness
* Cough
* Headache
* Angina (pain in the chest)
* Flushing of the face
* Difficulty swallowing

Blockage of the inferior vena cava can cause:

* Pain and swelling in your legs
* Weight gain
* Back pain

If you notice any of these symptoms, you should consult your healthcare provider.

What Conditions Affect the Vena Cava?

Obstruction of a vena cava reduces blood flow through it. Some conditions that cause this:

* Tumors in the nearby organs, most often lung cancers and lymphomas
* Blood clots
* Birth defects

The inferior vena cava can be injured by gunshots, stab wounds, or surgery. Injury to this vein causes rapid, heavy blood loss and can lead to death. Surgeons clamp the vein above and below the injured part and try to repair it. As a last resort, the inferior vena cava can be tied off (ligated), but this results in significant problems later.

The superior vena cava may be injured during medical procedures. Repair is challenging because clamping the superior vena cava stops blood flow from the head and brain. Outcomes can be poor.

Deep vein thrombosis (DVT) is a condition in which blood clots form in the veins of the legs and pelvis. These clots sometimes detach from their place and reach the inferior vena cava. They can then pass through the right side of the heart and lodge in the arteries of the lungs, a dangerous condition called pulmonary embolism. DVT is treated with medicines to prevent clotting. Your doctor may advise the placement of a filter in the inferior vena cava to prevent any detached blood clots from reaching the heart.

Blockage of the blood flow in a vena cava needs treatment. Obstruction caused by a tumor is treated with surgery, chemotherapy, or radiation therapy. Medicines are prescribed to reduce blood clot formation. Tubes called stents can be placed inside a blood vessel to allow blood flow.

[Image: chambers of the human heart]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1746 2023-04-22 13:34:08

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,463

Re: Miscellany

1649) Blacksmith

Summary

A blacksmith, also called a smith, is a craftsman who fabricates objects out of iron by hot and cold forging on an anvil. Blacksmiths who specialized in the forging of shoes for horses were called farriers. The term blacksmith derives from iron, formerly called “black metal,” and farrier from the Latin ferrum, “iron.”

Iron replaced bronze for use in tools and weapons in the late 2nd and the 1st millennia BC, and from then until the Industrial Revolution, blacksmiths made by hand most of the wrought iron objects used in the world. The blacksmith’s essential equipment consists of a forge, or furnace, in which smelted iron is heated so that it can be worked easily; an anvil, a heavy, firmly secured, steel-surfaced block upon which the piece of iron is worked; tongs to hold the iron on the anvil; and hammers, chisels, and other implements to cut, shape, flatten, or weld the iron into the desired object.

Blacksmiths made an immense variety of common objects used in everyday life: nails, screws, bolts, and other fasteners; sickles, plowshares, axes, and other agricultural implements; hammers and other tools used by artisans; candlesticks and other household objects; swords, shields, and armour; wheel rims and other metal parts in wagons and carriages; fireplace fittings and implements; spikes, chains, and cables used on ships; and the ironwork, both functional and decorative, used in furniture and in the building trades.

The blacksmith’s most frequent occupation, however, was farriery. In horseshoeing, the blacksmith first cleans and shapes the sole and rim of the horse’s hoof with rasps and knives, a process painless to the animal owing to the tough, horny, and nerveless character of the hoof. He then selects a U-shaped iron shoe of appropriate size from his stock and, heating it red-hot in a forge, modifies its shape to fit the hoof, cools it by quenching it in water, and affixes it to the hoof with nails.

Most towns and villages had a blacksmith’s shop where horses were shod and tools, farm implements, and wagons and carriages were repaired. The ubiquity of the profession can be inferred, in the English-speaking world, from the prevalence of the surname “Smith.” Blacksmiths also came to be general-purpose repairers of farm equipment and other machinery in the 19th century. By then, however, blacksmithing was already on the decline, as more and more metal articles formerly made by hand were shaped in factories by machines or made by inexpensive casting processes. In the industrialized world, even the blacksmith’s mainstay, farriery, has greatly declined with the disappearance of horses from use in agriculture and transport.

Details

A blacksmith is a metalsmith who creates objects primarily from wrought iron or steel, but sometimes from other metals, by forging the metal, using tools to hammer, bend, and cut (cf. tinsmith). Blacksmiths produce objects such as gates, grilles, railings, light fixtures, furniture, sculpture, tools, agricultural implements, decorative and religious items, cooking utensils, and weapons. There was an historical distinction between the heavy work of the blacksmith and the more delicate operation of a whitesmith, who usually worked in gold, silver, pewter, or the finishing steps of fine steel. The place where a blacksmith works is called variously a smithy, a forge or a blacksmith's shop.

While there are many people who work with metal such as farriers, wheelwrights, and armorers, in former times the blacksmith had a general knowledge of how to make and repair many things, from the most complex of weapons and armor to simple things like nails or lengths of chain.

Blacksmith's striker

A blacksmith's striker is an assistant (frequently an apprentice) whose job is to swing a large sledgehammer in heavy forging operations, as directed by the blacksmith. In practice, the blacksmith holds the hot iron at the anvil (with tongs) in one hand, and indicates where to strike the iron by tapping it with a small hammer in the other hand. The striker then delivers a heavy blow to the indicated spot with a sledgehammer. During the 20th century and into the 21st century, this role has become increasingly unnecessary and automated through the use of trip hammers or reciprocating power hammers.

Blacksmith's materials

When iron ore is smelted into usable metal, a certain amount of carbon is usually alloyed with the iron (charcoal is almost pure carbon). The amount of carbon significantly affects the properties of the metal. If the carbon content is over 2%, the metal is called cast iron, because it has a relatively low melting point and is easily cast. It is quite brittle, however, and cannot be forged, so it is not used for blacksmithing. If the carbon content is between 0.25% and 2%, the resulting metal is tool steel, which can be hardened and tempered by heat treatment. When the carbon content is below 0.25%, the metal is mild steel; traditional wrought iron is similarly low in carbon but is made by a different process, and the two terms are not interchangeable.

In preindustrial times, the material of choice for blacksmiths was wrought iron. This iron had a very low carbon content and also included up to 5% of glassy iron silicate slag in the form of numerous very fine stringers. This slag content made the iron very tough, gave it considerable resistance to rusting, and allowed it to be more easily "forge welded," a process in which the blacksmith permanently joins two pieces of iron, or a piece of iron and a piece of steel, by heating them nearly to a white heat and hammering them together. Forge welding is more difficult with modern mild steel, because it welds in a narrower temperature band. The fibrous nature of wrought iron required knowledge and skill to properly form any tool that would be subject to stress.

Modern steel is produced in blast furnaces or arc furnaces. Wrought iron was produced by a labor-intensive process called puddling, so it is now a difficult-to-find specialty product. Modern blacksmiths generally substitute mild steel for objects traditionally made of wrought iron, and sometimes they use electrolytic-process pure iron.

Terminology

* Iron is a naturally occurring metallic element. It is almost never found in its native form (pure iron) in nature. It is usually found as an oxide or sulfide, with many other impurity elements mixed in.

* Wrought iron is the purest form of iron generally encountered or produced in quantity. It may contain as little as 0.04% carbon (by weight). From its traditional method of manufacture, wrought iron has a fibrous internal texture. Quality wrought-iron blacksmithing takes the direction of these fibers into account during forging, since the strength of the material is stronger in line with the grain than across the grain. Most of the remaining impurities from the initial smelting become concentrated in silicate slag trapped between the iron fibers. This slag produces a lucky side effect during forge-welding. When the silicate melts, it makes wrought iron self-fluxing. The slag becomes a liquid glass that covers the exposed surfaces of the wrought iron, preventing oxidation which would otherwise interfere with the successful welding process.

* Steel is an alloy of iron and between 0.3% and 1.7% carbon by weight. The presence of carbon allows steel to assume one of several different crystalline configurations. Macroscopically, this is seen as the ability to "turn the hardness of a piece of steel on and off" through various processes of heat-treatment. If the concentration of carbon is held constant, this is a reversible process. Steel with a higher carbon percentage may be brought to a higher state of maximum hardness.

* Cast iron is iron that contains between 2.0% and 6% carbon by weight. There is so much carbon present that the hardness cannot be switched off. Hence, cast iron is a brittle metal, which can break like glass. Cast iron cannot be forged without special heat treatment to convert it to malleable iron.

Steel with less than 0.6% carbon content cannot be hardened enough by simple heat-treatment to make useful hardened-steel tools. Hence, in what follows, wrought-iron, low-carbon-steel, and other soft unhardenable iron varieties are referred to indiscriminately as just iron.
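
The carbon bands quoted above can be summarized in a tiny lookup. The following is a hypothetical Python sketch only; the function name is invented and the cut-offs are taken from the approximate percentages in this post, while real metallurgical classification is far more nuanced.

def classify_by_carbon(carbon_percent: float) -> str:
    """Rough label for an iron alloy, using the carbon bands quoted in this post."""
    if carbon_percent >= 2.0:
        return "cast iron (brittle, easily cast, not forgeable)"
    if carbon_percent >= 0.6:
        return "hardenable steel (suitable for hardened tools)"
    if carbon_percent >= 0.25:
        return "low-carbon steel (forgeable, but not usefully hardenable)"
    return "mild steel or wrought iron (treated here simply as 'iron')"

# Examples using the bands above.
print(classify_by_carbon(3.5))   # cast iron
print(classify_by_carbon(1.0))   # hardenable steel
print(classify_by_carbon(0.1))   # mild steel or wrought iron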

20th and 21st centuries

During the 20th century various gases (natural gas, acetylene, etc.) have also come to be used as fuels for blacksmithing. While these are fine for blacksmithing iron, special care must be taken when using them to blacksmith steel. Each time a piece of steel is heated, there is a tendency for the carbon content to leave the steel (decarburization). This can leave a piece of steel with an effective layer of unhardenable iron on its surface. In a traditional charcoal or coal forge, the fuel is really just carbon. In a properly regulated charcoal/coal fire, the air in and immediately around the fire should be a reducing atmosphere. In this case, and at elevated temperatures, there is a tendency for vaporized carbon to soak into steel and iron, counteracting or negating the decarburizing tendency. This is similar to the process by which a case of steel is developed on a piece of iron in preparation for case hardening.

A renewed interest in blacksmithing emerged as part of the "do-it-yourself" and "self-sufficiency" trend of the 1970s. Currently there are many books, organizations and individuals working to help educate the public about blacksmithing, including local groups of smiths who have formed clubs, with some of those smiths demonstrating at historical sites and living history events. Some modern blacksmiths who produce decorative metalwork refer to themselves as artist-blacksmiths. In 1973 the Artist-Blacksmith's Association of North America was formed with 27 members; by 2013 it had almost 4,000 members. Likewise, the British Artist Blacksmiths Association was created in 1978 with 30 charter members; by 2013 it had about 600 members and published a quarterly magazine for them.

While developed nations saw a decline and re-awakening of interest in blacksmithing, in many developing nations blacksmiths continued making and repairing iron and steel tools and hardware for people in their local area.

[Image: blacksmithing]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1747 2023-04-23 16:32:26

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,463

Re: Miscellany

1650) Artificial Intelligence

Gist

What is artificial intelligence (AI)? Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition and machine vision.

Summary

Artificial intelligence (AI) is intelligence—perceiving, synthesizing, and inferring information—demonstrated by machines, as opposed to intelligence displayed by non-human animals or by humans. Example tasks in which this is done include speech recognition, computer vision, translation between (natural) languages, as well as other mappings of inputs.

AI applications include advanced web search engines (e.g., Google Search), recommendation systems (used by YouTube, Amazon, and Netflix), understanding human speech (such as Siri and Alexa), self-driving cars (e.g., Waymo), generative or creative tools (ChatGPT and AI art), automated decision-making, and competing at the highest level in strategic game systems (such as chess and Go).

As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect. For instance, optical character recognition is frequently excluded from things considered to be AI, having become a routine technology.

Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success, and renewed funding. AI research has tried and discarded many different approaches, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. In the first decades of the 21st century, highly mathematical and statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.

The various sub-fields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and the ability to move and manipulate objects. General intelligence (the ability to solve an arbitrary problem) is among the field's long-term goals. To solve these problems, AI researchers have adapted and integrated a wide range of problem-solving techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, probability, and economics. AI also draws upon computer science, psychology, linguistics, philosophy, and many other fields.

The field was founded on the assumption that human intelligence "can be so precisely described that a machine can be made to simulate it". This raised philosophical arguments about the mind and the ethical consequences of creating artificial beings endowed with human-like intelligence; these issues have previously been explored by myth, fiction, and philosophy since antiquity. Computer scientists and philosophers have since suggested that AI may become an existential risk to humanity if its rational capacities are not steered towards beneficial goals. The term artificial intelligence has also been criticized for overhyping AI's true technological capabilities.

Details

What is artificial intelligence (AI)?

Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition and machine vision.

How does AI work?

As the hype around AI has accelerated, vendors have been scrambling to promote how their products and services use it. Often, what they refer to as AI is simply a component of the technology, such as machine learning. AI requires a foundation of specialized hardware and software for writing and training machine learning algorithms. No single programming language is synonymous with AI, but Python, R, Java, C++ and Julia have features popular with AI developers.

In general, AI systems work by ingesting large amounts of labeled training data, analyzing the data for correlations and patterns, and using these patterns to make predictions about future states. In this way, a chatbot that is fed examples of text can learn to generate lifelike exchanges with people, or an image recognition tool can learn to identify and describe objects in images by reviewing millions of examples. New, rapidly improving generative AI techniques can create realistic text, images, music and other media.
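
As a concrete, deliberately tiny illustration of that idea, here is a hypothetical Python sketch that "trains" on a handful of labeled feature vectors by averaging them per class and then predicts the class of a new input by choosing the nearest average. Every data point and name below is invented for the example; it stands in for pattern learning in general, not for any particular production system.

# Minimal illustration of learning patterns from labeled data, then predicting.
# Hypothetical toy data: each example is (feature_vector, label).
from math import dist

training_data = [
    ((1.0, 1.2), "cat"),
    ((0.9, 1.0), "cat"),
    ((3.0, 3.5), "dog"),
    ((3.2, 3.1), "dog"),
]

def train(examples):
    """Compute one average feature vector (centroid) per label."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
    return {label: tuple(v / counts[label] for v in acc)
            for label, acc in sums.items()}

def predict(centroids, features):
    """Predict the label whose centroid is closest to the new input."""
    return min(centroids, key=lambda label: dist(centroids[label], features))

model = train(training_data)
print(predict(model, (1.1, 1.1)))  # expected: "cat"
print(predict(model, (3.1, 3.3)))  # expected: "dog"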

AI programming focuses on cognitive skills that include the following:

* Learning. This aspect of AI programming focuses on acquiring data and creating rules for how to turn it into actionable information. The rules, which are called algorithms, provide computing devices with step-by-step instructions for how to complete a specific task.
* Reasoning. This aspect of AI programming focuses on choosing the right algorithm to reach a desired outcome.
* Self-correction. This aspect of AI programming is designed to continually fine-tune algorithms and ensure they provide the most accurate results possible.
* Creativity. This aspect of AI uses neural networks, rules-based systems, statistical methods and other AI techniques to generate new images, new text, new music and new ideas.

Differences between AI, machine learning and deep learning

AI, machine learning and deep learning are common terms in enterprise IT and sometimes used interchangeably, especially by companies in their marketing materials. But there are distinctions. The term AI, coined in the 1950s, refers to the simulation of human intelligence by machines. It covers an ever-changing set of capabilities as new technologies are developed. Technologies that come under the umbrella of AI include machine learning and deep learning.

Machine learning enables software applications to become more accurate at predicting outcomes without being explicitly programmed to do so. Machine learning algorithms use historical data as input to predict new output values. This approach became vastly more effective with the rise of large data sets to train on. Deep learning, a subset of machine learning, is based on our understanding of how the brain is structured. Deep learning's use of artificial neural network structures underpins recent advances in AI, including self-driving cars and ChatGPT.

Why is artificial intelligence important?

AI is important for its potential to change how we live, work and play. It has been effectively used in business to automate tasks done by humans, including customer service work, lead generation, fraud detection and quality control. In a number of areas, AI can perform tasks much better than humans. Particularly when it comes to repetitive, detail-oriented tasks, such as analyzing large numbers of legal documents to ensure relevant fields are filled in properly, AI tools often complete jobs quickly and with relatively few errors. Because of the massive data sets it can process, AI can also give enterprises insights into their operations they might not have been aware of. The rapidly expanding population of generative AI tools will be important in fields ranging from education and marketing to product design.

Indeed, advances in AI techniques have not only helped fuel an explosion in efficiency, but opened the door to entirely new business opportunities for some larger enterprises. Prior to the current wave of AI, it would have been hard to imagine using computer software to connect riders to taxis, but Uber has become a Fortune 500 company by doing just that.

AI has become central to many of today's largest and most successful companies, including Alphabet, Apple, Microsoft and Meta, where AI technologies are used to improve operations and outpace competitors. At Alphabet subsidiary Google, for example, AI is central to its search engine, Waymo's self-driving cars and Google Brain, which invented the transformer neural network architecture that underpins the recent breakthroughs in natural language processing.

What are the advantages and disadvantages of artificial intelligence?

Artificial neural networks and deep learning AI technologies are quickly evolving, primarily because AI can process large amounts of data much faster and make predictions more accurately than humanly possible.

While the huge volume of data created on a daily basis would bury a human researcher, AI applications using machine learning can take that data and quickly turn it into actionable information. As of this writing, a primary disadvantage of AI is that it is expensive to process the large amounts of data AI programming requires. As AI techniques are incorporated into more products and services, organizations must also be attuned to AI's potential to create biased and discriminatory systems, intentionally or inadvertently.

Advantages of AI

The following are some advantages of AI.

* Good at detail-oriented jobs. AI has proven to be as good or better than doctors at diagnosing certain cancers, including breast cancer and melanoma.
* Reduced time for data-heavy tasks. AI is widely used in data-heavy industries, including banking and securities, pharma and insurance, to reduce the time it takes to analyze big data sets. Financial services, for example, routinely use AI to process loan applications and detect fraud.
* Saves labor and increases productivity. An example here is the use of warehouse automation, which grew during the pandemic and is expected to increase with the integration of AI and machine learning.
* Delivers consistent results. The best AI translation tools deliver high levels of consistency, offering even small businesses the ability to reach customers in their native language.
* Can improve customer satisfaction through personalization. AI can personalize content, messaging, ads, recommendations and websites to individual customers.
* AI-powered virtual agents are always available. AI programs do not need to sleep or take breaks, providing 24/7 service.

Disadvantages of AI

The following are some disadvantages of AI.

* Expensive.
* Requires deep technical expertise.
* Limited supply of qualified workers to build AI tools.
* Reflects the biases of its training data, at scale.
* Lack of ability to generalize from one task to another.
* Eliminates human jobs, increasing unemployment rates.

Strong AI vs. weak AI

AI can be categorized as weak or strong.

Weak AI, also known as narrow AI, is designed and trained to complete a specific task. Industrial robots and virtual personal assistants, such as Apple's Siri, use weak AI.

Strong AI, also known as artificial general intelligence (AGI), describes programming that can replicate the cognitive abilities of the human brain. When presented with an unfamiliar task, a strong AI system can use fuzzy logic to apply knowledge from one domain to another and find a solution autonomously. In theory, a strong AI program should be able to pass the Turing test and meet the challenge posed by the Chinese room argument.

What are the 4 types of artificial intelligence?

Arend Hintze, an assistant professor of integrative biology and computer science and engineering at Michigan State University, explained that AI can be categorized into four types, beginning with the task-specific intelligent systems in wide use today and progressing to sentient systems, which do not yet exist. The categories are as follows.

Type 1: Reactive machines. These AI systems have no memory and are task-specific. An example is Deep Blue, the IBM chess program that beat Garry Kasparov in the 1990s. Deep Blue can identify pieces on a chessboard and make predictions, but because it has no memory, it cannot use past experiences to inform future ones.
Type 2: Limited memory. These AI systems have memory, so they can use past experiences to inform future decisions. Some of the decision-making functions in self-driving cars are designed this way.
Type 3: Theory of mind. Theory of mind is a psychology term. When applied to AI, it means the system would have the social intelligence to understand emotions. This type of AI will be able to infer human intentions and predict behavior, a necessary skill for AI systems to become integral members of human teams.
Type 4: Self-awareness. In this category, AI systems have a sense of self, which gives them consciousness. Machines with self-awareness understand their own current state. This type of AI does not yet exist.

These are commonly described as the four main types of AI.

What are examples of AI technology and how is it used today?

AI is incorporated into a variety of different types of technology. Here are seven examples.

* Automation. When paired with AI technologies, automation tools can expand the volume and types of tasks performed. An example is robotic process automation (RPA), a type of software that automates repetitive, rules-based data processing tasks traditionally done by humans. When combined with machine learning and emerging AI tools, RPA can automate bigger portions of enterprise jobs, enabling RPA's tactical bots to pass along intelligence from AI and respond to process changes.

* Machine learning. This is the science of getting computers to act without being explicitly programmed. Deep learning is a subset of machine learning that, in very simple terms, can be thought of as the automation of predictive analytics. There are three types of machine learning algorithms (a small sketch contrasting them follows this list):

** Supervised learning. Data sets are labeled so that patterns can be detected and used to label new data sets.
** Unsupervised learning. Data sets aren't labeled and are sorted according to similarities or differences.
** Reinforcement learning. Data sets aren't labeled but, after performing an action or several actions, the AI system is given feedback.

* Machine vision. This technology gives a machine the ability to see. Machine vision captures and analyzes visual information using a camera, analog-to-digital conversion and digital signal processing. It is often compared to human eyesight, but machine vision isn't bound by biology and can be programmed to see through walls, for example. It is used in a range of applications from signature identification to medical image analysis. Computer vision, which is focused on machine-based image processing, is often conflated with machine vision.

* Natural language processing (NLP). This is the processing of human language by a computer program. One of the older and best-known examples of NLP is spam detection, which looks at the subject line and text of an email and decides if it's junk. Current approaches to NLP are based on machine learning. NLP tasks include text translation, sentiment analysis and speech recognition.

* Robotics. This field of engineering focuses on the design and manufacturing of robots. Robots are often used to perform tasks that are difficult for humans to perform or perform consistently. For example, robots are used in car production assembly lines or by NASA to move large objects in space. Researchers also use machine learning to build robots that can interact in social settings.

* Self-driving cars. Autonomous vehicles use a combination of computer vision, image recognition and deep learning to build automated skills to pilot a vehicle while staying in a given lane and avoiding unexpected obstructions, such as pedestrians.

* Text, image and audio generation. Generative AI techniques, which create various types of media from text prompts, are being applied extensively across businesses to create a seemingly limitless range of content types from photorealistic art to email responses and screenplays.
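
As promised in the machine learning item above, here is a hedged, toy-scale Python sketch contrasting the three paradigms: a supervised threshold learned from labeled points, a crude one-dimensional clustering of unlabeled points, and an epsilon-greedy bandit that learns from reward feedback. All data, thresholds and rewards are invented for illustration and stand in for the paradigms only, not for any specific library or product.

# Toy contrasts of the three machine learning paradigms listed above.
import random

# 1. Supervised learning: labeled examples -> learn a decision rule.
labeled = [(1.0, 0), (1.5, 0), (3.0, 1), (3.5, 1)]    # (feature, label)
mean0 = sum(x for x, y in labeled if y == 0) / 2      # average of class 0
mean1 = sum(x for x, y in labeled if y == 1) / 2      # average of class 1
threshold = (mean0 + mean1) / 2                       # decision boundary

def predict(x):
    return 0 if x < threshold else 1

print("supervised:", predict(1.2), predict(3.2))      # -> 0 1

# 2. Unsupervised learning: no labels -> group points by similarity.
points = [1.0, 1.2, 0.9, 5.0, 5.3, 4.8]
centers = [min(points), max(points)]                  # crude 1-D k-means, k=2
for _ in range(10):
    groups = {0: [], 1: []}
    for p in points:
        groups[min((0, 1), key=lambda c: abs(p - centers[c]))].append(p)
    centers = [sum(g) / len(g) for g in (groups[0], groups[1])]
print("unsupervised centers:", centers)               # roughly [1.03, 5.03]

# 3. Reinforcement learning: act, get feedback, adjust (epsilon-greedy bandit).
true_payouts = [0.3, 0.7]                             # hidden reward probabilities
estimates, counts = [0.0, 0.0], [0, 0]
for _ in range(500):
    explore = random.random() < 0.1
    arm = random.randrange(2) if explore else estimates.index(max(estimates))
    reward = 1 if random.random() < true_payouts[arm] else 0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]   # running average
print("reinforcement estimates:", [round(e, 2) for e in estimates])  # near [0.3, 0.7]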

AI is not just one technology.

What are the applications of AI?

Artificial intelligence has made its way into a wide variety of markets. Here are 11 examples.

* AI in healthcare. The biggest bets are on improving patient outcomes and reducing costs. Companies are applying machine learning to make better and faster medical diagnoses than humans. One of the best-known healthcare technologies is IBM Watson. It understands natural language and can respond to questions asked of it. The system mines patient data and other available data sources to form a hypothesis, which it then presents with a confidence scoring schema. Other AI applications include using online virtual health assistants and chatbots to help patients and healthcare customers find medical information, schedule appointments, understand the billing process and complete other administrative processes. An array of AI technologies is also being used to predict, fight and understand pandemics such as COVID-19.

* AI in business. Machine learning algorithms are being integrated into analytics and customer relationship management (CRM) platforms to uncover information on how to better serve customers. Chatbots have been incorporated into websites to provide immediate service to customers. The rapid advancement of generative AI technology such as ChatGPT is expected to have far-reaching consequences: eliminating jobs, revolutionizing product design and disrupting business models.

* AI in education. AI can automate grading, giving educators more time for other tasks. It can assess students and adapt to their needs, helping them work at their own pace. AI tutors can provide additional support to students, ensuring they stay on track. The technology could also change where and how students learn, perhaps even replacing some teachers. As demonstrated by ChatGPT, Bard and other large language models, generative AI can help educators craft course work and other teaching materials and engage students in new ways. The advent of these tools also forces educators to rethink student homework and testing and revise policies on plagiarism.

* AI in finance. AI in personal finance applications, such as Intuit Mint or TurboTax, is disrupting financial institutions. Applications such as these collect personal data and provide financial advice. Other programs, such as IBM Watson, have been applied to the process of buying a home. Today, artificial intelligence software performs much of the trading on Wall Street.

* AI in law. The discovery process -- sifting through documents -- in law is often overwhelming for humans. Using AI to help automate the legal industry's labor-intensive processes is saving time and improving client service. Law firms use machine learning to describe data and predict outcomes, computer vision to classify and extract information from documents, and NLP to interpret requests for information.

* AI in entertainment and media. The entertainment business uses AI techniques for targeted advertising, recommending content, distribution, detecting fraud, creating scripts and making movies. Automated journalism helps newsrooms streamline media workflows reducing time, costs and complexity. Newsrooms use AI to automate routine tasks, such as data entry and proofreading; and to research topics and assist with headlines. How journalism can reliably use ChatGPT and other generative AI to generate content is open to question.

* AI in software coding and IT processes. New generative AI tools can be used to produce application code based on natural language prompts, but it is early days for these tools and unlikely they will replace software engineers soon. AI is also being used to automate many IT processes, including data entry, fraud detection, customer service, and predictive maintenance and security.

* Security. AI and machine learning are at the top of the buzzword list security vendors use to market their products, so buyers should approach with caution. Still, AI techniques are being successfully applied to multiple aspects of cybersecurity, including anomaly detection, solving the false-positive problem and conducting behavioral threat analytics. Organizations use machine learning in security information and event management (SIEM) software and related areas to detect anomalies and identify suspicious activities that indicate threats. By analyzing data and using logic to identify similarities to known malicious code, AI can provide alerts to new and emerging attacks much sooner than human employees and previous technology iterations.

* AI in manufacturing. Manufacturing has been at the forefront of incorporating robots into the workflow. For example, the industrial robots that were at one time programmed to perform single tasks and were separated from human workers increasingly function as cobots: smaller, multitasking robots that collaborate with humans and take on responsibility for more parts of the job in warehouses, factory floors and other workspaces.

* AI in banking. Banks are successfully employing chatbots to make their customers aware of services and offerings and to handle transactions that don't require human intervention. AI virtual assistants are used to improve and cut the costs of compliance with banking regulations. Banking organizations use AI to improve their decision-making for loans, set credit limits and identify investment opportunities.

* AI in transportation. In addition to AI's fundamental role in operating autonomous vehicles, AI technologies are used in transportation to manage traffic, predict flight delays, and make ocean shipping safer and more efficient. In supply chains, AI is replacing traditional methods of forecasting demand and predicting disruptions, a trend accelerated by COVID-19 when many companies were caught off guard by the effects of a global pandemic on the supply and demand of goods.

Augmented intelligence vs. artificial intelligence

Some industry experts have argued that the term artificial intelligence is too closely linked to popular culture, which has caused the general public to have improbable expectations about how AI will change the workplace and life in general. They have suggested using the term augmented intelligence to differentiate between AI systems that act autonomously -- popular culture examples include HAL 9000 and The Terminator -- and AI tools that support humans.

* Augmented intelligence. Some researchers and marketers hope the label augmented intelligence, which has a more neutral connotation, will help people understand that most implementations of AI will be weak and simply improve products and services. Examples include automatically surfacing important information in business intelligence reports or highlighting important information in legal filings. The rapid adoption of ChatGPT and Bard across industry indicates a willingness to use AI to support human decision-making.

* Artificial intelligence. True AI, or AGI, is closely associated with the concept of the technological singularity -- a future ruled by an artificial superintelligence that far surpasses the human brain's ability to understand it or how it is shaping our reality. This remains within the realm of science fiction, though some developers are working on the problem. Many believe that technologies such as quantum computing could play an important role in making AGI a reality and that we should reserve the use of the term AI for this kind of general intelligence.

Ethical use of artificial intelligence

While AI tools present a range of new functionality for businesses, the use of AI also raises ethical questions because, for better or worse, an AI system will reinforce what it has already learned.

This can be problematic because machine learning algorithms, which underpin many of the most advanced AI tools, are only as smart as the data they are given in training. Because a human being selects what data is used to train an AI program, the potential for machine learning bias is inherent and must be monitored closely.

Anyone looking to use machine learning as part of real-world, in-production systems needs to factor ethics into their AI training processes and strive to avoid bias. This is especially true when using AI algorithms that are inherently unexplainable in deep learning and generative adversarial network (GAN) applications.

Explainability is a potential stumbling block to using AI in industries that operate under strict regulatory compliance requirements. For example, financial institutions in the United States operate under regulations that require them to explain their credit-issuing decisions. When a decision to refuse credit is made by AI programming, however, it can be difficult to explain how the decision was arrived at because the AI tools used to make such decisions operate by teasing out subtle correlations between thousands of variables. When the decision-making process cannot be explained, the program may be referred to as black box AI.

In summary, AI's ethical challenges include the following: bias, due to improperly trained algorithms and human bias; misuse, due to deepfakes and phishing; legal concerns, including AI libel and copyright issues; elimination of jobs; and data privacy concerns, particularly in the banking, healthcare and legal fields.


AI governance and regulations

Despite potential risks, there are currently few regulations governing the use of AI tools, and where laws do exist, they typically pertain to AI indirectly. For example, as previously mentioned, U.S. Fair Lending regulations require financial institutions to explain credit decisions to potential customers. This limits the extent to which lenders can use deep learning algorithms, which by their nature are opaque and lack explainability.

The European Union is considering dedicated AI regulations. In the meantime, the EU's General Data Protection Regulation (GDPR), with its strict limits on how enterprises can use consumer data, already constrains the training and functionality of many consumer-facing AI applications.

Policymakers in the U.S. have yet to issue AI legislation, but that could change soon. A "Blueprint for an AI Bill of Rights" published in October 2022 by the White House Office of Science and Technology Policy (OSTP) guides businesses on how to implement ethical AI systems. The U.S. Chamber of Commerce also called for AI regulations in a report released in March 2023.

Crafting laws to regulate AI will not be easy, in part because AI comprises a variety of technologies that companies use for different ends, and partly because regulations can come at the cost of AI progress and development. The rapid evolution of AI technologies is another obstacle to forming meaningful regulation of AI, as are the challenges presented by AI's lack of transparency that make it difficult to see how the algorithms reach their results. Moreover, technology breakthroughs and novel applications such as ChatGPT and Dall-E can make existing laws instantly obsolete. And, of course, the laws that governments do manage to craft to regulate AI don't stop criminals from using the technology with malicious intent.


AI has had a long and sometimes controversial history from the Turing test in 1950 to today's generative AI chatbots like ChatGPT.

What is the history of AI?

The concept of inanimate objects endowed with intelligence has been around since ancient times. The Greek god Hephaestus was depicted in myths as forging robot-like servants out of gold. Engineers in ancient Egypt built statues of gods animated by priests. Throughout the centuries, thinkers from Aristotle to the 13th century Spanish theologian Ramon Llull to René Descartes and Thomas Bayes used the tools and logic of their times to describe human thought processes as symbols, laying the foundation for AI concepts such as general knowledge representation.

The late 19th and first half of the 20th centuries brought forth the foundational work that would give rise to the modern computer. In 1836, Cambridge University mathematician Charles Babbage and Augusta Ada King, Countess of Lovelace, invented the first design for a programmable machine.

1940s. Princeton mathematician John von Neumann conceived the architecture for the stored-program computer -- the idea that a computer's program and the data it processes can be kept in the computer's memory. And Warren McCulloch and Walter Pitts laid the foundation for neural networks.

1950s. With the advent of modern computers, scientists could test their ideas about machine intelligence. One method for determining whether a computer has intelligence was devised by the British mathematician and World War II code-breaker Alan Turing. The Turing test focused on a computer's ability to fool interrogators into believing its responses to their questions were made by a human being.

1956. The modern field of artificial intelligence is widely cited as starting this year during a summer conference at Dartmouth College. Funded by the Rockefeller Foundation, the conference was attended by 10 luminaries in the field, including AI pioneers Marvin Minsky, Oliver Selfridge and John McCarthy, who is credited with coining the term artificial intelligence. Also in attendance were Allen Newell, a computer scientist, and Herbert A. Simon, an economist, political scientist and cognitive psychologist. The two presented their groundbreaking Logic Theorist, a computer program capable of proving certain mathematical theorems and referred to as the first AI program.

1950s and 1960s. In the wake of the Dartmouth College conference, leaders in the fledgling field of AI predicted that a man-made intelligence equivalent to the human brain was around the corner, attracting major government and industry support. Indeed, nearly 20 years of well-funded basic research generated significant advances in AI: For example, in the late 1950s, Newell and Simon published the General Problem Solver (GPS) algorithm, which fell short of solving complex problems but laid the foundations for developing more sophisticated cognitive architectures; and McCarthy developed Lisp, a language for AI programming still used today. In the mid-1960s, MIT Professor Joseph Weizenbaum developed ELIZA, an early NLP program that laid the foundation for today's chatbots.

1970s and 1980s. The achievement of artificial general intelligence proved elusive, not imminent, hampered by limitations in computer processing and memory and by the complexity of the problem. Government and corporations backed away from their support of AI research, leading to a fallow period lasting from 1974 to 1980 known as the first "AI Winter." In the 1980s, research on deep learning techniques and industry's adoption of Edward Feigenbaum's expert systems sparked a new wave of AI enthusiasm, only to be followed by another collapse of government funding and industry support. The second AI winter lasted until the mid-1990s.

1990s. Increases in computational power and an explosion of data sparked an AI renaissance in the late 1990s that set the stage for the remarkable advances in AI we see today. The combination of big data and increased computational power propelled breakthroughs in NLP, computer vision, robotics, machine learning and deep learning. In 1997, as advances in AI accelerated, IBM's Deep Blue defeated Russian chess grandmaster Garry Kasparov, becoming the first computer program to beat a world chess champion.

2000s. Further advances in machine learning, deep learning, NLP, speech recognition and computer vision gave rise to products and services that have shaped the way we live today. These include the 2000 launch of Google's search engine and the 2001 launch of Amazon's recommendation engine. Netflix developed its recommendation system for movies, Facebook introduced its facial recognition system and Microsoft launched its speech recognition system for transcribing speech into text. IBM launched Watson and Google started its self-driving initiative, Waymo.

2010s. The decade between 2010 and 2020 saw a steady stream of AI developments. These include the launch of Apple's Siri and Amazon's Alexa voice assistants; IBM Watson's victories on Jeopardy; self-driving cars; the development of the first generative adversarial network; the launch of TensorFlow, Google's open source deep learning framework; the founding of research lab OpenAI, developers of the GPT-3 language model and Dall-E image generator; the defeat of world Go champion Lee Sedol by Google DeepMind's AlphaGo; and the implementation of AI-based systems that detect cancers with a high degree of accuracy.

2020s. The current decade has seen the advent of generative AI, a type of artificial intelligence technology that can produce new content. Generative AI starts with a prompt that could be in the form of a text, an image, a video, a design, musical notes or any input that the AI system can process. Various AI algorithms then return new content in response to the prompt. Content can include essays, solutions to problems, or realistic fakes created from pictures or audio of a person. The abilities of language models such as OpenAI's ChatGPT, Google's Bard and the Microsoft-Nvidia Megatron-Turing NLG have wowed the world, but the technology is still in its early stages, as evidenced by its tendency to hallucinate or skew answers.

AI tools and services

AI tools and services are evolving at a rapid rate. Current innovations in AI tools and services can be traced to the 2012 AlexNet neural network that ushered in a new era of high-performance AI built on GPUs and large data sets. The key change was the ability to train neural networks on massive amounts of data across multiple GPU cores in parallel in a more scalable way.
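
To give a rough feel for that parallel-training idea, here is a hypothetical Python sketch that simulates data parallelism for a one-parameter model: the batch is split across several pretend workers, each computes a gradient on its shard, and the averaged gradient drives a single shared update. All numbers are invented; real GPU training relies on frameworks and hardware this toy does not model.

# Toy data-parallel gradient descent: one shared weight, several simulated workers.
# Target relationship: y = 3 * x, which the model should recover.
data = [(x, 3.0 * x) for x in range(1, 17)]
weight = 0.0
learning_rate = 0.01
num_workers = 4

def shard_gradient(shard, w):
    """Mean gradient of the squared error 0.5*(w*x - y)^2 over one worker's shard."""
    return sum((w * x - y) * x for x, y in shard) / len(shard)

for step in range(200):
    # Split the batch across workers (simulating one shard per GPU).
    shards = [data[i::num_workers] for i in range(num_workers)]
    # Each worker computes its local gradient independently.
    local_grads = [shard_gradient(s, weight) for s in shards]
    # Average the gradients (the "all-reduce" step) and apply one shared update.
    weight -= learning_rate * sum(local_grads) / num_workers

print(round(weight, 3))  # converges toward 3.0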

Over the last several years, the symbiotic relationship between AI discoveries at Google, Microsoft, and OpenAI, and the hardware innovations pioneered by Nvidia, has enabled running ever-larger AI models on more connected GPUs, driving game-changing improvements in performance and scalability.

The collaboration among these AI luminaries was crucial for the recent success of ChatGPT, not to mention dozens of other breakout AI services. Here is a rundown of important innovations in AI tools and services.

Transformers. Google, for example, led the way in finding a more efficient process for provisioning AI training across a large cluster of commodity PCs with GPUs. This paved the way for the discovery of transformers that automate many aspects of training AI on unlabeled data.

Hardware optimization. Just as important, hardware vendors like Nvidia are also optimizing the microcode for running across multiple GPU cores in parallel for the most popular algorithms. Nvidia claimed the combination of faster hardware, more efficient AI algorithms, fine-tuned GPU instructions and better data center integration is driving a million-fold improvement in AI performance. Nvidia is also working with cloud data center providers to make this capability more accessible as AI-as-a-Service through IaaS, SaaS and PaaS models.

Generative pre-trained transformers. The AI stack has also evolved rapidly over the last few years. Previously, enterprises had to train their AI models from scratch. Increasingly, vendors such as OpenAI, Nvidia, Microsoft, Google and others provide generative pre-trained transformers (GPTs), which can be fine-tuned for a specific task at dramatically reduced cost and with far less expertise and time. Whereas some of the largest models are estimated to cost $5 million to $10 million per training run, enterprises can fine-tune the resulting models for a few thousand dollars. This results in faster time to market and reduces risk.
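
As a rough illustration of what that fine-tuning workflow can look like, here is a minimal Python sketch assuming the open source Hugging Face transformers and datasets libraries; the checkpoint name, dataset and hyperparameters are illustrative placeholders, not a recommendation of any particular vendor's stack.

# Minimal fine-tuning sketch (assumes: pip install transformers datasets).
# The pre-trained checkpoint, dataset and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("imdb")                          # public sentiment dataset
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    # Truncate/pad each review so every example has the same length.
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

# Start from a pre-trained model and add a fresh two-class classification head,
# instead of training anything from scratch.
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

args = TrainingArguments(output_dir="finetuned-sentiment",
                         num_train_epochs=1,
                         per_device_train_batch_size=16)

trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)))
trainer.train()                                         # fine-tune on a small labeled subset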

AI cloud services. Among the biggest roadblocks that prevent enterprises from effectively using AI in their businesses are the data engineering and data science tasks required to weave AI capabilities into new apps or to develop new ones. All the leading cloud providers are rolling out their own branded AI-as-a-service offerings to streamline data prep, model development and application deployment. Top examples include AWS AI Services, Google Cloud AI, Microsoft Azure AI platform, IBM AI solutions and Oracle Cloud Infrastructure AI Services.

Cutting-edge AI models as a service. Leading AI model developers also offer cutting-edge AI models on top of these cloud services. OpenAI has dozens of large language models optimized for chat, NLP, image generation and code generation that are provisioned through Azure. Nvidia has pursued a more cloud-agnostic approach by selling AI infrastructure and foundational models optimized for text, images and medical data available across all cloud providers. Hundreds of other players are offering models customized for various industries and use cases as well.

Additional Information

Artificial intelligence (AI) is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience. Since the development of the digital computer in the 1940s, it has been demonstrated that computers can be programmed to carry out very complex tasks—as, for example, discovering proofs for mathematical theorems or playing chess—with great proficiency. Still, despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match human flexibility over wider domains or in tasks requiring much everyday knowledge. On the other hand, some programs have attained the performance levels of human experts and professionals in performing certain specific tasks, so that artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, and voice or handwriting recognition.

What is intelligence?

All but the simplest human behaviour is ascribed to intelligence, while even the most complicated insect behaviour is never taken as an indication of intelligence. What is the difference? Consider the behaviour of the digger wasp, Sphex ichneumoneus. When the female wasp returns to her burrow with food, she first deposits it on the threshold, checks for intruders inside her burrow, and only then, if the coast is clear, carries her food inside. The real nature of the wasp’s instinctual behaviour is revealed if the food is moved a few inches away from the entrance to her burrow while she is inside: on emerging, she will repeat the whole procedure as often as the food is displaced. Intelligence—conspicuously absent in the case of Sphex—must include the ability to adapt to new circumstances.

Psychologists generally do not characterize human intelligence by just one trait but by the combination of many diverse abilities. Research in AI has focused chiefly on the following components of intelligence: learning, reasoning, problem solving, perception, and using language.

Learning

There are a number of different forms of learning as applied to artificial intelligence. The simplest is learning by trial and error. For example, a simple computer program for solving mate-in-one chess problems might try moves at random until mate is found. The program might then store the solution with the position so that the next time the computer encountered the same position it would recall the solution. This simple memorizing of individual items and procedures—known as rote learning—is relatively easy to implement on a computer. More challenging is the problem of implementing what is called generalization. Generalization involves applying past experience to analogous new situations. For example, a program that learns the past tense of regular English verbs by rote will not be able to produce the past tense of a word such as jump unless it previously had been presented with jumped, whereas a program that is able to generalize can learn the “add ed” rule and so form the past tense of jump based on experience with similar verbs.
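
To make the contrast concrete, here is a small Python sketch (a toy illustration, not any particular AI system): the rote learner can only recall past tenses it has literally memorized, while the generalizing learner applies the "add ed" rule to a verb it has never seen.

# Rote learning: memorize individual (verb, past tense) pairs and nothing more.
rote_memory = {"walk": "walked", "talk": "talked", "play": "played"}

def rote_past_tense(verb):
    # Fails on any verb that was not presented during training.
    return rote_memory.get(verb, "unknown")

def generalized_past_tense(verb):
    # Generalization: the "add ed" rule extracted from the memorized examples.
    return verb + "ed"

print(rote_past_tense("jump"))         # unknown -- "jumped" was never presented
print(generalized_past_tense("jump"))  # jumped  -- the rule covers the new case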

Reasoning

To reason is to draw inferences appropriate to the situation. Inferences are classified as either deductive or inductive. An example of the former is, “Fred must be in either the museum or the café. He is not in the café; therefore he is in the museum,” and of the latter, “Previous accidents of this sort were caused by instrument failure; therefore this accident was caused by instrument failure.” The most significant difference between these forms of reasoning is that in the deductive case the truth of the premises guarantees the truth of the conclusion, whereas in the inductive case the truth of the premise lends support to the conclusion without giving absolute assurance. Inductive reasoning is common in science, where data are collected and tentative models are developed to describe and predict future behaviour—until the appearance of anomalous data forces the model to be revised. Deductive reasoning is common in mathematics and logic, where elaborate structures of irrefutable theorems are built up from a small set of basic axioms and rules.
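
The museum-or-café example is a disjunctive syllogism, which is simple enough to mechanize directly; the toy Python sketch below (purely illustrative) draws the deductive conclusion from the two premises by elimination.

# Deductive inference by elimination (disjunctive syllogism).
def eliminate(possible_places, ruled_out):
    # Keep only the places consistent with both premises.
    return [place for place in possible_places if place not in ruled_out]

# Premise 1: Fred is in the museum or the cafe.
# Premise 2: Fred is not in the cafe.
remaining = eliminate(["museum", "cafe"], ruled_out={"cafe"})
if len(remaining) == 1:
    print("Therefore Fred is in the " + remaining[0] + ".")  # -> the museum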

There has been considerable success in programming computers to draw inferences, especially deductive inferences. However, true reasoning involves more than just drawing inferences; it involves drawing inferences relevant to the solution of the particular task or situation. This is one of the hardest problems confronting AI.

Problem solving

Problem solving, particularly in artificial intelligence, may be characterized as a systematic search through a range of possible actions in order to reach some predefined goal or solution. Problem-solving methods divide into special purpose and general purpose. A special-purpose method is tailor-made for a particular problem and often exploits very specific features of the situation in which the problem is embedded. In contrast, a general-purpose method is applicable to a wide variety of problems. One general-purpose technique used in AI is means-end analysis—a step-by-step, or incremental, reduction of the difference between the current state and the final goal. The program selects actions from a list of means—in the case of a simple robot this might consist of PICKUP, PUTDOWN, MOVEFORWARD, MOVEBACK, MOVELEFT, and MOVERIGHT—until the goal is reached.
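
The sketch below is a toy Python illustration of means-end analysis for a robot on a one-dimensional track (the two-action world and the function names are invented for the example): at each step the program picks whichever action most reduces the remaining difference between the current position and the goal.

# Toy means-end analysis: repeatedly choose the action that most reduces the
# difference between the current state and the goal state.
ACTIONS = {"MOVELEFT": -1, "MOVERIGHT": +1}   # effect of each action on position

def means_end_analysis(start, goal):
    state, plan = start, []
    while state != goal:
        # Evaluate each action by how close it would bring us to the goal.
        best = min(ACTIONS, key=lambda a: abs(state + ACTIONS[a] - goal))
        state += ACTIONS[best]
        plan.append(best)
    return plan

print(means_end_analysis(0, 3))   # ['MOVERIGHT', 'MOVERIGHT', 'MOVERIGHT']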

Many diverse problems have been solved by artificial intelligence programs. Some examples are finding the winning move (or sequence of moves) in a board game, devising mathematical proofs, and manipulating “virtual objects” in a computer-generated world.

Perception

In perception the environment is scanned by means of various sensory organs, real or artificial, and the scene is decomposed into separate objects in various spatial relationships. Analysis is complicated by the fact that an object may appear different depending on the angle from which it is viewed, the direction and intensity of illumination in the scene, and how much the object contrasts with the surrounding field.

At present, artificial perception is sufficiently well advanced to enable optical sensors to identify individuals, autonomous vehicles to drive at moderate speeds on the open road, and robots to roam through buildings collecting empty soda cans. One of the earliest systems to integrate perception and action was FREDDY, a stationary robot with a moving television eye and a pincer hand, constructed at the University of Edinburgh, Scotland, during the period 1966–73 under the direction of Donald Michie. FREDDY was able to recognize a variety of objects and could be instructed to assemble simple artifacts, such as a toy car, from a random heap of components.

Language

A language is a system of signs having meaning by convention. In this sense, language need not be confined to the spoken word. Traffic signs, for example, form a minilanguage, it being a matter of convention that ⚠ means “hazard ahead” in some countries. It is distinctive of languages that linguistic units possess meaning by convention, and linguistic meaning is very different from what is called natural meaning, exemplified in statements such as “Those clouds mean rain” and “The fall in pressure means the valve is malfunctioning.”

An important characteristic of full-fledged human languages—in contrast to birdcalls and traffic signs—is their productivity. A productive language can formulate an unlimited variety of sentences.
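
Productivity can be illustrated with a toy recursive grammar in Python (again only an illustration): because one of the rules below lets a sentence embed another sentence, the grammar can in principle generate an unlimited variety of distinct sentences.

import random

# A toy productive grammar: S -> NP VP, where VP may embed another sentence,
# so the set of generatable sentences is unbounded in principle.
def sentence(depth=0):
    np = random.choice(["the robot", "the wasp", "Fred"])
    if depth < 2 and random.random() < 0.5:
        return np + " believes that " + sentence(depth + 1)
    return np + " " + random.choice(["moves", "learns", "reasons"])

print(sentence())   # e.g. "Fred believes that the robot learns"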

It is relatively easy to write computer programs that seem able, in severely restricted contexts, to respond fluently in a human language to questions and statements. Although none of these programs actually understands language, they may, in principle, reach the point where their command of a language is indistinguishable from that of a normal human. What, then, is involved in genuine understanding, if even a computer that uses language like a native human speaker is not acknowledged to understand? There is no universally agreed upon answer to this difficult question. According to one theory, whether or not one understands depends not only on one’s behaviour but also on one’s history: in order to be said to understand, one must have learned the language and have been trained to take one’s place in the linguistic community by means of interaction with other language users.



#1748 2023-04-24 01:41:17

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,463

Re: Miscellany

1651) Laurasia

Gist

Laurasia was one of the two ancient supercontinents produced by the first split of the even larger supercontinent Pangaea about 200 million years ago; it comprised what are now North America, Greenland, Europe, and Asia (excluding India).

Details

Laurasia was the more northern of two large landmasses that formed part of the Pangaea supercontinent from around 335 to 175 million years ago (Mya), the other being Gondwana. It separated from Gondwana 215 to 175 Mya (beginning in the late Triassic period) during the breakup of Pangaea, drifted farther north after the split, and finally broke apart with the opening of the North Atlantic Ocean c. 56 Mya. The name is a portmanteau of Laurentia and Asia.

Laurentia, Avalonia, Baltica, and a series of smaller terranes, collided in the Caledonian orogeny c. 400 Ma to form Laurussia (also known as Euramerica, or the Old Red Sandstone Continent). Laurussia then collided with Gondwana to form Pangaea. Kazakhstania and Siberia were then added to Pangaea 290–300 Ma to form Laurasia. Laurasia finally became an independent continental mass when Pangaea broke up into Gondwana and Laurasia.

Terminology and origin of the concept

Laurentia, the Palaeozoic core of North America and continental fragments that now make up part of Europe, collided with Baltica and Avalonia in the Caledonian orogeny c. 430–420 Mya to form Laurussia. In the Late Carboniferous Laurussia and Gondwana formed Pangaea. Siberia and Kazakhstania finally collided with Baltica in the Late Permian to form Laurasia. A series of continental blocks that now form East and Southeast Asia were later added to Laurasia.

In 1904–1909 Austrian geologist Eduard Suess proposed that the continents in the Southern Hemisphere were once merged into a larger continent called Gondwana. In 1915 German meteorologist Alfred Wegener proposed the existence of a supercontinent called Pangaea. In 1937 South African geologist Alexander du Toit proposed that Pangaea was divided into two larger landmasses, Laurasia in the Northern Hemisphere and Gondwana in the Southern Hemisphere, separated by the Tethys Ocean.

"Laurussia" was defined by Swiss geologist Peter Ziegler in 1988 as the merger between Laurentia and Baltica along the northern Caledonian suture. The "Old Red Continent" is an informal name often used for the Silurian-Carboniferous deposits in the central landmass of Laurussia.

Several earlier supercontinents proposed and debated in the 1990s and later (e.g. Rodinia, Nuna, Nena) included earlier connections between Laurentia, Baltica, and Siberia. These original connections apparently survived through one and possibly even two Wilson Cycles, though their intermittent duration and recurrent fit are debated.

Additional Information

Laurasia was a supercontinent: the northern part of the global supercontinent Pangaea, which split into Laurasia in the north and Gondwana in the south during the Jurassic period.

Laurasia included most of the landmasses which make up today's continents of the northern hemisphere, chiefly Laurentia (the name given to the North American craton), Europe, Scandinavia, western Russia, Siberia, Kazakhstan, and China.

Laurasia's name combines the names of Laurentia and Eurasia.

Although Laurasia is known as a Mesozoic phenomenon, today it is believed that the same continents that formed Laurasia also existed as a coherent continent after the breakup of Rodinia around 1 billion years ago. To avoid confusion with Laurasia, that continent is referred to as Proto-Laurasia.

Details

Laurasia is an ancient continental mass in the Northern Hemisphere that included North America, Europe, and Asia (except peninsular India). Its existence was proposed by Alexander Du Toit, a South African geologist, in Our Wandering Continents (1937). This book was a reformulation of the continental drift theory advanced by the German meteorologist Alfred Wegener. Whereas Wegener had postulated a single supercontinent, Pangea, Du Toit theorized that there were two such great landmasses: Laurasia in the north and Gondwana in the south, separated by an oceanic area called Tethys. Laurasia is thought to have fragmented into the present continents of North America, Europe, and Asia some 66 million to 30 million years ago, an interval that spans the end of the Cretaceous Period and much of the Paleogene Period.

More Information

Laurasia is the name given to the largely northern supercontinent that is thought to have formed most recently during the late Mesozoic era, as part of the split of the Pangaean supercontinent. It also is believed that the same continents comprising Laurasia existed as a coherent landmass much earlier, forming after the breakup of the hypothesized supercontinent Rodinia about 1 billion years ago. The landmass of this earlier period is sometimes referred to as Proto-Laurasia to avoid confusion with the Mesozoic supercontinent.

The name Laurasia combines the names of Laurentia and Eurasia. Laurasia included most of the landmasses that make up today's continents of the northern hemisphere, chiefly Laurentia (the name given to the North American craton), as well as Baltica, Siberia, Kazakhstania, and the North China and East China cratons.

The formation of different supercontinents, such as Laurasia, is explained today by the theory of plate tectonics, which recognizes the earth to have a thin, solid crust, made up of several plates, that floats or rides on an inner layer of melted rock. The view of a supercontinent that is hundreds of millions of years old poses a problem for young-earth creationists, but plate tectonics is widely accepted today and backed by considerable scientific evidence.

Overview and origin

In geology, a supercontinent is a landmass comprising more than one continental core, or craton. Supercontinents are considered to form in cycles, coming together and breaking apart again through plate tectonics. The theory of plate tectonics encompassed and superseded the older theory of continental drift from the first half of the twentieth century and the concept of sea floor spreading developed during the 1960s. The breakup of Pangaea, beginning roughly 200 million years ago, led to the continents we presently know: Africa, Antarctica, the Americas, Asia, Australia, and Europe. It is considered that 250 million years from now, the present-day continents will once again reform into one gigantic supercontinent (Nield 2007).

The continents comprising the supercontinent Laurasia are believed to have formed this landmass on two separate occasions, although the earlier version is often known as "Proto-Laurasia" to distinguish it.

Proto-Laurasia is believed to have existed as a coherent supercontinent after the breakup of the hypothesized supercontinent Rodinia around 1 billion years ago. Rodinia, believed to have formed 1.1 billion years ago during the Proterozoic, was the supercontinent from which all subsequent continents, sub or super, derived. Proto-Laurasia may have split off about 750 million years ago. It is believed that it did not break up again before it recombined with the southern continents to form the late Precambrian supercontinent of Pannotia, which remained until the early Cambrian. Laurasia was assembled, then broken up, due to the actions of plate tectonics, continental drift and seafloor spreading.

The most recent version of Laurasia existed during the Mesozoic, and formed from the breakup of the supercontinent Pangaea (or Pangea). Pangaea existed during the Paleozoic and Mesozoic eras, before the process of plate tectonics separated each of the component continents into their current configuration. Pangaea broke apart during the Triassic and Jurassic periods of the Mesozoic, separating first into the two supercontinents of Gondwana (or Gondwanaland) to the south and Laurasia to the north, and thereafter into the continents as they are observed today.

One simplified account of Laurasia's origin holds that Laurasia itself was formed by the collision of the Siberian continent with the minor supercontinent Laurussia (or Euramerica). After these major landmasses collided, other smaller landmasses collided as well.

Break up and reformation

During the Cambrian, Laurasia was largely located in equatorial latitudes and began to break up, with North China and Siberia drifting into latitudes further north than those occupied by continents during the previous 500 million years. By the Devonian, North China was located near the Arctic Circle and it remained the northernmost land in the world during the Carboniferous Ice Age between 300 and 280 million years ago. There is no evidence, though, for any large-scale Carboniferous glaciation of the northern continents. This cold period saw the rejoining of Laurentia and Baltica with the formation of the Appalachian Mountains and the vast coal deposits, which are today a mainstay of the economy of such regions as West Virginia, the United Kingdom, and Germany.

Siberia moved southwards and joined with Kazakhstania, a small continental region believed today to have been created during the Silurian by extensive volcanism. When these two continents joined together, Laurasia was nearly reformed, and by the beginning of the Triassic, the East China craton had rejoined the redeveloping Laurasia as it collided with Gondwana to form Pangaea. North China became, as it drifted southwards from near-Arctic latitudes, the last continent to join with Pangaea.

Final split

Around 200 million years ago, Pangaea started to break up. Between eastern North America and northwest Africa, a new ocean formed—the Atlantic Ocean, even though Greenland (attached to North America) and Europe were still joined together. The separation of Europe and Greenland occurred around 60 million years ago (in the Paleocene). Laurasia finally divided into the continents after which it is named: Laurentia (now North America) and Eurasia (excluding India and Arabia).



#1749 2023-04-24 23:00:59

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,463

Re: Miscellany

1652) Gondwana

Summary

Gondwana, also called Gondwanaland, was a southern supercontinent. It formed when Pangaea broke up, starting 170 million years ago (mya), in the early middle Jurassic.

The global supercontinent Pangaea was complete 250 million years ago. Then it split into two smaller supercontinents, which were about the same size. The northern part of Pangaea became Laurasia, and the southern part became Gondwana. Over time, Gondwana drifted south, while Laurasia moved north.

Gondwana included most of the landmasses in today's southern hemisphere, including Antarctica, South America, Africa, Madagascar, Australia–New Guinea, and New Zealand. It originally also included Arabia, the Indian subcontinent and blocks that are now part of China, which have since moved entirely into the Northern Hemisphere.

Gondwana itself began to break up in the mid-Jurassic period, about 170 million years ago.

History of the name

Nothofagus is a plant genus that shows a Gondwanan distribution. It first appeared in Gondwana and still exists in parts of Australia, New Zealand, New Caledonia and Chile. Fossils have also recently been found in Antarctica.

Gondwana was named by an Austrian scientist, Eduard Suess. He named the supercontinent "Gondwana" because rock formations of this ancient continent were found in modern Odisha (eastern India).

The adjective Gondwanan is often used in biogeography to describe where different organisms live. It is most commonly used when the organisms only live in two or more of the regions which were part of Gondwana, including the Antarctic flora. For example, the Proteaceae, a family of plants, lives only in southern South America, South Africa, and Australia. This is called a "Gondwanan distribution" (meaning that the Proteaceae live only in the areas that used to be part of Gondwana). This pattern shows that the Proteaceae have existed for a long time – since the time that Gondwana existed.

Evidence of plant and animal distribution supported the ideas of two scientists: Alfred Russel Wallace and Alfred Wegener. Wallace explained geographical distribution as the result of evolution. Wegener used geographical distribution as evidence for continental drift.

Breakup of Gondwana

Between 160 and 23 million years ago, Gondwana broke up. Africa separated from Antarctica around 160 million years ago. Next, it separated from the Indian subcontinent, in the early Cretaceous period (about 125 million years ago).

About 65 million years ago, Antarctica (then connected to Australia) still had a tropical to subtropical climate, with marsupial fauna. About 40 million years ago, Australia-New Guinea separated from Antarctica. This allowed latitudinal currents to separate Antarctica from Australia, and the first ice began to appear in Antarctica.

During the Eocene-Oligocene extinction event about 34 million years ago, levels of carbon dioxide were about 760 parts per million. They had been decreasing from earlier levels, which were in the thousands of parts per million.

Around 23 million years ago, the Drake Passage opened between Antarctica and South America, resulting in the Antarctic Circumpolar Current that completely isolated Antarctica. Models of the changes suggest that decreasing levels of carbon dioxide became more important. The ice began to spread through Antarctica, replacing the forests that had covered the continent. Since about 15 million years ago, Antarctica has been mostly covered with ice. Around six million years ago, the Antarctic ice cap reached the size it is today.

Submerged former lands

There are several submerged (underwater) lands in the Indian Ocean, off the west of Australia. They are under more than 1.5 kilometres (0.93 miles) of water. Their rocks show that they used to be part of Gondwana. They are not the type of rocks that are usually found in the ocean, like basalt. Instead, they are typical land rocks, like granite, sandstone, and gneiss. They also have the type of fossils that are now found on continental areas. Recently, two of these sunken islands were found to the west of Perth, Western Australia. These islands are almost the size of Tasmania, and have flat tops. This shows they were once at sea level before being submerged underwater. It also shows that when India began to break away from Australasia in the early Cretaceous period, the islands formed part of the last link between the two present-day continents.

Naturaliste Plateau

The Naturaliste Plateau is a submerged landmass off Western Australia. It has an area of 90,000 square kilometres (34,749 square miles).

The Naturaliste Plateau may have deposits of oil. When it was above sea level during the Mesozoic era, it had a tropical climate which might have been ideal for creating coal, oil and natural gas.

Kerguelen microcontinent

The Kerguelen Plateau is a submerged microcontinent in the southern Indian Ocean. It is about 3,000 kilometres (1,900 miles) to the southwest of Australia, and extends for more than 2,200 kilometres (1,400 miles) in a northwest-southeast direction. It is under deep water, but a small part of the plateau is above sea level, forming the Australian Heard Island and McDonald Islands, and the French Kerguelen Islands. The islands are part of a large igneous province (LIP) which formed when Gondwana began to break up, 130 million years ago in the Lower Cretaceous period.

Volcanic activity occurs sometimes on the Heard and McDonald islands.

Details

Gondwana, also called Gondwanaland, is an ancient supercontinent that incorporated present-day South America, Africa, Arabia, Madagascar, India, Australia, and Antarctica. It was fully assembled by Late Precambrian time, some 600 million years ago, and the first stage of its breakup began in the Early Jurassic Period, about 180 million years ago. The name Gondwanaland was coined by the Austrian geologist Eduard Suess in reference to Upper Paleozoic and Mesozoic formations in the Gondwana region of central India, which are similar to formations of the same age on Southern Hemisphere continents.

The matching shapes of the coastlines of western Africa and eastern South America were first noted by Francis Bacon in 1620 as maps of Africa and the New World first became available. The concept that all of the continents of the Southern Hemisphere were once joined together was set forth in detail by Alfred Wegener, a German meteorologist, in 1912. He envisioned a single great landmass, Pangaea (or Pangea). Gondwana comprised the southern half of this supercontinent.

The concept of Gondwana was expanded upon by Alexander Du Toit, a South African geologist, in his 1937 book Our Wandering Continents. Du Toit carefully documented the numerous geologic and paleontological lines of evidence that linked the southern continents. This evidence included the occurrence of glacial deposits—tillites—of Permo-Carboniferous age (approximately 290 million years old) and similar floras and faunas that are not found in the Northern Hemisphere. The widely distributed seed fern Glossopteris is particularly cited in this regard. The rock strata that contain this evidence are called the Karoo (Karroo) System in South Africa, the Gondwana System in India, and the Santa Catharina System in South America. It also occurs in the Maitland Group of eastern Australia as well as in the Whiteout conglomerate and Polarstar formations of Antarctica. Though the concept of Gondwana was widely accepted by scientists from the Southern Hemisphere, scientists in the Northern Hemisphere continued to resist the idea of continental mobility until the 1960s, when the theory of plate tectonics demonstrated that the ocean basins are not permanent global features and vindicated Wegener’s hypothesis of continental drift.

According to plate tectonic evidence, Gondwana was assembled by continental collisions in the Late Precambrian (about 1 billion to 542 million years ago). Gondwana then collided with North America, Europe, and Siberia to form the supercontinent of Pangea. The breakup of Gondwana occurred in stages. Some 180 million years ago, in the Jurassic Period, the western half of Gondwana (Africa and South America) separated from the eastern half (Madagascar, India, Australia, and Antarctica). The South Atlantic Ocean opened about 140 million years ago as Africa separated from South America. At about the same time, India, which was still attached to Madagascar, separated from Antarctica and Australia, opening the central Indian Ocean. During the Late Cretaceous Period, India broke away from Madagascar, and Australia slowly rifted away from Antarctica. India eventually collided with Eurasia some 50 million years ago, forming the Himalayan mountains, while the northward-moving Australian plate had just begun its collision along the southern margin of Southeast Asia—a collision that is still under way today.

Additional Information

Gondwana was the more southern of two large landmasses that formed part of the Pangaea supercontinent (which comprised all of Earth's landmasses) from around 335 to 175 million years ago (Mya), the other being Laurasia. It formed during the late Neoproterozoic (about 550 million years ago) and began to break away from Pangaea during the Jurassic period (about 180 million years ago). The final stages of break-up, involving the separation of Antarctica from South America (forming the Drake Passage) and Australia, occurred during the Paleogene (from around 66 to 23 Mya). Gondwana was not considered a supercontinent by the earliest definition, since the landmasses of Baltica, Laurentia, and Siberia were separated from it. To differentiate it from the Indian region of the same name (see the Name section below), it is also commonly called Gondwanaland.

Gondwana was formed by the accretion of several cratons (a large stable block of the earth's crust). Eventually, Gondwana became the largest piece of continental crust of the Palaeozoic Era, covering an area of about 100,000,000 sq km (39,000,000 sq mi), about one-fifth of the Earth's surface. During the Carboniferous Period (about 335 million years ago), it merged with Laurasia to form a larger supercontinent called Pangaea. Gondwana (and Pangaea) gradually broke up during the Mesozoic Era (about 200 million years ago). The remnants of Gondwana make up around two-thirds of today's continental area, including South America, Africa, Antarctica, Australia, Zealandia, Arabia, and the Indian Subcontinent.

The formation of Gondwana began c. 800 to 650 Ma with the East African Orogeny, the collision of India and Madagascar with East Africa, and was completed c. 600 to 530 Ma with the overlapping Brasiliano and Kuunga orogenies, the collision of South America with Africa, and the addition of Australia and Antarctica, respectively.

Regions that were part of Gondwana shared floral and zoological elements that persist to the present day.

Name

The continent of Gondwana was named by the Austrian scientist Eduard Suess, after the region in central India of the same name, which is derived from Sanskrit for "forest of the Gonds". The name had previously been used in a geological context, first by H. B. Medlicott in 1872, in describing the Gondwana sedimentary sequences (Permian-Triassic).

Some scientists prefer the term "Gondwanaland" to make a clear distinction between the region and the supercontinent.

Formation

The assembly of Gondwana was a protracted process during the Neoproterozoic and Paleozoic, which remains incompletely understood because of the lack of paleo-magnetic data. Several orogenies, collectively known as the Pan-African orogeny, caused the continental fragments of a much older supercontinent, Rodinia, to amalgamate. One of those orogenic belts, the Mozambique Belt, formed 800 to 650 Ma and was originally interpreted as the suture between East (India, Madagascar, Antarctica, and Australia) and West Gondwana (Africa and South America). Three orogenies were recognised during the 1990s: the East African Orogeny (650 to 800 Ma) and the Kuunga orogeny (including the Malagasy Orogeny in southern Madagascar; c. 550 Ma), which together record the two-step collision between East Gondwana and East Africa, and the Brasiliano orogeny (660 to 530 Ma), the successive collision between the South American and African cratons.

The last stages of Gondwanan assembly overlapped with the opening of the Iapetus Ocean between Laurentia and western Gondwana. During this interval, the Cambrian explosion occurred. Laurentia was docked against the western shores of a united Gondwana for a brief period near the Precambrian/Cambrian boundary, forming the short-lived and still disputed supercontinent Pannotia.

The Mozambique Ocean separated the Congo–Tanzania–Bangweulu Block of central Africa from Neoproterozoic India (India, the Antongil Block in far eastern Madagascar, the Seychelles, and the Napier and Rayner Complexes in East Antarctica). The Azania continent (much of central Madagascar, the Horn of Africa and parts of Yemen and Arabia) was an island in the Mozambique Ocean.

The continent Australia/Mawson was still separated from India, eastern Africa, and Kalahari by c. 600 Ma, when most of western Gondwana had already been amalgamated. By c. 550 Ma, India had reached its Gondwanan position, which initiated the Kuunga orogeny (also known as the Pinjarra orogeny). Meanwhile, on the other side of the newly forming Africa, Kalahari collided with Congo and Rio de la Plata which closed the Adamastor Ocean. c. 540–530 Ma, the closure of the Mozambique Ocean brought India next to Australia–East Antarctica, and both North and South China were in proximity to Australia.

As the rest of Gondwana formed, a complex series of orogenic events assembled the eastern parts of Gondwana (eastern Africa, Arabian-Nubian Shield, Seychelles, Madagascar, India, Sri Lanka, East Antarctica, and Australia) c. 750 to 530 Ma. First the Arabian-Nubian Shield collided with eastern Africa (in the Kenya-Tanzania region) in the East African Orogeny c. 750 to 620 Ma. Then Australia and East Antarctica were merged with the remaining Gondwana c. 570 to 530 Ma in the Kuunga Orogeny.

The later Malagasy orogeny at about 550–515 Mya affected Madagascar, eastern East Africa and southern India. In it, Neoproterozoic India collided with the already combined Azania and Congo–Tanzania–Bangweulu Block, suturing along the Mozambique Belt.

The 18,000 km-long (11,000 mi) Terra Australis Orogen developed along Gondwana's western, southern, and eastern margins. Proto-Gondwanan Cambrian arc belts from this margin have been found in eastern Australia, Tasmania, New Zealand, and Antarctica. Though these belts formed a continuous arc chain, the direction of subduction was different between the Australian-Tasmanian and New Zealand-Antarctica arc segments.



#1750 2023-04-25 15:11:16

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,463

Re: Miscellany

1753) Pangaea or Pangea

Summary

Pangea, also spelled Pangaea, in early geologic time, was a supercontinent that incorporated almost all the landmasses on Earth. Pangea was surrounded by a global ocean called Panthalassa, and it was fully assembled by the Early Permian Epoch (some 299 million to about 273 million years ago). The supercontinent began to break apart about 200 million years ago, during the Early Jurassic Epoch (201 million to 174 million years ago), eventually forming the modern continents and the Atlantic and Indian oceans. Pangea’s existence was first proposed in 1912 by German meteorologist Alfred Wegener as a part of his theory of continental drift. Its name is derived from the Greek pangaia, meaning “all the Earth.”

Formation

The assembly of Pangea’s component landmasses was well underway by the Devonian Period (419.2 million to 358.9 million years ago) as the paleocontinents Laurentia (a landmass made up of the North American craton—that is, the continent’s stable interior portion) and Baltica (a landmass made up of the Eastern European craton) joined with several smaller microcontinents to form Euramerica. By the beginning of the Permian Period (298.9 million to 252.2 million years ago), the northwestern coastline of the ancient continent Gondwana (a paleocontinent that would eventually fragment to become South America, India, Africa, Australia, and Antarctica) collided with and joined the southern part of Euramerica (a paleocontinent made up of North America and southern Europe). With the fusion of the Angaran craton of Siberia to that combined landmass during the middle of the Early Permian, the assembly of Pangea was complete.

Geography

Pangea was C-shaped, with the bulk of its mass stretching between Earth’s northern and southern polar regions. The curve of the eastern edge of the supercontinent contained an embayment called the Tethys Sea, or Tethys Ocean. The Paleo-Tethys Ocean took shape during Pangea’s initial assembly phase. This ocean was slowly replaced by the Neo-Tethys Ocean after a strip of continental material known as the Cimmerian continent, or the Cimmerian superterrane, detached from northern Gondwana and rotated northward.

On the periphery of Pangea was Cathaysia, a smaller continent extending beyond the eastern edge of Angara and comprising the landmasses of both North and South China. Cathaysia lay within the western Panthalassic Ocean and at the eastern end of the Paleo-Tethys Ocean. Both oceans also contained scattered fragments of continental crust (microcontinents), basaltic volcanic island arcs, oceanic plateaus, and trenches. These island arcs and other isolated landmasses were later welded onto the margins of Pangea, forming accreted terranes (landmasses that collide with continents).

The assembly of the various large landmasses into the supercontinent led to the development of extensive dry climates in the supercontinent’s tropics during Permian times. As low-latitude seaways closed, warm surface ocean currents were deflected into much higher latitudes (areas closer to the poles), and cool-water upwelling developed along Pangea’s west coast. Extensive mountain-building events (or orogenies) occurred where the continents collided with one another, and the newly created high mountain ranges strongly influenced local and regional terrestrial climates. East-west atmospheric flow in the temperate and higher latitudes was disrupted by two high mountain chains—one in the tropics oriented east-west and one running north-south—that diverted warm marine air into higher latitudes.

These developments may have contributed to the series of extinction events that took place near the end of the Permian Period. Paleoecologists have posited that continental collisions eliminated several shallow-water marine basins—the primary habitat of most marine invertebrates—and Pangea’s north-south orientation, which drastically changed ocean circulation patterns, altered regional climates. In addition, by the end of the Permian Period, land largely prevented cooler waters near the poles from entering the Paleo-Tethys and Neo-Tethys basins, which may have raised water temperatures in shallow areas above the tolerance limits of corals and other organisms (see also Permian extinction).

Breakup

The mechanism for the breakup of Pangea is now explained in terms of plate tectonics rather than Wegener’s outmoded concept of continental drift, which simply stated that Earth’s continents were once joined together into the supercontinent Pangea that lasted for most of geologic time. Plate tectonics states that Earth’s outer shell, or lithosphere, consists of large rigid plates that move apart at oceanic ridges, come together at subduction zones, or slip past one another along fault lines. The pattern of seafloor spreading indicates that Pangea did not break apart all at once but rather fragmented in distinct stages. Plate tectonics also postulates that the continents joined with one another and broke apart several times in Earth’s geologic history.

The first oceans formed from the breakup, some 180 million years ago, were the central Atlantic Ocean between northwestern Africa and North America and the southwestern Indian Ocean between Africa and Antarctica. The South Atlantic Ocean opened about 140 million years ago as Africa separated from South America. About the same time, India separated from Antarctica and Australia, forming the central Indian Ocean. Finally, about 80 million years ago, North America separated from Europe, Australia began to rift away from Antarctica, and India broke away from Madagascar. India eventually collided with Eurasia approximately 50 million years ago, forming the Himalayas.

Pangea’s formal conceptualization began with Wegener’s work in 1910. Like other scientists before him, Wegener became impressed with the similarity in the coastlines of eastern South America and western Africa and speculated that those lands had once been joined together. He began to toy with the idea that in the late Paleozoic Era (which ended about 252 million years ago) all the present-day continents had formed a single large mass, or supercontinent, which subsequently broke apart. Wegener called this ancient continent Pangaea.

Other scientists had proposed that such a continent existed but had explained the separation of the modern world’s continents as resulting from the subsidence, or sinking, of large portions of the supercontinent to form the Atlantic and Indian oceans. Wegener, by contrast, proposed that Pangaea’s constituent portions had slowly moved thousands of miles apart over long periods of geologic time. Wegener proposed his term for this movement, die Verschiebung der Kontinente (German: “continental displacement”), which gave rise to the term continental drift, in 1912. In 1937 Alexander L. Du Toit, a South African geologist, modified Wegener’s hypothesis by suggesting two primordial continents: Laurasia in the north and Gondwana in the south. Although Wegener did not manage to persuade the scientific world of continental drift, Du Toit’s work continued to amass evidence of it. However, the mechanism driving continental drift remained elusive until a successor theory, plate tectonics, was worked out during the 1960s.

Other supercontinents

During Earth’s long history, there probably have been several Pangea-like supercontinents. The oldest of those supercontinents is called Rodinia and was formed during Precambrian time some one billion years ago. Another Pangea-like supercontinent, Pannotia, was assembled 600 million years ago, at the end of the Precambrian. Present-day plate motions are bringing the continents together once again. Africa has begun to collide with southern Europe, and the Australian Plate is now colliding with Southeast Asia. Within the next 250 million years, Africa and the Americas will merge with Eurasia to form a supercontinent that approaches Pangean proportions. The episodic assembly of the world’s landmasses has been called the supercontinent cycle or, in honour of Wegener, the Wegenerian cycle.

Details

Pangaea or Pangea was a supercontinent that existed during the late Paleozoic and early Mesozoic eras. It assembled from the earlier continental units of Gondwana, Euramerica and Siberia during the Carboniferous approximately 335 million years ago, and began to break apart about 200 million years ago, at the end of the Triassic and beginning of the Jurassic. In contrast to the present Earth and its distribution of continental mass, Pangaea was centred on the equator and surrounded by the superocean Panthalassa and the Paleo-Tethys and subsequent Tethys Oceans. Pangaea is the most recent supercontinent to have existed and the first to be reconstructed by geologists.

Origin of the concept

The name "Pangaea" is derived from Ancient Greek pan ("all, entire, whole") and Gaia or Gaea ("Mother Earth, land"). The concept that the continents once formed a contiguous land mass was hypothesised, with corroborating evidence, by Alfred Wegener, the originator of the scientific theory of continental drift, in his 1912 publication The Origin of Continents (Die Entstehung der Kontinente). He expanded upon his hypothesis in his 1915 book The Origin of Continents and Oceans (Die Entstehung der Kontinente und Ozeane), in which he postulated that, before breaking up and drifting to their present locations, all the continents had formed a single supercontinent that he called the "Urkontinent".

The name "Pangaea" occurs in the 1920 edition of Die Entstehung der Kontinente und Ozeane, but only once, when Wegener refers to the ancient supercontinent as "the Pangaea of the Carboniferous". Wegener used the Germanized form "Pangäa," but the name entered German and English scientific literature (in 1922 and 1926, respectively) in the Latinized form "Pangaea" (of the Greek "Pangaia"), especially due to a symposium of the American Association of Petroleum Geologists in November 1926.

Wegener originally proposed that the breakup of Pangaea was due to centripetal forces from the Earth's rotation acting on the high continents. However, this mechanism was easily shown to be physically implausible, which delayed acceptance of the Pangaea hypothesis. Arthur Holmes proposed the more plausible mechanism of mantle convection, which, together with evidence provided by the mapping of the ocean floor following the Second World War, led to the development and acceptance of the theory of plate tectonics. This theory provides the now widely-accepted explanation for the existence and breakup of Pangaea.

Evidence of existence

The geography of the continents bordering the Atlantic Ocean was the first evidence suggesting the existence of Pangaea. The seemingly close fit of the coastlines of North and South America with Europe and Africa was remarked on almost as soon as these coasts were charted. The first to suggest that these continents were once joined and later separated may have been Abraham Ortelius in 1596. Careful reconstructions showed that the mismatch at the 500 fathoms (3,000 feet; 910 meters) contour was less than 130 km (81 mi), and it was argued that this was much too good to be attributed to chance.

Additional evidence for Pangaea is found in the geology of adjacent continents, including matching geological trends between the eastern coast of South America and the western coast of Africa. The polar ice cap of the Carboniferous Period covered the southern end of Pangaea. Glacial deposits, specifically till, of the same age and structure are found on many separate continents that would have been together in the continent of Pangaea. The continuity of mountain chains provides further evidence, such as the Appalachian Mountains chain extending from the southeastern United States to the Caledonides of Ireland, Britain, Greenland, and Scandinavia.

Fossil evidence for Pangaea includes the presence of similar and identical species on continents that are now great distances apart. For example, fossils of the therapsid Lystrosaurus have been found in South Africa, India and Antarctica, alongside members of the Glossopteris flora, whose distribution would have ranged from the polar circle to the equator if the continents had been in their present position; similarly, the freshwater reptile Mesosaurus has been found in only localized regions of the coasts of Brazil and West Africa.

Geologists can also determine the movement of continental plates by examining the orientation of magnetic minerals in rocks. When rocks are formed, they take on the magnetic orientation of the Earth, showing which direction the poles lie relative to the rock; this determines latitudes and orientations (though not longitudes). Magnetic differences between samples of sedimentary and intrusive igneous rock whose ages vary by millions of years are due to a combination of magnetic polar wander (with a cycle of a few thousand years) and the drifting of continents over millions of years. One can subtract the polar wander component, which is identical for all contemporaneous samples, leaving the portion that shows continental drift and can be used to help reconstruct earlier continental latitudes and orientations.

Formation

Pangaea is only the most recent supercontinent reconstructed from the geologic record. The formation of supercontinents and their breakup appears to have been cyclical through Earth's history. There may have been several others before Pangaea.

Paleomagnetic measurements help geologists determine the latitude and orientation of ancient continental blocks, and newer techniques may help determine longitudes. Paleontology helps determine ancient climates, confirming latitude estimates from paleomagnetic measurements, and the distribution of ancient forms of life provides clues on which continental blocks were close to each other at particular geological moments. However, reconstructions of continents prior to Pangaea, including the ones in this section, remain partially speculative, and different reconstructions will differ in some details.

Previous supercontinents

The fourth-last supercontinent, called Columbia or Nuna, appears to have assembled in the period 2.0–1.8 billion years ago (Ga). Columbia/Nuna broke up and the next supercontinent, Rodinia, formed from the accretion and assembly of its fragments. Rodinia lasted from about 1.3 billion years ago until about 750 million years ago, but its exact configuration and geodynamic history are not nearly as well understood as those of the later supercontinents, Pannotia and Pangaea.

According to one reconstruction, when Rodinia broke up, it split into three pieces: the supercontinent of Proto-Laurasia, the supercontinent of Proto-Gondwana, and the smaller Congo craton. Proto-Laurasia and Proto-Gondwana were separated by the Proto-Tethys Ocean. Next Proto-Laurasia itself split apart to form the continents of Laurentia, Siberia, and Baltica. Baltica moved to the east of Laurentia, and Siberia moved northeast of Laurentia. The splitting also created two new oceans, the Iapetus Ocean and Paleoasian Ocean. Most of the above masses coalesced again to form the relatively short-lived supercontinent of Pannotia. This supercontinent included large amounts of land near the poles and, near the equator, only a relatively small strip connecting the polar masses. Pannotia lasted until 540 Ma, near the beginning of the Cambrian period and then broke up, giving rise to the continents of Laurentia, Baltica, and the southern supercontinent of Gondwana.

Formation of Euramerica (Laurussia)

In the Cambrian period, the continent of Laurentia, which would later become North America, sat on the equator, with three bordering oceans: the Panthalassic Ocean to the north and west, the Iapetus Ocean to the south, and the Khanty Ocean to the east. In the Earliest Ordovician, around 480 Ma, the microcontinent of Avalonia – a landmass incorporating fragments of what would become eastern Newfoundland, the southern British Isles, and parts of Belgium, northern France, Nova Scotia, New England, South Iberia, and northwest Africa – broke free from Gondwana and began its journey to Laurentia. Baltica, Laurentia, and Avalonia all came together by the end of the Ordovician to form a landmass called Euramerica or Laurussia, closing the Iapetus Ocean. The collision also resulted in the formation of the northern Appalachians. Siberia sat near Euramerica, with the Khanty Ocean between the two continents. While all this was happening, Gondwana drifted slowly towards the South Pole. This was the first step of the formation of Pangaea.

Collision of Gondwana with Euramerica

The second step in the formation of Pangaea was the collision of Gondwana with Euramerica. By the middle of the Silurian, 430 Ma, Baltica had already collided with Laurentia, forming Euramerica, an event called the Caledonian orogeny. Avalonia had not yet collided with Laurentia, but as Avalonia inched towards Laurentia, the seaway between them, a remnant of the Iapetus Ocean, was slowly shrinking. Meanwhile, southern Europe broke off from Gondwana and began to move towards Euramerica across the Rheic Ocean. It collided with southern Baltica in the Devonian.

By the late Silurian, Annamia and South China split from Gondwana and started to head northward, shrinking the Proto-Tethys Ocean in their path and opening the new Paleo-Tethys Ocean to their south. In the Devonian Period, Gondwana itself headed towards Euramerica, causing the Rheic Ocean to shrink. In the Early Carboniferous, northwest Africa had touched the southeastern coast of Euramerica, creating the southern portion of the Appalachian Mountains, the Meseta Mountains, and the Mauritanide Mountains, an event called the Variscan orogeny. South America moved northward to southern Euramerica, while the eastern portion of Gondwana (India, Antarctica, and Australia) headed toward the South Pole from the equator. North and South China were on independent continents. The Kazakhstania microcontinent had collided with Siberia. (Siberia had been a separate continent for millions of years since the deformation of the supercontinent Pannotia in the Middle Carboniferous.)

The Variscan orogeny raised the Central Pangaean Mountains, which were comparable to the modern Himalayas in scale. With Pangaea now stretching from the South Pole across the equator and well into the Northern Hemisphere, an intense megamonsoon climate was established, except for a perpetually wet zone immediately around the central mountains.

Formation of Laurasia

Western Kazakhstania collided with Baltica in the Late Carboniferous, closing the Ural Ocean and the western Proto-Tethys between them (the Uralian orogeny) and forming not only the Ural Mountains but also the supercontinent of Laurasia. This was the last step in the formation of Pangaea. Meanwhile, South America had collided with southern Laurentia, closing the Rheic Ocean and completing the Variscan orogeny with the formation of the southernmost part of the Appalachians and the Ouachita Mountains. By this time, Gondwana was positioned near the South Pole, and glaciers were forming in Antarctica, India, Australia, southern Africa, and South America. The North China block collided with Siberia by the Jurassic, completely closing the Proto-Tethys Ocean.

By the Early Permian, the Cimmerian plate split from Gondwana and headed towards Laurasia, closing the Paleo-Tethys Ocean but opening a new ocean, the Tethys Ocean, at its southern end. Most of the landmasses were now joined in one. By the Triassic Period, Pangaea had rotated a little, and the Cimmerian plate was still travelling across the shrinking Paleo-Tethys, until the Middle Jurassic. By the late Triassic, the Paleo-Tethys had closed from west to east, creating the Cimmerian Orogeny. Pangaea, which looked like a C with the new Tethys Ocean inside the C, had rifted by the Middle Jurassic; its break-up is described below.

[Map captions: Paleogeography of Earth in the late Cambrian, around 490 Ma; in the middle Silurian, around 430 Ma (Avalonia and Baltica have fused with Laurentia to form Laurussia); in the late Carboniferous, around 310 Ma (Laurussia has fused with Gondwana to form Pangaea); and at the Permian–Triassic boundary, around 250 Ma (Siberia has fused with Pangaea to complete the assembly of the supercontinent).]

Life

Pangaea existed as a supercontinent for 160 million years, from its assembly around 335 million years ago (Early Carboniferous) to its breakup 175 million years ago (Middle Jurassic). During this interval, important developments in the evolution of life took place. The seas of the Early Carboniferous were dominated by rugose corals, brachiopods, bryozoans, sharks, and the first bony fish. Life on land was dominated by lycopsid forests inhabited by insects and other arthropods and the first tetrapods. By the time Pangaea broke up, in the Middle Jurassic, the seas swarmed with molluscs (particularly ammonites), ichthyosaurs, sharks and rays, and the first ray-finned bony fishes, while life on land was dominated by forests of cycads and conifers in which dinosaurs flourished and in which the first true mammals had appeared.

The evolution of life in this time reflected the conditions created by the assembly of Pangaea. The union of most of the continental crust into one landmass reduced the extent of sea coasts. Increased erosion from uplifted continental crust increased the importance of floodplain and delta environments relative to shallow marine environments. Continental assembly and uplift also meant increasingly arid land climates, favoring the evolution of amniote animals and seed plants, whose eggs and seeds were better adapted to dry climates. The early drying trend was most pronounced in western Pangaea, which became a center of the evolution and geographical spread of amniotes.

Coal swamps typically form in perpetually wet regions close to the equator. The assembly of Pangaea disrupted the intertropical convergence zone and created an extreme monsoon climate that reduced the deposition of coal to its lowest level in the last 300 million years. During the Permian, coal deposition was largely restricted to the North and South China microcontinents, which were among the few areas of continental crust that had not joined with Pangaea. The extremes of climate in the interior of Pangaea are reflected in bone growth patterns of pareiasaurs and the growth patterns in gymnosperm forests.

The lack of oceanic barriers is thought to have favored cosmopolitanism, in which successful species attain wide geographical distribution. Cosmopolitanism was also driven by mass extinctions, including the Permian–Triassic extinction event, the most severe in the fossil record, and also the Triassic–Jurassic extinction event. These events resulted in disaster fauna showing little diversity and high cosmopolitanism, including Lystrosaurus, which opportunistically spread to every corner of Pangaea following the Permian–Triassic extinction event. On the other hand, there is evidence that many Pangaean species were provincial, with a limited geographical range, despite the absence of geographical barriers. This may be due to the strong variations in climate by latitude and season produced by the extreme monsoon climate. For example, cold-adapted pteridosperms (early seed plants) of Gondwana were blocked from spreading throughout Pangaea by the equatorial climate, and northern pteridosperms ended up dominating Gondwana in the Triassic.

Mass extinctions

The tectonics and geography of Pangaea may have worsened the Permian–Triassic extinction event or other extinctions. For example, the reduced area of continental shelf environments may have left marine species vulnerable to extinction. However, no evidence for a species-area effect has been found in more recent and better characterized portions of the geologic record. Another possibility is that reduced sea-floor spreading associated with the formation of Pangaea, and the resulting cooling and subsidence of oceanic crust, may have reduced the number of islands that could have served as refugia for marine species. Species diversity may already have been reduced before the mass extinction events because of the mingling of species made possible when formerly separate continents merged. However, there is strong evidence that climate barriers continued to separate ecological communities in different parts of Pangaea. The eruptions of the Emeishan Traps may have eliminated South China, one of the few continental areas not merged with Pangaea, as a refugium.

Rifting and break-up

There were three major phases in the break-up of Pangaea.

Opening of the Atlantic

The Atlantic Ocean did not open uniformly; rifting began in the north-central Atlantic. The first breakup of Pangaea is proposed for the late Ladinian (230 Ma) with initial spreading in the opening central Atlantic. Then the rifting proceeded along the eastern margin of North America, the northwest African margin and the High, Saharan and Tunisian Atlas.

Another phase began in the Early-Middle Jurassic (about 175 Ma), when Pangaea began to rift from the Tethys Ocean in the east to the Pacific Ocean in the west. The rifting that took place between North America and Africa produced multiple failed rifts. One rift resulted in a new ocean, the North Atlantic Ocean.

The South Atlantic did not open until the Cretaceous when Laurasia started to rotate clockwise and moved northward with North America to the north, and Eurasia to the south. The clockwise motion of Laurasia led much later to the closing of the Tethys Ocean and the widening of the "Sinus Borealis", which later became the Arctic Ocean. Meanwhile, on the other side of Africa and along the adjacent margins of east Africa, Antarctica and Madagascar, new rifts were forming that would lead to the formation of the southwestern Indian Ocean that would open up in the Cretaceous.

Break-up of Gondwana

The second major phase in the break-up of Pangaea began in the Early Cretaceous (150–140 Ma), when the landmass of Gondwana separated into multiple continents (Africa, South America, India, Antarctica, and Australia). Subduction at the Tethyan Trench probably caused Africa, India and Australia to move northward, causing the opening of a "South Indian Ocean". In the Early Cretaceous, Atlantica, today's South America and Africa, finally separated from eastern Gondwana (Antarctica, India and Australia). Then in the Middle Cretaceous, Gondwana fragmented to open up the South Atlantic Ocean as South America started to move westward away from Africa. The South Atlantic did not develop uniformly; rather, it rifted from south to north.

Also, at the same time, Madagascar and India began to separate from Antarctica and moved northward, opening up the Indian Ocean. Madagascar and India separated from each other 100–90 Ma in the Late Cretaceous. India continued to move northward toward Eurasia at 15 centimeters (6 in) a year (a plate tectonic record), closing the eastern Tethys Ocean, while Madagascar stopped and became locked to the African Plate. New Zealand, New Caledonia and the rest of Zealandia began to separate from Australia, moving eastward toward the Pacific and opening the Coral Sea and Tasman Sea.

Opening of the Norwegian Sea and break-up of Australia and Antarctica

The third major and final phase of the break-up of Pangaea occurred in the early Cenozoic (Paleocene to Oligocene). Laurasia split when North America/Greenland (also called Laurentia) broke free from Eurasia, opening the Norwegian Sea about 60–55 Ma. The Atlantic and Indian Oceans continued to expand, closing the Tethys Ocean.

Meanwhile, Australia split from Antarctica and moved quickly northward, just as India had done more than 40 million years before. Australia is currently on a collision course with eastern Asia. Both Australia and India are currently moving northeast at 5–6 centimeters (2–2.4 in) a year. Antarctica has been near or at the South Pole since the formation of Pangaea about 280 Ma. India began to collide with Asia about 35 Ma, initiating the Himalayan orogeny and finally closing the Tethys Seaway; this collision continues today. The African Plate started to change direction, from west to northwest toward Europe, and South America began to move northward, separating it from Antarctica and allowing complete oceanic circulation around Antarctica for the first time. This motion, together with decreasing atmospheric carbon dioxide concentrations, caused a rapid cooling of Antarctica and allowed glaciers to form. These glaciers eventually coalesced into the kilometers-thick ice sheets seen today. Other major events took place during the Cenozoic, including the opening of the Gulf of California, the uplift of the Alps, and the opening of the Sea of Japan. The break-up of Pangaea continues today in the Red Sea Rift and East African Rift.
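As a rough back-of-the-envelope illustration of what these rates mean (using only the figures quoted above; the distances are order-of-magnitude estimates, not values from the sources): 15 cm per year works out to 0.15 m/yr × 1,000,000 yr = 150 km per million years. If India had sustained roughly that rate over the approximately 55 million years between separating from Madagascar (about 90 Ma) and beginning to collide with Asia (about 35 Ma), it would have covered about 55 × 150 km ≈ 8,000 km, which is the right order of magnitude for the width of the eastern Tethys Ocean it closed. Today's 5–6 cm per year corresponds to only 50–60 km per million years.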

Climate change after Pangaea

The breakup of Pangaea was accompanied by outgassing of large quantities of carbon dioxide from continental rifts. This produced a Mesozoic CO2 High that contributed to the very warm climate of the Early Cretaceous. The opening of the Tethys Ocean also contributed to the warming of the climate. The very active mid-ocean ridges associated with the breakup of Pangaea raised sea levels to the highest in the geological record, flooding much of the continents.

The expansion of the temperate climate zones that accompanied the breakup of Pangaea may have contributed to the diversification of the angiosperms.

Additional Information

About 300 million years ago, Earth didn't have seven continents, but instead one massive supercontinent called Pangaea, which was surrounded by a single ocean called Panthalassa.

The explanation for Pangaea's formation ushered in the modern theory of plate tectonics, which posits that Earth's outer shell is broken up into several plates that slide over the rocky interior layer beneath them, the mantle.

Over the course of the planet's 4.5 billion-year history, several supercontinents have formed and broken up, a result of churning and circulation in the Earth's mantle, which makes up 84% of the planet's volume, according to the U.S. Geological Survey. This breakup and formation of supercontinents has dramatically altered the planet's history.

"This is what's driven the entire evolution of the planet through time. This is the major backbeat of the planet," said Brendan Murphy, a geology professor at the St. Francis Xavier University, in Antigonish, Nova Scotia.

More than a century ago, the scientist Alfred Wegener proposed the notion of an ancient supercontinent, which he named Pangaea (sometimes spelled Pangea), after putting together several lines of evidence.

The first and most obvious was that the "continents fit together like a tongue and groove," something that was quite noticeable on any accurate map, Murphy said. Another telltale hint that Earth's continents were once a single landmass comes from the geologic record. Coal deposits found in Pennsylvania have a similar composition to coal deposits of the same age found across Poland, Great Britain and Germany, indicating that North America and Europe must once have been joined. And the orientation of magnetic minerals in geologic sediments reveals how Earth's magnetic poles migrated over geologic time, Murphy said.

In the fossil record, identical plants, such as the extinct seed fern Glossopteris, are found on now widely disparate continents. And mountain chains that now lie on different continents, such as the Appalachians in the United States and the Atlas Mountains spanning Morocco, Algeria and Tunisia, were all part of the Central Pangaean Mountains, formed through the collision of the supercontinents Gondwana and Laurussia.

It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
