Math Is Fun Forum


#2426 2025-01-18 17:10:03

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,153

Re: Miscellany

2326) Formula/Formulae

Gist

Other form: formulas.

A formula is generally a fixed pattern that is used to achieve consistent results. It might be made up of words, numbers, or ideas that work together to define a procedure to be followed for the desired outcome.

Formulas, the patterns we follow in life, are used everywhere. In math or science, a formula might express a numeric or chemical equation; in cooking, a recipe is a formula. Baby formula is made up of the nutrients necessary for maintaining healthy growth, and the right formula for a fuel mixture is critical for a racing car's best performance. Everyone has their favorite formula for success. J. Paul Getty once gave his as "rise early, work hard, strike oil."

Summary

A chemical formula is a way of presenting information about the chemical proportions of atoms that constitute a particular chemical compound or molecule, using chemical element symbols, numbers, and sometimes also other symbols, such as parentheses, dashes, brackets, commas and plus (+) and minus (−) signs. These are limited to a single typographic line of symbols, which may include subscripts and superscripts. A chemical formula is not a chemical name since it does not contain any words. Although a chemical formula may imply certain simple chemical structures, it is not the same as a full chemical structural formula. Chemical formulae can fully specify the structure of only the simplest of molecules and chemical substances, and are generally more limited in power than chemical names and structural formulae.

The simplest types of chemical formulae are called empirical formulae, which use letters and numbers indicating the numerical proportions of atoms of each type. Molecular formulae indicate the simple numbers of each type of atom in a molecule, with no information on structure. For example, the empirical formula for glucose is CH2O (twice as many hydrogen atoms as carbon and oxygen), while its molecular formula is C6H12O6 (12 hydrogen atoms, six carbon and oxygen atoms).
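
The reduction from a molecular formula to an empirical formula is just a greatest-common-divisor computation on the atom counts. Here is a minimal Python sketch (the function name and the dictionary representation of a formula are illustrative assumptions, not from the source):

from math import gcd
from functools import reduce

def empirical(counts):
    # Divide every atom count by the GCD of all the counts to obtain
    # the smallest whole-number ratio, i.e. the empirical formula.
    divisor = reduce(gcd, counts.values())
    return {element: n // divisor for element, n in counts.items()}

# Glucose, C6H12O6, reduces to the empirical formula CH2O:
print(empirical({"C": 6, "H": 12, "O": 6}))  # {'C': 1, 'H': 2, 'O': 1}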

Sometimes a chemical formula is complicated by being written as a condensed formula (or condensed molecular formula, occasionally called a "semi-structural formula"), which conveys additional information about the particular ways in which the atoms are chemically bonded together, either in covalent bonds, ionic bonds, or various combinations of these types. This is possible if the relevant bonding is easy to show in one dimension. An example is the condensed molecular/chemical formula for ethanol, which is CH3−CH2−OH or CH3CH2OH. However, even a condensed chemical formula is necessarily limited in its ability to show complex bonding relationships between atoms, especially atoms that have bonds to four or more different substituents.

Since a chemical formula must be expressed as a single line of chemical element symbols, it often cannot be as informative as a true structural formula, which is a graphical representation of the spatial relationship between atoms in chemical compounds (see for example the butane structural and chemical formulae discussed below). For reasons of structural complexity, a single condensed chemical formula (or semi-structural formula) may correspond to different molecules, known as isomers. For example, glucose shares its molecular formula C6H12O6 with a number of other sugars, including fructose, galactose and mannose. Linear equivalent chemical names exist that can and do specify uniquely any complex structural formula (see chemical nomenclature), but such names must use many terms (words), rather than the simple element symbols, numbers, and simple typographical symbols that define a chemical formula.

Chemical formulae may be used in chemical equations to describe chemical reactions and other chemical transformations, such as the dissolving of ionic compounds into solution. While, as noted, chemical formulae do not have the full power of structural formulae to show chemical relationships between atoms, they are sufficient to keep track of numbers of atoms and numbers of electrical charges in chemical reactions, thus balancing chemical equations so that these equations can be used in chemical problems involving conservation of atoms, and conservation of electric charge.
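
That bookkeeping role can be shown in a few lines of code. The following hypothetical Python sketch (the data representation is an assumption made for illustration) checks that a reaction conserves atoms by totalling each element on both sides:

from collections import Counter

def count_atoms(side):
    # Total the atoms on one side of an equation,
    # given (coefficient, formula) pairs.
    totals = Counter()
    for coefficient, formula in side:
        for element, n in formula.items():
            totals[element] += coefficient * n
    return totals

# 2 H2 + O2 -> 2 H2O: four H and two O on each side, so the equation balances.
reactants = [(2, {"H": 2}), (1, {"O": 2})]
products = [(2, {"H": 2, "O": 1})]
print(count_atoms(reactants) == count_atoms(products))  # True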

Details

In science, a formula is a concise way of expressing information symbolically, as in a mathematical formula or a chemical formula. The informal use of the term formula in science refers to the general construct of a relationship between given quantities.

The plural of formula can be either formulas (from the most common English plural noun form) or, under the influence of scientific Latin, formulae (from the original Latin).

In mathematics

In mathematics, a formula generally refers to an equation or inequality relating one mathematical expression to another, with the most important ones being mathematical theorems. For example, determining the volume of a sphere requires a significant amount of integral calculus or its geometrical analogue, the method of exhaustion. However, having done this once in terms of some parameter (the radius for example), mathematicians have produced a formula to describe the volume of a sphere in terms of its radius:

V = (4/3)πr³

Having obtained this result, the volume of any sphere can be computed as long as its radius is known. Here, notice that the volume V and the radius r are expressed as single letters instead of words or phrases. This convention, while less important in a relatively simple formula, means that mathematicians can more quickly manipulate formulas which are larger and more complex. Mathematical formulas are often algebraic, analytical or in closed form.
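
That convenience carries over directly to programming. A minimal Python sketch of the sphere-volume formula (the function name is an illustrative choice):

import math

def sphere_volume(radius):
    # V = (4/3)πr³ for a sphere of the given radius.
    return (4.0 / 3.0) * math.pi * radius ** 3

print(sphere_volume(2.0))  # ≈ 33.51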

In a general context, formulas often represent mathematical models of real world phenomena, and as such can be used to provide solutions (or approximate solutions) to real world problems, with some being more general than others. For example, the formula

F = ma

is an expression of Newton's second law, and is applicable to a wide range of physical situations. Other formulas, such as the use of the equation of a sine curve to model the movement of the tides in a bay, may be created to solve a particular problem. In all cases, however, formulas form the basis for calculations.
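
As a quick illustration, Newton's second law becomes a one-line computation; here is a hypothetical Python sketch assuming SI units:

def net_force(mass_kg, acceleration_m_s2):
    # Newton's second law, F = ma, giving force in newtons.
    return mass_kg * acceleration_m_s2

print(net_force(1200.0, 2.5))  # a 1200 kg car accelerating at 2.5 m/s² -> 3000.0 N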

Expressions are distinct from formulas in the sense that they don't usually contain relations like equality (=) or inequality (<). Expressions denote a mathematical object, whereas formulas denote a statement about mathematical objects. This is analogous to natural language, where a noun phrase refers to an object, and a whole sentence refers to a fact. For example,

8x − 5

is an expression, while

8x − 5 ≥ 3

is a formula.

However, in some areas of mathematics, and in particular in computer algebra, formulas are viewed as expressions that can be evaluated to true or false, depending on the values that are given to the variables occurring in the expressions. For example,

x ≥ 1

takes the value false if x is given a value less than 1, and the value true otherwise.
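
Seen this way, a formula is simply a predicate. A minimal Python sketch of the example above (the function name is an illustrative assumption):

def satisfies(x):
    # The formula x ≥ 1, evaluated for a particular value of x.
    return x >= 1

print(satisfies(0.5))  # False: x is less than 1
print(satisfies(3))    # True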

In mathematical logic

In mathematical logic, a formula (often referred to as a well-formed formula) is an entity constructed using the symbols and formation rules of a given logical language.[8] For example, in first-order logic,

∀x∀y(P(f(x)) → ¬(P(x) → Q(f(y), x, z)))

is a formula, provided that f is a unary function symbol, P a unary predicate symbol, and Q a ternary predicate symbol.

Chemical formulas

In modern chemistry, a chemical formula is a way of expressing information about the proportions of atoms that constitute a particular chemical compound, using a single line of chemical element symbols, numbers, and sometimes other symbols, such as parentheses, brackets, and plus (+) and minus (−) signs. For example, H2O is the chemical formula for water, specifying that each molecule consists of two hydrogen (H) atoms and one oxygen (O) atom. Similarly, O3− denotes an ozone molecule consisting of three oxygen atoms and a net negative charge.

Butane provides a simple illustration. There are three common non-pictorial types of chemical formulas for this molecule:
* the empirical formula C2H5
* the molecular formula C4H10 and
* the condensed formula (or semi-structural formula) CH3CH2CH2CH3.

A chemical formula identifies each constituent element by its chemical symbol, and indicates the proportionate number of atoms of each element.

In empirical formulas, these proportions begin with a key element and then assign numbers of atoms of the other elements in the compound—as ratios to the key element. For molecular compounds, these ratio numbers can always be expressed as whole numbers. For example, the empirical formula of ethanol may be written C2H6O, because the molecules of ethanol all contain two carbon atoms, six hydrogen atoms, and one oxygen atom. Some types of ionic compounds, however, cannot be written as empirical formulas containing only whole numbers. An example is boron carbide, whose formula CBn has a variable non-whole-number ratio, with n ranging from over 4 to more than 6.5.

When the chemical compound of the formula consists of simple molecules, chemical formulas often employ ways to suggest the structure of the molecule. There are several types of these formulas, including molecular formulas and condensed formulas. A molecular formula enumerates the number of atoms to reflect those in the molecule, so that the molecular formula for glucose is C6H12O6 rather than the glucose empirical formula, which is CH2O. Except for very simple substances, molecular chemical formulas generally lack needed structural information, and might even be ambiguous on occasion.

A structural formula is a drawing that shows the location of each atom, and which atoms it binds to.

In computing

In computing, a formula typically describes a calculation, such as addition, to be performed on one or more variables. A formula is often implicitly provided in the form of a computer instruction such as:

Degrees Celsius = (5/9)*(Degrees Fahrenheit  - 32)
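
In an actual programming language the same instruction might look like the following Python sketch (the function and parameter names are illustrative):

def fahrenheit_to_celsius(degrees_fahrenheit):
    # Apply the conversion formula C = (5/9)(F - 32).
    return (5.0 / 9.0) * (degrees_fahrenheit - 32.0)

print(fahrenheit_to_celsius(212.0))  # 100.0, the boiling point of water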

In computer spreadsheet software, a formula indicating how to compute the value of a cell, say A3, could be written as
=A1+A2
where A1 and A2 refer to other cells (column A, row 1 or 2) within the spreadsheet. This is a shortcut for the "paper" form A3 = A1+A2, where A3 is, by convention, omitted because the result is always stored in the cell itself, making it redundant to name the cell in its own formula.
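
A toy model of this convention (a hypothetical Python sketch; real spreadsheet engines also track dependencies and recalculation, which is omitted here) stores cell values in a mapping and binds the formula's result back to the cell that holds it:

cells = {"A1": 10, "A2": 32}  # hypothetical cell contents

# The formula =A1+A2 entered in cell A3; the result is stored in A3 itself,
# so the cell's own name never appears in its formula.
cells["A3"] = cells["A1"] + cells["A2"]

print(cells["A3"])  # 42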

Units

Formulas used in science almost always require a choice of units. Formulas are used to express relationships between various quantities, such as temperature, mass, or charge in physics; supply, profit, or demand in economics; or a wide range of other quantities in other disciplines.

An example of a formula used in science is Boltzmann's entropy formula. In statistical thermodynamics, it is a probability equation relating the entropy S of an ideal gas to the quantity W, which is the number of microstates corresponding to a given macrostate:

S = k ln W

where k is the Boltzmann constant, equal to 1.380649 × 10⁻²³ J/K, and W is the number of microstates consistent with the given macrostate.
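
With units fixed, the formula is a one-line computation. A minimal Python sketch (the value of k is the exact SI definition; the example W is arbitrary):

import math

K_BOLTZMANN = 1.380649e-23  # J/K, exact by the 2019 SI redefinition

def boltzmann_entropy(microstates):
    # Boltzmann's entropy formula, S = k ln W.
    return K_BOLTZMANN * math.log(microstates)

print(boltzmann_entropy(10**20))  # ≈ 6.36e-22 J/K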


#2427 2025-01-19 00:05:18

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,153

Re: Miscellany

2327) Continent

Gist

A continent is one of Earth's seven main divisions of land. The continents are, from largest to smallest: Asia, Africa, North America, South America, Antarctica, Europe, and Australia. When geographers identify a continent, they usually include all the islands associated with it.

A continent is a large continuous mass of land conventionally regarded as a collective region. There are seven continents: Asia, Africa, North America, South America, Antarctica, Europe, and Australia (listed from largest to smallest in size). Sometimes Europe and Asia are considered one continent called Eurasia.

Summary

A Continent is one of the larger continuous masses of land, namely, Asia, Africa, North America, South America, Antarctica, Europe, and Australia, listed in order of size. (Europe and Asia are sometimes considered a single continent, Eurasia.)

There is great variation in the sizes of continents; Asia is more than five times as large as Australia. The largest island in the world, Greenland, is only about one-fourth the size of Australia. The continents differ sharply in their degree of compactness. Africa has the most regular coastline and, consequently, the lowest ratio of coastline to total area. Europe is the most irregular and indented and has by far the highest ratio of coastline to total area.

The continents are not distributed evenly over the surface of the globe. If a hemisphere map centred in northwestern Europe is drawn, most of the world’s land area can be seen to lie within that hemisphere. More than two-thirds of the Earth’s land surface lies north of the Equator, and all the continents except Antarctica are wedge shaped, wider in the north than they are in the south.

The distribution of the continental platforms and ocean basins on the surface of the globe and the distribution of the major landform features have long been among the most intriguing problems for scientific investigation and theorizing. Among the many hypotheses that have been offered as explanation are: (1) the tetrahedral (four-faced) theory, in which a cooling earth assumes the shape of a tetrahedron by spherical collapse; (2) the accretion theory, in which younger rocks attached to older shield areas became buckled to form the landforms; (3) the continental-drift theory, in which an ancient floating continent drifted apart; and (4) the convection-current theory, in which convection currents in the Earth’s interior dragged the crust to cause folding and mountain making.

Geological and seismological evidence accumulated in the 20th century indicates that the continental platforms do “float” on a crust of heavier material that forms a layer completely enveloping the Earth. Each continent has one of the so-called shield areas that formed 2 billion to 4 billion years ago and is the core of the continent to which the remainder (most of the continent) has been added. Even the rocks of the extremely old shield areas are older in the centre and younger toward the margins, indicating that this process of accumulation started early. In North America the whole northeast quarter of the continent, called the Canadian, or Laurentian, Shield, is characterized by the ancient rocks of what might be called the original continent. In Europe the shield area underlies the eastern Scandinavian peninsula and Finland. The Guiana Highlands of South America are the core of that continent. Much of eastern Siberia is underlain by the ancient rocks, as are western Australia and southern Africa.

Details

A continent is any of several large geographical regions. Continents are generally identified by convention rather than any strict criteria. A continent could be a single landmass or a part of a very large landmass, as in the case of Asia or Europe. Due to this, the number of continents varies; up to seven or as few as four geographical regions are commonly regarded as continents. Most English-speaking countries recognize seven regions as continents. In order from largest to smallest in area, these seven regions are Asia, Africa, North America, South America, Antarctica, Europe, and Australia. Different variations with fewer continents merge some of these regions; examples of this are merging Asia and Europe into Eurasia, North America and South America into America, and Africa, Asia, and Europe into Afro-Eurasia.

Oceanic islands are occasionally grouped with a nearby continent to divide all the world's land into geographical regions. Under this scheme, most of the island countries and territories in the Pacific Ocean are grouped together with the continent of Australia to form the geographical region of Oceania.

In geology, a continent is defined as "one of Earth's major landmasses, including both dry land and continental shelves". The geological continents correspond to seven large areas of continental crust that are found on the tectonic plates, but exclude small continental fragments such as Madagascar that are generally referred to as microcontinents. Continental crust is only known to exist on Earth.

The idea of continental drift gained recognition in the 20th century. It postulates that the current continents formed from the breaking up of a supercontinent (Pangaea) that formed hundreds of millions of years ago.

Etymology

From the 16th century the English noun continent was derived from the term continent land, meaning continuous or connected land[5] and translated from the Latin terra continens. The noun was used to mean "a connected or continuous tract of land" or mainland. It was not applied only to very large areas of land—in the 17th century, references were made to the continents (or mainlands) of the Isle of Man, Ireland and Wales and in 1745 to Sumatra. The word continent was used in translating Greek and Latin writings about the three "parts" of the world, although in the original languages no word of exactly the same meaning as continent was used.

While continent was used on the one hand for relatively small areas of continuous land, on the other hand geographers again raised Herodotus's query about why a single large landmass should be divided into separate continents. In the mid-17th century, Peter Heylin wrote in his Cosmographie that "A Continent is a great quantity of Land, not separated by any Sea from the rest of the World, as the whole Continent of Europe, Asia, Africa." In 1727, Ephraim Chambers wrote in his Cyclopædia, "The world is ordinarily divided into two grand continents: the Old and the New." And in his 1752 atlas, Emanuel Bowen defined a continent as "a large space of dry land comprehending many countries all joined together, without any separation by water. Thus Europe, Asia, and Africa is one great continent, as America is another."[8] However, the old idea of Europe, Asia and Africa as "parts" of the world ultimately persisted with these being regarded as separate continents.

Definitions and application

By convention, continents "are understood to be large, continuous, discrete masses of land, ideally separated by expanses of water". By this definition, every continent would have to be, in effect, a very large island. In modern schemes with five or more recognized continents, at least one pair of continents is joined by land in some fashion. The criterion "large" leads to arbitrary classification: Greenland, with a surface area of 2,166,086 square kilometres (836,330 sq mi), is only considered the world's largest island, while Australia, at 7,617,930 square kilometres (2,941,300 sq mi), is deemed the smallest continent.

Earth's major landmasses all have coasts on a single, continuous World Ocean, which is divided into several principal oceanic components by the continents and various geographic criteria.

The geological definition of a continent has four criteria: high elevation relative to the ocean floor; a wide range of igneous, metamorphic and sedimentary rocks rich in silica; a crust thicker than the surrounding oceanic crust; and well-defined limits around a large enough area.

Extent

The most restricted meaning of continent is that of a continuous area of land or mainland, with the coastline and any land boundaries forming the edge of the continent. In this sense, the term continental Europe (sometimes referred to in Britain as "the Continent") is used to refer to mainland Europe, excluding islands such as Great Britain, Iceland, Ireland, and Malta, while the term continent of Australia may refer to the mainland of Australia, excluding New Guinea, Tasmania, and other nearby islands. Similarly, the continental United States refers to "the 49 States (including Alaska but excluding Hawaii) located on the continent of North America, and the District of Columbia."

From the perspective of geology or physical geography, continent may be extended beyond the confines of continuous dry land to include the shallow, submerged adjacent area (the continental shelf) and the islands on the shelf (continental islands), as they are structurally part of the continent.

From this perspective, the edge of the continental shelf is the true edge of the continent, as shorelines vary with changes in sea level. In this sense the islands of Great Britain and Ireland are part of Europe, while Australia and the island of New Guinea together form a continent. Taken to its limit, this view could support the view that there are only three continents: Antarctica, Australia-New Guinea, and a single mega-continent which joins Afro-Eurasia and America via the contiguous continental shelf in and around the Bering Sea. The vast size of the latter compared to the first two might even lead some to say it is the only continent, the others being more comparable to Greenland or New Zealand.

As a cultural construct, the concept of a continent may go beyond the continental shelf to include oceanic islands and continental fragments. In this way, Iceland is considered a part of Europe, and Madagascar a part of Africa. Extrapolating the concept to its extreme, some geographers group the Australian continental landmass with other islands in the Pacific Ocean into Oceania, which is usually considered a region rather than a continent. This divides the entire land surface of Earth into continents, regions, or quasi-continents.

Separation

The criterion that each continent is a discrete landmass is commonly relaxed due to historical conventions and practical use. Of the seven most globally recognized continents, only Antarctica and Australia are completely separated from other continents by the ocean. Several continents are defined not as absolutely distinct bodies but as "more or less discrete masses of land". Africa and Asia are joined by the Isthmus of Suez, and North America and South America by the Isthmus of Panama. In both cases, there is no complete separation of these landmasses by water (disregarding the Suez Canal and the Panama Canal, which are both narrow and shallow, as well as human-made). Both of these isthmuses are very narrow compared to the bulk of the landmasses they unite.

North America and South America are treated as separate continents in the seven-continent model. However, they may also be viewed as a single continent known as America. This viewpoint was common in the United States until World War II, and remains prevalent in some Asian six-continent models. The single American continent model remains a common view in European countries like France, Greece, Hungary, Italy, Malta, Portugal, Spain, Latin American countries and some Asian countries.

The criterion of a discrete landmass is completely disregarded if the continuous landmass of Eurasia is classified as two separate continents (Asia and Europe). Physiographically, Europe and the Indian subcontinent are large peninsulas of the Eurasian landmass. However, Europe is considered a continent with its comparatively large land area of 10,180,000 square kilometres (3,930,000 sq mi), while the Indian subcontinent, with less than half that area, is considered a subcontinent. The alternative view—in geology and geography—that Eurasia is a single continent results in a six-continent view of the world. Some view the separation of Eurasia into Asia and Europe as a residue of Eurocentrism: "In physical, cultural and historical diversity, China and India are comparable to the entire European landmass, not to a single European country. [...]." However, for historical and cultural reasons, the view of Europe as a separate continent continues in almost all categorizations.

If continents are defined strictly as discrete landmasses, embracing all the contiguous land of a body, then Africa, Asia, and Europe form a single continent which may be referred to as Afro-Eurasia. Combined with the consolidation of the Americas, this would produce a four-continent model consisting of Afro-Eurasia, America, Antarctica, and Australia.

When sea levels were lower during the Pleistocene ice ages, greater areas of the continental shelf were exposed as dry land, forming land bridges between Tasmania and the Australian mainland. At those times, Australia and New Guinea were a single, continuous continent known as Sahul. Likewise, Afro-Eurasia and the Americas were joined by the Bering Land Bridge. Other islands, such as Great Britain, were joined to the mainlands of their continents. At that time, there were just three discrete landmasses in the world: Africa-Eurasia-America, Antarctica, and Australia-New Guinea (Sahul).

Number

There are several ways of distinguishing the continents:

* The seven-continent model is taught in most English-speaking countries, including Australia, Canada, the United Kingdom, and the United States, and also in Bangladesh, China, India, Indonesia, Pakistan, the Philippines, Sri Lanka, Suriname, parts of Europe and Africa.
* The six-continent combined-Eurasia model is mostly used in Russia and some parts of Eastern Europe.
* The six-continent combined-America model is taught in Greece and many Romance-speaking countries—including Latin America.
* The Olympic flag's five rings represent the five inhabited continents of the combined-America model, excluding the uninhabited Antarctica.

In the English-speaking countries, geographers often use the term Oceania to denote a geographical region which includes most of the island countries and territories in the Pacific Ocean, as well as the continent of Australia.

Eighth continent

Zealandia (a submerged continent) has been called the eighth continent.

Area and population

The following table provides areas given by the Encyclopædia Britannica for each continent in accordance with the seven-continent model, including Australasia along with Melanesia, Micronesia, and Polynesia as parts of Oceania. It also provides populations of continents according to 2021 estimates by the United Nations Statistics Division based on the United Nations geoscheme, which includes all of Egypt (including the Isthmus of Suez and the Sinai Peninsula) as a part of Africa, all of Armenia, Azerbaijan, Cyprus, Georgia, Indonesia, Kazakhstan, and Turkey (including East Thrace) as parts of Asia, all of Russia (including Siberia) as a part of Europe, all of Panama and the United States (including Hawaii) as parts of North America, and all of Chile (including Easter Island) as a part of South America.

Geological continents

Geologists use four key attributes to define a continent:

* Elevation – The landmass, whether dry or submerged beneath the ocean, should be elevated above the surrounding ocean crust.
* Geology – The landmass should contain different types of rock: igneous, metamorphic, and sedimentary.
* Crustal structure – The landmass should consist of the continental crust, which is thicker and has a lower seismic velocity than the oceanic crust.
* Limits and area – The landmass should have clearly defined boundaries and an area of more than one million square kilometres.

With the addition of Zealandia in 2017, Earth currently has seven recognized geological continents:

* Africa
* Antarctica
* Australia
* Eurasia
* North America
* South America
* Zealandia

Due to a seeming lack of Precambrian cratonic rocks, Zealandia's status as a geological continent has been disputed by some geologists. However, a study conducted in 2021 found that part of the submerged continent is indeed Precambrian, twice as old as geologists had previously thought, which is further evidence that supports the idea of Zealandia being a geological continent.

All seven geological continents are spatially isolated by geologic features.

Additional Information

A continent is one of Earth’s seven main divisions of land. The continents are, from largest to smallest: Asia, Africa, North America, South America, Antarctica, Europe, and Australia.

When geographers identify a continent, they usually include all the islands associated with it. Japan, for instance, is part of the continent of Asia. Greenland and all the islands in the Caribbean Sea are usually considered part of North America.

Together, the continents add up to about 148 million square kilometers (57 million square miles) of land. Continents make up most—but not all—of Earth’s land surface. A very small portion of the total land area is made up of islands that are not considered physical parts of continents. The ocean covers almost three-fourths of Earth. The area of the ocean is more than double the area of all the continents combined. All continents border at least one ocean. Asia, the largest continent, has the longest series of coastlines.

Coastlines, however, do not indicate the actual boundaries of the continents. Continents are defined by their continental shelves. A continental shelf is a gently sloping area that extends outward from the beach far into the ocean. A continental shelf is part of the ocean, but also part of the continent.

To geographers, continents are also culturally distinct. The continents of Europe and Asia, for example, are actually part of a single, enormous piece of land called Eurasia. But linguistically and ethnically, the areas of Asia and Europe are distinct. Because of this, most geographers divide Eurasia into Europe and Asia. An imaginary line, running from the northern Ural Mountains in Russia south to the Caspian and Black Seas, separates Europe, to the west, from Asia, to the east.

Building the Continents

Earth formed 4.6 billion years ago from a great, swirling cloud of dust and gas. The continuous smashing of space debris and the pull of gravity made Earth's core heat up. As the heat increased, some of Earth’s rocky materials melted and rose to the surface, where they cooled and formed a crust. Heavier material sank toward Earth’s center. Eventually, Earth came to have three main layers: the core, the mantle, and the crust.

The crust and the top portion of the mantle form a rigid shell around Earth that is broken up into huge sections called tectonic plates. The heat from inside Earth causes the plates to slide around on the molten mantle. Today, tectonic plates continue to slowly slide around the surface, just as they have been doing for hundreds of millions of years. Geologists believe the interaction of the plates, a process called plate tectonics, contributed to the creation of continents.

Studies of rocks found in ancient areas of North America have revealed the oldest known pieces of the continents began to form nearly four billion years ago, soon after Earth itself formed. At that time, a primitive ocean covered Earth. Only a small fraction of the crust was made up of continental material. Scientists theorize that this material built up along the boundaries of tectonic plates during a process called subduction. During subduction, plates collide, and the edge of one plate slides beneath the edge of another.

When heavy oceanic crust subducted toward the mantle, it melted in the mantle’s intense heat. Once melted, the rock became lighter. Called magma, it rose through the overlying plate and burst out as lava. When the lava cooled, it hardened into igneous rock.

Gradually, the igneous rock built up into small volcanic islands above the surface of the ocean. Over time, these islands grew bigger, partly as the result of more lava flows and partly from the buildup of material scraped off descending plates. When plates carrying islands subducted, the islands themselves did not descend into the mantle. Their material fused with that of islands on the neighboring plate. This made even larger landmasses—the first continents.

The building of volcanic islands and continental material through plate tectonics is a process that continues today. Continental crust is much lighter than oceanic crust. In subduction zones, where tectonic plates interact with each other, oceanic crust always subducts beneath continental crust. Oceanic crust is constantly being recycled in the mantle. For this reason, continental crust is much, much older than oceanic crust.

Wandering Continents

If you could visit Earth as it was millions of years ago, it would look very different. The continents have not always been where they are today. About 480 million years ago, most continents were scattered chunks of land lying along or south of the Equator. Millions of years of continuous tectonic activity changed their positions, and by 240 million years ago, almost all of the world’s land was joined in a single, huge continent. Geologists call this supercontinent Pangaea, which means “all lands” in Greek.

By about 200 million years ago, the forces that helped form Pangaea caused the supercontinent to begin to break apart. The pieces of Pangaea that began to move apart were the beginnings of the continents that we know today.

A giant landmass that would become Europe, Asia, and North America separated from another mass that would split up into other continents and regions. In time, Antarctica and Oceania, still joined together, broke away and drifted south. The small piece of land that would become the peninsula of India broke away and for millions of years moved north as a large island. It eventually collided with Asia. Gradually, the different landmasses moved to their present locations.

The positions of the continents are always changing. North America and Europe are moving away from each other at the rate of about 2.5 centimeters (one inch) per year. If you could visit the planet in the future, you might find that part of the U.S. state of California had separated from North America and become an island. Africa might have split in two along the Great Rift Valley. It is even possible that another supercontinent may form someday.

Continental Features

The surface of the continents has changed many times because of mountain building, weathering, erosion, and build-up of sediment. Continuous, slow movement of tectonic plates also changes surface features.

The rocks that form the continents have been shaped and reshaped many times. Great mountain ranges have risen and then have been worn away. Ocean waters have flooded huge areas and then gradually dried up. Massive ice sheets have come and gone, sculpting the landscape in the process.

Today, all continents have great mountain ranges, vast plains, extensive plateaus, and complex river systems. The landmasses' average elevation above sea level is about 838 meters (2,750 feet).

Although each is unique, all the continents share two basic features: old, geologically stable regions, and younger, somewhat more active regions. In the younger regions, the process of mountain building has happened recently and often continues to happen.

The power for mountain building, or orogeny, comes from plate tectonics. One way mountains form is through the collision of two tectonic plates. The impact creates wrinkles in the crust, just as a rug wrinkles when you push against one end of it. Such a collision created Asia’s Himalaya several million years ago. The plate carrying India slowly and forcefully shoved the landmass of India into Asia, which was riding on another plate. The collision continues today, causing the Himalaya to grow taller every year.

Recently formed mountains, called coastal ranges, rise near the western coasts of North America and South America. Older, more stable mountain ranges are found in the interior of continents. The Appalachians of North America and the Urals, on the border between Europe and Asia, are older mountain ranges that are not geologically active.

Even older than these ancient, eroded mountain ranges are flatter, more stable areas of the continents called cratons. A craton is an area of ancient crust that formed during Earth’s early history. Every continent has a craton. Microcontinents, like New Zealand, lack cratons.

Cratons have two forms: shields and platforms. Shields are bare rocks that may be the roots or cores of ancient mountain ranges that have completely eroded away. Platforms are cratons with sediment and sedimentary rock lying on top.

The Canadian Shield makes up about a quarter of North America. For hundreds of thousands of years, sheets of ice up to 3.2 kilometers (two miles) thick coated the Canadian Shield. The moving ice wore away material on top of ancient rock layers, exposing some of the oldest formations on Earth. When you stand on the oldest part of the Canadian Shield, you stand directly on rocks that formed more than 3.5 billion years ago.

North America

North America, the third-largest continent, extends from the tiny Aleutian Islands in the northwest to the Isthmus of Panama in the south. The continent includes the enormous island of Greenland in the northeast. In the far north, the continent stretches halfway around the world, from Greenland to the Aleutians. But at Panama’s narrowest part, the continent is just 50 kilometers (31 miles) across.

Young mountains—including the Rockies, North America’s largest chain—rise in the West. Some of Earth’s youngest mountains are found in the Cascade Range of the U.S. states of Washington, Oregon, and California. Some peaks there began to form only about one million years ago—a wink of an eye in Earth’s long history. North America’s older mountain ranges rise near the East Coast of the United States and Canada.

In between the mountain systems lie wide plains that contain deep, rich soil. Much of the soil was formed from material deposited during the most recent glacial period. This Ice Age reached its peak about 18,000 years ago. As glaciers retreated, streams of melted ice dropped sediment on the land, building layers of fertile soil in the plains region. Grain grown in this region, called the “breadbasket of North America,” feeds a large part of the world.

North America contains a variety of natural wonders. Landforms and all types of vegetation can be found within its boundaries. North America has deep canyons, such as Copper Canyon in the Mexican state of Chihuahua. Yellowstone National Park, in the U.S. state of Wyoming, has some of the world’s most active geysers. Canada’s Bay of Fundy has the greatest variation of tide levels in the world. The Great Lakes form the planet’s largest area of freshwater. In California, giant sequoias, the world’s most massive trees, grow more than 76 meters (250 feet) tall and nearly 31 meters (100 feet) around.

Greenland, off the east coast of Canada, is the world’s largest island. Despite its name, Greenland is mostly covered with ice. Its ice is a remnant of the great ice sheets that once blanketed much of the North American continent. Greenland is the only place besides Antarctica that still has an ice sheet.

From the freezing Arctic to the tropical jungles of Central America, North America enjoys more climate variation than any other continent. Almost every type of ecosystem is represented somewhere on the continent, from coral reefs in the Caribbean to Greenland’s ice sheet to the Great Plains in the U.S. and Canada.

Today, North America is home to the citizens of Canada, the United States, Greenland (an autonomous territory of Denmark), Mexico, Belize, Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua, Panama, and the island countries and territories that dot the Caribbean Sea and the western North Atlantic.

Most of North America sits on the North American Plate. Parts of the Canadian province of British Columbia and the U.S. states of Washington, Oregon, and California sit on the tiny Juan de Fuca Plate. Parts of California and the Mexican state of Baja California sit on the enormous Pacific Plate. Parts of Baja California and the Mexican states of Baja California Sur, Sonora, Sinaloa, and Jalisco sit on the Cocos Plate. The Caribbean Plate carries most of the small islands of the Caribbean Sea (south of the island of Cuba) as well as Central America from Honduras to Panama. The Hawaiian Islands, in the middle of the Pacific Ocean on the Pacific Plate, are usually considered part of North America.

South America

South America is connected to North America by the narrow Isthmus of Panama. These two continents weren’t always connected; they came together only three million years ago. South America is the fourth-largest continent and extends from the sunny beaches of the Caribbean Sea to the frigid waters near the Antarctic Circle.

South America’s southernmost islands, called Tierra del Fuego, are less than 1,120 kilometers (700 miles) from Antarctica. These islands even host some Antarctic birds, such as penguins, albatrosses, and terns. Early Spanish explorers visiting the islands for the first time saw small fires dotting the land. These fires, made by Indigenous people, seemed to float on the water, which is probably how the islands got their name—Tierra del Fuego means "Land of Fire."

The Andes, Earth’s longest terrestrial mountain range, stretch the entire length of South America. Many active volcanoes dot the range. These volcanic areas are fueled by heat generated as a large oceanic plate, called the Nazca Plate, grinds beneath the plate carrying South America.

The central-southern area of South America has pampas, or plains. These rich areas are ideal for agriculture. The growing of wheat is a major industry in the pampas. Grazing animals, such as cattle and sheep, are also raised in the pampas region.

In northern South America, the Amazon River and its tributaries flow through the world’s largest tropical rainforest. In volume, the Amazon is the largest river in the world. More water flows from it than from the next six largest rivers combined.

South America is also home to the world's highest waterfall, Angel Falls, in the country of Venezuela. Water flows more than 979 meters (3,212 feet)—almost one kilometer. The falls are so high that most of the water evaporates into mist or is blown away by wind before it reaches the ground.

South American rainforests contain an enormous wealth of animal and plant life. More than 15,000 species of plants and animals are found only in the Amazon Basin. Many Amazonian plant species are sources of food and medicine for the rest of the world. Scientists are trying to find ways to preserve this precious and fragile environment as people move into the Amazon Basin and clear land for settlements and agriculture.

Twelve independent countries make up South America: Brazil, Colombia, Argentina, Peru, Venezuela, Chile, Ecuador, Bolivia, Paraguay, Uruguay, Guyana, and Suriname. The territories of French Guiana, which is claimed by France, and the Falkland Islands, which are administered by the United Kingdom but claimed by Argentina, are also part of South America.

Almost all of South America sits on top of the South American Plate.

Europe

Europe, the sixth-largest continent, contains just seven percent of the world’s land. In total area, the continent of Europe is only slightly larger than the country of Canada. However, the population of Europe is more than twice that of South America. Europe has 46 countries and many of the world’s major cities, including London, the United Kingdom; Paris, France; Berlin, Germany; Rome, Italy; Madrid, Spain; and Moscow, Russia.

Most European countries have access to the ocean. The continent is bordered by the Arctic Ocean in the north, the Atlantic Ocean in the west, the Caspian Sea in the southeast, and the Mediterranean and Black Seas in the south. The nearness of these bodies of water and the navigation of many of Europe’s rivers played a major role in the continent’s history. Early Europeans learned the river systems of the Volga, Danube, Don, Rhine, and Po, and could successfully travel the length and width of the small continent for trade, communication, or conquest.

Navigation and exploration outside of Europe were an important part of the development of the continent's economic, social, linguistic, and political legacy. European explorers were responsible for colonizing land on every continent except Antarctica. This colonization process had a drastic impact on the economic and political development of those continents, as well as Europe. Europe's colonial period resulted in the violent transfer of wealth and land from Indigenous peoples in the Americas, and later Africa, Oceania, and Asia.

In the east, the Ural Mountains separate Europe from Asia. The nations of Russia and Kazakhstan straddle both continents. Another range, the Kjølen Mountains, extends along the northern part of the border between Sweden and Norway. To the south, the Alps form an arc stretching from Albania to Austria, then across Switzerland and northern Italy into France. As the youngest and steepest of Europe’s mountains, the Alps geologically resemble the Rockies of North America, another young range.

A large area of gently rolling plains extends from northern France eastward to the Urals. A climate of warm summers, cold winters, and plentiful rain helps make much of this European farmland very productive.

The climate of Western Europe, especially around the Mediterranean Sea, makes it one of the world’s leading tourism destinations.

Almost all of Europe sits on the massive Eurasian Plate.

Africa

Africa, the second-largest continent, covers an area more than three times that of the United States. From north to south, Africa stretches about 8,000 kilometers (5,000 miles). It is connected to Asia by the Isthmus of Suez in Egypt.

The Sahara, which covers much of North Africa, is the world’s largest hot desert. The world’s longest river, the Nile, flows more than 6,560 kilometers (4,100 miles) from its most remote headwaters in Lake Victoria to the Mediterranean Sea in the north. A series of falls and rapids along the southern part of the river makes navigation difficult. The Nile has played an important role in the history of Africa. In ancient Egyptian civilization, it was a source of life for food, water, and transportation.

The top half of Africa is mostly dry, hot desert. The middle area has savannas, or flat, grassy plains. This region is home to wild animals such as lions, giraffes, elephants, hyenas, cheetahs, and wildebeests. The central and southern areas of Africa are dominated by rainforests. Many of these forests thrive around Africa’s other great rivers, the Zambezi, the Congo, and the Niger. These rivers also served as the homes to Great Zimbabwe, the Kingdom of Kongo, and the Ghana Empire, respectively. However, trees are being cut down in Africa’s rainforests for many of the same reasons deforestation is taking place in the rainforests of South America and Asia: development for businesses, homes, and agriculture.

Much of Africa is a high plateau surrounded by narrow strips of coastal lowlands. Hilly uplands and mountains rise in some areas of the interior. Glaciers on Mount Kilimanjaro in Tanzania sit just kilometers from the tropical jungles below. Even though Kilimanjaro is not far from the Equator, snow covers its summit all year long.

In eastern Africa, a giant depression called the Great Rift Valley runs from the Red Sea to the country of Mozambique. (The rift valley actually starts in southwestern Asia.) The Great Rift Valley is a site of major tectonic activity, where the continent of Africa is splitting into two. Geologists have already named the two parts of the African Plate. The Nubian Plate will carry most of the continent, to the west of the rift; the Somali Plate will carry the far eastern part of the continent, including the so-called “Horn of Africa.” The Horn of Africa is a peninsula that resembles the upturned horn of a rhinoceros. The countries of Eritrea, Ethiopia, Djibouti, and Somalia sit on the Horn of Africa and the Somali Plate.

Africa is home to 54 countries but only 16 percent of the world’s total population. The area of central-eastern Africa is important to scientists who study evolution and the earliest origins of humanity. This area is thought to be the place where hominids began to evolve.

The entire continent of Africa sits on the African Plate.

Asia

Asia, the largest continent, stretches from the eastern Mediterranean Sea to the western Pacific Ocean. There are more than 40 countries in Asia. Some are among the most-populated countries in the world, including China, India, and Indonesia. Sixty percent of Earth’s population lives in Asia. More than a third of the world’s people live in China and India alone.

The continent of Asia includes many islands, some of which are countries unto themselves. The Philippines, Indonesia, Japan, and Taiwan are major island nations in Asia.

Most of Asia’s people live in cities or fertile farming areas near river valleys, plains, and coasts. The plateaus in Central Asia are largely unsuitable for farming and are thinly populated.

Asia accounts for almost a third of the world’s land. The continent has a wide range of climate regions, from polar in the Siberian Arctic to tropical in equatorial Indonesia. Parts of Central Asia, including the Gobi Desert in China and Mongolia, are dry year-round. Southeast Asia, on the other hand, depends on the annual monsoons, which bring rain and make agriculture possible.

Monsoon rains and snowmelt feed Asian rivers such as the Ganges, the Yellow, the Mekong, the Indus, and the Yangtze. The rich valley between the Tigris and Euphrates Rivers in western Asia is called the “Fertile Crescent” for its place in the development of agriculture and human civilization.

Asia is the most mountainous of all the continents. More than 50 of the highest peaks in the world are in Asia. Mount Everest, which reaches more than 8,700 meters (29,000 feet) high in the Himalaya range, is the highest point on Earth. These mountains have become major destination spots for adventurous travelers.

Plate tectonics continuously push the mountains higher. As the landmass of India pushes northward into the landmass of Eurasia, parts of the Himalaya rise at a rate of about 2.5 centimeters (one inch) every five years.

Asia contains not only Earth's highest elevation, but also its lowest place on land: the shores of the Dead Sea in the countries of Israel and Jordan. The land there lies more than 390 meters (1,300 feet) below sea level.

Although the Eurasian Plate carries most of Asia, it is not the only one supporting major parts of the large continent. The Arabian Peninsula, in the continent’s southwest, is carried by the Arabian Plate. The Indian Plate supports the Indian peninsula, sometimes called the Indian subcontinent. The Australian Plate carries some islands in Indonesia. The North American Plate carries eastern Siberia and the northern islands of Japan.

Australia

In addition to being the smallest continent, Australia is the flattest and the second-driest, after Antarctica. The region including the continent of Australia is sometimes called Oceania, to include the thousands of tiny islands of the Central Pacific and South Pacific, most notably Melanesia, Micronesia, and Polynesia (including the U.S. state of Hawai‘i). However, the continent of Australia itself includes only the nation of Australia, the eastern portion of the island of New Guinea (the nation of Papua New Guinea) and the island nation of New Zealand.

Australia covers just less than 8.5 million square kilometers (about 3.5 million square miles). Its population is about 31 million. It is the most sparsely populated continent, after Antarctica.

A plateau in the middle of mainland Australia makes up most of the continent's total area. Rainfall is light on the plateau, and not many people have settled there. The Great Dividing Range, a long mountain range, rises near the east coast and extends from the northern part of the state of Queensland through the states of New South Wales and Victoria. Mainland Australia is known for the Outback, a desert area in the interior. This area is so dry, hot, and barren that few people live there.

In addition to the hot plateaus and deserts in mainland Australia, the continent also features lush equatorial rainforests on the island of New Guinea, tropical beaches, and high mountain peaks and glaciers in New Zealand.

Most of Australia’s people live in cities along the southern and eastern coasts of the mainland. Major cities include Perth, Sydney, Brisbane, Melbourne, and Adelaide.

Biologists who study animals consider Australia a living laboratory. When the continent began to break away from Antarctica more than 60 million years ago, it carried a cargo of animals with it. Isolated from life on other continents, the animals developed into creatures unique to Australia, such as the koala (Phascolarctos cinereus), the platypus (Ornithorhynchus anatinus), and the Tasmanian devil (Sarcophilus harrisii).

The Great Barrier Reef, off mainland Australia’s northeast coast, is another living laboratory. The world’s largest coral reef ecosystem, it is home to thousands of species of fish, sponges, marine mammals, corals, and crustaceans. The reef itself is 1,920 kilometers (1,200 miles) of living coral communities. By some estimates, it is the world’s largest living organism.

Most of Australia sits on the Australian Plate. The southern part of the South Island of New Zealand sits on the Pacific Plate.

Antarctica

Antarctica is the windiest, driest, and iciest place on Earth—it is the world's largest desert. Antarctica is larger than Europe or Australia, but unlike those continents, it has no permanent human population. People who work there are scientific researchers and support staff, such as pilots and cooks.

The climate of Antarctica makes it impossible to support agriculture or a permanent civilization. Temperatures in Antarctica, much lower than Arctic temperatures, plunge lower than -73 degrees Celsius (-100 degrees Fahrenheit).

Scientific bases and laboratories have been established in Antarctica for studies in fields that include geology, oceanography, and meteorology. The freezing temperatures of Antarctica make it an excellent place to study the history of Earth’s atmosphere and climate. Ice cores from the massive Antarctic ice sheet have recorded changes in Earth’s temperature and atmospheric gases for thousands of years. Antarctica is also an ideal place for discovering meteorites, or stony objects that have impacted Earth from space. The dark meteorites, often made of metals like iron, stand out from the white landscape of most of the continent.

Antarctica is almost completely covered with ice, sometimes as thick as 3.2 kilometers (two miles). In winter, Antarctica’s surface area may double as pack ice builds up in the ocean around the continent.

Like all other continents, Antarctica has volcanic activity. The most active volcano is Mount Erebus, which is less than 1,392 kilometers (870 miles) from the South Pole. Its frequent eruptions are evidence of the hot, molten rock beneath the continent's icy surface.

Antarctica does not have any countries. However, scientific groups from different countries inhabit the research stations. A multinational treaty negotiated in 1959 and reviewed in 1991 states that research in Antarctica can only be used for peaceful purposes. McMurdo Station, the largest community in Antarctica, is operated by the United States. Vostok Station, where the coldest temperature on Earth was recorded, is operated by Russia.

All of Antarctica sits on the Antarctic Plate.


#2428 2025-01-19 18:48:06

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,153

Re: Miscellany

2328) Computer Printer

Gist

A printer is a device that accepts text and graphic output from a computer and transfers the information to paper, usually to standard-size, 8.5" by 11" sheets of paper. Printers vary in size, speed, sophistication and cost.

Summary

A printer is an electronic device that accepts text files or images from a computer and transfers them to a medium such as paper or film. It can be connected directly to the computer or indirectly via a network. Printers are classified as impact printers (in which the print medium is physically struck) and non-impact printers. Most impact printers are dot-matrix printers, which have a number of pins on the print head that emerge to form a character. Non-impact printers fall into three main categories: laser printers use a laser beam to attract toner to an area of the paper; ink-jet printers spray a jet of liquid ink; and thermal printers transfer wax-based ink or use heated pins to directly imprint an image on specially treated paper. Important printer characteristics include resolution (in dots per inch), speed (in sheets of paper printed per minute), colour (full-colour or black-and-white), and cache memory (which affects the speed at which a file can be printed).

Details

What is a printer?

A printer is a device that accepts text and graphic output from a computer and transfers the information to paper, usually to standard-size, 8.5" by 11" sheets of paper. Printers vary in size, speed, sophistication and cost. In general, more expensive printers are used for more frequent printing or high-resolution color printing.

Personal computer printers can be distinguished as impact or non-impact printers. Early impact printers worked something like an automatic typewriter, with a key striking an inked impression on paper for each printed character. The dot matrix printer, an impact printer that strikes the paper a line at a time, was a popular low-cost option.

The best-known non-impact printers are the inkjet printer and the laser printer. The inkjet sprays ink from an ink cartridge at very close range to the paper as it rolls by, while the laser printer uses a laser beam reflected from a mirror to attract ink (called toner) to selected paper areas as a sheet rolls over a drum.

Different types of printers

There are many printer manufacturers today, including Canon, Epson, Hewlett-Packard, Xerox and Lexmark. There are also several types of printers to choose from, which we'll explore below.

* Inkjet printers recreate a digital image by spraying ink onto paper. These are the most common type of personal printer.
* Laser printers are used to create high-quality prints by passing a laser beam at a high speed over a negatively charged drum to define an image. Color laser printers are more often found in professional settings.
* 3D printers are a relatively new printer technology. 3D printing creates a physical object from a digital file. It works by adding layer upon layer of material until the print job is complete and the object is whole.
* Thermal printers produce an image by passing paper with a thermochromic coating over a print head composed of electrically heated elements; the image forms where the heated coating turns black. A dye-sublimation printer is a form of thermal printing technology that uses heat to transfer dye onto materials.
* All-in-one printers are multifunction devices that combine printing with other technologies such as a copier, scanner and/or fax machine.
* LED printers are similar to laser printers but use a light-emitting diode array in the print head instead of a laser.
* Photo printers are similar to inkjet printers but are designed specifically to print high-quality photos, which require a lot of ink and special paper to ensure the ink doesn't smear.

Older printer types

There are a few first-generation printer types that are outdated and rarely used today:

* Dot matrix printer: Dot matrix printing is an older impact printer technology for text documents that strikes the paper one line at a time. Dot matrix printers offer very basic print quality.
* Line printer: A line printer prints a single line of text at a time. While an older form of printing, line printers are still in use today.

Features to look for in a printer

The four printer qualities of most interest to users are:

* Color: Most modern printers offer color printing. However, they can also be set to print in black and white. Color printers are more expensive to operate since they use two ink cartridges -- one color and one black ink -- or toners that need to be replaced after a certain number of pages are printed. Printing ink cartridges or toner cartridges contain black, cyan, magenta and yellow ink. The ink can be mixed together, or it may come in separate monochrome solid-ink printer cartridges, depending on the type of printer.
* Resolution: Printer resolution -- the sharpness of text and images on paper -- is usually measured in dots per inch (dpi). Most inexpensive printers provide sufficient resolution for most purposes at 600 dpi.
* Speed: If a user does a lot of printing, printing speed is an important feature. Inexpensive printers print only about 3 to 6 sheets per minute. However, faster printing speeds are an option with a more sophisticated, expensive printer.
* Memory: Most printers come with a small amount of memory -- typically 2 to 16 megabytes -- that can be expanded by the user. Having more than the minimum amount of memory is helpful and faster when printing out pages with large images, as the sketch after this list illustrates.
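
To make the resolution and memory figures concrete, here is a rough back-of-the-envelope calculation in Python (a sketch only; the 1-bit and 24-bit raster depths are illustrative assumptions, and real printers compress the data they receive):

def page_raster_bytes(width_in=8.5, height_in=11, dpi=600, bits_per_dot=1):
    """Uncompressed raster size, in bytes, of one printed page."""
    dots = (width_in * dpi) * (height_in * dpi)  # total dots on the page
    return dots * bits_per_dot / 8               # 8 bits per byte

# A black-and-white page at 600 dpi holds about 33.7 million dots:
print(f"{page_raster_bytes() / 1e6:.1f} MB")                 # 4.2 MB

# At 24 bits per dot (full color) the same page is about 101 MB, which is
# why extra memory helps when printing pages with large images:
print(f"{page_raster_bytes(bits_per_dot=24) / 1e6:.1f} MB")  # 101.0 MB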

Printer I/O interfaces

For many years, the most common I/O interface for printers was the parallel Centronics interface with a 36-pin plug.

Nowadays, however, printers and computers are likely to use a serial interface, especially USB or FireWire, with smaller and less cumbersome plugs.

Printer languages

Printer languages are commands from the computer to the printer to tell the printer how to format the document being printed. These commands manage font size, graphics, compression of data sent to the printer, color, etc. The two most popular printer languages are PostScript and Printer Control Language.

PostScript

PostScript is a printer language that uses English phrases and programmatic constructions to describe the appearance of a printed page to the printer. Adobe developed the printer language in 1985, introducing features such as outline fonts and vector graphics, which can also be drawn with a plotter.

Printers now come from the factory with (or can be loaded with) PostScript support. PostScript is not restricted to printers; it can be used with any device that creates an image using dots, such as screen displays, slide recorders and imagesetters.
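
To give a feel for what PostScript source looks like, here is a minimal sketch: a Python script that writes out a tiny PostScript program drawing one line of text. The operators used (findfont, moveto, show, showpage) are standard PostScript; the file name is just an example.

ps_program = """%!PS
/Helvetica findfont 24 scalefont setfont  % select a 24-point font
72 720 moveto                             % 1 inch from the left, 10 inches up
(Hello from PostScript) show              % paint the text at that position
showpage                                  % emit the finished page
"""

with open("hello.ps", "w") as f:
    f.write(ps_program)

Sending hello.ps to a PostScript printer, or opening it in an interpreter such as Ghostscript, renders the page described above; the same file prints identically on any device with a PostScript interpreter.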

Printer Control Language (PCL)

PCL (Printer Control Language) is an escape-code language used to send commands to the printer for printing documents. It is called an escape-code language because each command sequence begins with the escape character (ASCII 27), followed by a series of code characters. HP originally devised PCL for dot matrix and inkjet printers.

Since its introduction, PCL has become an industry standard. Other manufacturers who sell HP clones have copied it. Some of these clones are very good, but there are small differences in the way they print a page compared to real HP printers.

In 1984, the original HP LaserJet printer was introduced using PCL, which helped change the appearance of low-cost printer documents from poor to exceptional quality.
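
As a sketch of how these escape sequences are composed, the Python fragment below builds a tiny PCL job. The two commands shown (Esc E for printer reset and Esc &l1O for landscape orientation) are standard PCL; the device path in the final comment is only illustrative.

ESC = b"\x1b"        # the escape character (ASCII 27) that begins each command

pcl_job = (
    ESC + b"E"                # Esc E    : reset the printer to its defaults
    + ESC + b"&l1O"           # Esc &l1O : select landscape orientation
    + b"Hello from PCL\r\n"   # ordinary text is printed as-is
    + ESC + b"E"              # reset again so the next job starts clean
)

# The raw bytes go straight to the printer or its spooler, e.g. on Unix:
#   with open("/dev/usb/lp0", "wb") as lp:
#       lp.write(pcl_job)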

Fonts

A font is a set of characters of a specific style and size within an overall typeface design. Printers use resident fonts and soft fonts to print documents.

Resident fonts

Resident fonts are built into the hardware of a printer. They are also called internal fonts or built-in fonts.

All printers come with one or more resident fonts. Additional fonts can be added by inserting a font cartridge into the printer or installing soft fonts on the hard drive. Resident fonts cannot be erased, unlike soft fonts.

Soft fonts

Soft fonts are installed onto the hard drive or flash drive and then sent to the printer's memory when a document is printed that uses the particular soft font. Soft fonts can be downloaded from the internet or purchased in stores.

Additional Information

In computing, a printer is a peripheral machine which makes a durable representation of graphics or text, usually on paper. While most output is human-readable, bar code printers are an example of an expanded use for printers. Different types of printers include 3D printers, inkjet printers, laser printers, and thermal printers.

History

The first computer printer designed was a mechanically driven apparatus by Charles Babbage for his difference engine in the 19th century; however, his mechanical printer design was not built until 2000.

The first patented printing mechanism for applying a marking medium to a recording medium (more particularly, an electrostatic inking apparatus and a method for electrostatically depositing ink on controlled areas of a receiving medium) was developed in 1962 by C. R. Winston of Teletype Corporation, using continuous inkjet printing. The ink was a red stamp-pad ink manufactured by Phillips Process Company of Rochester, NY under the name Clear Print. This patent (US3060429) led to the Teletype Inktronic Printer product delivered to customers in late 1966.

The first compact, lightweight digital printer was the EP-101, invented by Japanese company Epson and released in 1968, according to Epson.

The first commercial printers generally used mechanisms from electric typewriters and Teletype machines. The demand for higher speed led to the development of new systems specifically for computer use. In the 1980s there were daisy wheel systems similar to typewriters, line printers that produced similar output but at much higher speed, and dot-matrix systems that could mix text and graphics but produced relatively low-quality output. The plotter was used for those requiring high-quality line art like blueprints.

The introduction of the low-cost laser printer in 1984, with the first HP LaserJet, and the addition of PostScript in the following year's Apple LaserWriter set off a revolution in printing known as desktop publishing. Laser printers using PostScript mixed text and graphics, like dot-matrix printers, but at quality levels formerly available only from commercial typesetting systems. By 1990, most simple printing tasks like fliers and brochures were created on personal computers and then laser printed; expensive offset printing systems were being dumped as scrap. The HP Deskjet of 1988 offered the same advantages as a laser printer in terms of flexibility, but produced somewhat lower-quality output (depending on the paper) from much less-expensive mechanisms. Inkjet systems rapidly displaced dot-matrix and daisy-wheel printers from the market. By the 2000s, high-quality printers of this sort had fallen under the $100 price point and became commonplace.

The rapid improvement of internet email through the 1990s and into the 2000s has largely displaced the need for printing as a means of moving documents, and a wide variety of reliable storage systems means that a "physical backup" is of little benefit today.

Starting around 2010, 3D printing became an area of intense interest, allowing the creation of physical objects with the same sort of effort as an early laser printer required to produce a brochure. As of the 2020s, 3D printing has become a widespread hobby due to the abundance of cheap 3D printer kits, with the most common process being fused deposition modeling.

Types:

Personal printer

Personal printers are mainly designed to support individual users, and may be connected to only a single computer. These printers are designed for low-volume, short-turnaround print jobs, requiring minimal setup time to produce a hard copy of a given document. They are generally slow devices ranging from 6 to around 25 pages per minute (ppm), and the cost per page is relatively high. However, this is offset by the on-demand convenience. Some printers can print documents stored on memory cards or from digital cameras and scanners.

Networked printer

Networked or shared printers are "designed for high-volume, high-speed printing". They are usually shared by many users on a network and can print at speeds of 45 to around 100 ppm. The Xerox 9700 could achieve 120 ppm. An ID Card printer is used for printing plastic ID cards. These can now be customised with important features such as holographic overlays, HoloKotes and watermarks. This is either a direct to card printer (the more feasible option) or a retransfer printer.

Virtual printer

A virtual printer is a piece of computer software whose user interface and API resemble those of a printer driver, but which is not connected to a physical printer. A virtual printer can be used to create a file which is an image of the data which would be printed, for archival purposes or as input to another program, for example to create a PDF or to transmit to another system or user.

Barcode printer

A barcode printer is a computer peripheral for printing barcode labels or tags that can be attached to, or printed directly on, physical objects. Barcode printers are commonly used to label cartons before shipment, or to label retail items with UPCs or EANs.

3D printer

A 3D printer is a device for making a three-dimensional object from a 3D model or other electronic data source through additive processes in which successive layers of material (including plastics, metals, food, cement, wood, and other materials) are laid down under computer control. It is called a printer by analogy with an inkjet printer which produces a two-dimensional document by a similar process of depositing a layer of ink on paper.

ID card printer

A card printer is an electronic desktop printer with single card feeders which prints and personalizes plastic cards. In this respect card printers differ from, for example, label printers, which have a continuous supply feed. Card dimensions are usually 85.60 × 53.98 mm, standardized under ISO/IEC 7810 as ID-1. This format is also used in EC-cards, telephone cards, credit cards, driver's licenses and health insurance cards, and is commonly known as the bank card format. Card printers are controlled by corresponding printer drivers or by means of a specific programming language. Generally card printers are designed with laminating, striping, and punching functions, and use desktop or web-based software. The hardware features of a card printer differentiate it from more traditional printers, as ID cards are usually made of PVC plastic and require laminating and punching. Different card printers can accept different card thicknesses and dimensions.
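
For a sense of scale, the ID-1 dimensions convert neatly into print dots at the standard 300 dpi resolution quoted in the next paragraph; a quick Python check:

MM_PER_INCH = 25.4

def mm_to_dots(mm, dpi=300):
    """Convert a length in millimetres to whole print dots."""
    return round(mm / MM_PER_INCH * dpi)

# ISO/IEC 7810 ID-1: 85.60 mm x 53.98 mm
print(mm_to_dots(85.60), "x", mm_to_dots(53.98), "dots")  # 1011 x 638 dots
print(f"{300 / MM_PER_INCH:.1f} dots per mm")             # 11.8, as quoted below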

The principle is the same for practically all card printers: the plastic card is passed through a thermal print head at the same time as a color ribbon. The color from the ribbon is transferred onto the card through the heat given out from the print head. The standard performance for card printing is 300 dpi (300 dots per inch, equivalent to 11.8 dots per mm). There are different printing processes, which vary in their detail:

Thermal transfer

Mainly used to personalize pre-printed plastic cards in monochrome. The color is "transferred" from the (monochrome) color ribbon.

Dye sublimation

This process uses four panels of color according to the CMYK color ribbon. The card to be printed passes under the print head several times, each time with the corresponding ribbon panel. Each color in turn is diffused (sublimated) directly onto the card. Thus it is possible to produce a high depth of color (up to 16 million shades) on the card. Afterwards a transparent overlay (O), also known as a topcoat (T), is placed over the card to protect it from mechanical wear and tear and to render the printed image UV resistant.
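
The "16 million shades" figure is simple panel arithmetic: assuming each of the three colour panels can deposit one of 256 intensity levels (a typical value, though the exact count varies by print head), the combinations multiply out as follows:

levels_per_panel = 256   # assumed intensity levels per dye panel
colour_panels = 3        # yellow, magenta and cyan; black/overlay are separate
print(f"{levels_per_panel ** colour_panels:,} shades")   # 16,777,216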

Reverse image technology

The standard for high-security card applications that use contact and contactless smart chip cards. The technology prints images onto the underside of a special film that fuses to the surface of a card through heat and pressure. Since this process transfers dyes and resins directly onto a smooth, flexible film, the print-head never comes in contact with the card surface itself. As such, card surface interruptions such as smart chips, ridges caused by internal RFID antennae and debris do not affect print quality. Even printing over the edge is possible.

Thermal rewrite print process

In contrast to the majority of other card printers, in the thermal rewrite process the card is not personalized through the use of a color ribbon, but by activating a thermal sensitive foil within the card itself. These cards can be repeatedly personalized, erased and rewritten. The most frequent use of these is in chip-based student identity cards, whose validity changes every semester.

Common printing problems

Many printing problems are caused by physical defects in the card material itself, such as deformation or warping of the card that is fed into the machine in the first place. Printing irregularities can also result from chip or antenna embedding that alters the thickness of the plastic and interferes with the printer's effectiveness. Other issues are often caused by operator errors, such as users attempting to feed non-compatible cards into the card printer, while other printing defects may result from environmental abnormalities such as dirt or contaminants on the card or in the printer. Reverse transfer printers are less vulnerable to common printing problems than direct-to-card printers, since with these printers the card does not come into direct contact with the printhead.

Variations

Broadly speaking there are three main types of card printers, differing mainly by the method used to print onto the card. They are:

Near to Edge

This term designates the cheapest type of printing by card printers. These printers print up to 5 mm from the edge of the card stock.

Direct to Card

Also known as "Edge to Edge Printing". The print-head comes in direct contact with the card. This printing type is the most popular nowadays, mostly due to cost factor. The majority of identification card printers today are of this type.

Reverse Transfer

Also known as "High Definition Printing" or "Over the Edge Printing". The print-head prints to a transfer film backwards (hence the reverse) and then the printed film is rolled onto the card with intense heat (hence the transfer). The term "over the edge" is due to the fact that when the printer prints onto the film it has a "bleed", and when rolled onto the card the bleed extends to completely over the edge of the card, leaving no border.

Different ID card printers use different encoding techniques to facilitate disparate business environments and to support security initiatives. Known encoding techniques are:

* Contact Smart Card

Contact smart cards require direct contact with a conductive plate to register admission or transfer of information. The transmission of commands, data, and card status takes place over these physical contact points.

* Contactless Smart Card

Contactless smart cards contain an integrated circuit that can store and process data while communicating with the terminal via radio frequency. Unlike contact smart cards, contactless cards feature an intelligent, re-writable microchip that can be written to through radio waves.

* HID Proximity

HID's proximity technology allows fast, accurate reading while offering card or key tag read ranges from 4 to 24 inches (about 10 to 61 cm), depending on the type of proximity reader being used. Since these cards and key tags do not require physical contact with the reader, they are virtually maintenance- and wear-free.

* ISO Magnetic Stripe

A magnetic stripe card is a type of card capable of storing data by modifying the magnetism of tiny iron-based magnetic particles on a band of magnetic material on the card. The magnetic stripe, sometimes called swipe card or magstripe, is read by physical contact and swiping past a magnetic reading head.

Software

There are basically two categories of card printer software: desktop-based and web-based (online). The biggest difference between the two is whether or not a customer has a printer on their network that is capable of printing identification cards. If a business already owns an ID card printer, then a desktop-based badge maker is probably suitable for their needs. Typically, large organizations with high employee turnover will have their own printer. A desktop-based badge maker is also required if a company needs its IDs made instantly; an example of this is a private construction site with restricted access. However, if a company does not already have a local (or network) printer with the features they need, then the web-based option is perhaps a more affordable solution. The web-based solution is good for small businesses that do not anticipate a lot of rapid growth, or organizations that either cannot afford a card printer or do not have the resources to learn how to set up and use one. Generally speaking, desktop-based solutions involve software and a database (or spreadsheet) and can be installed on a single computer or network.

Other options

Alongside the basic function of printing cards, card printers can also read and encode magnetic stripes as well as contact and contact-free RFID chip cards (smart cards). Thus card printers enable plastic cards to be encoded both visually and logically. Plastic cards can also be laminated after printing, which considerably increases durability and provides a greater degree of counterfeit prevention. Some card printers come with an option to print both sides at the same time, which cuts down printing time and reduces the margin of error. In such printers, one side of the ID card is printed, the card is flipped in the flip station, and the other side is printed.

Applications

Alongside the traditional uses in time attendance and access control (in particular with photo personalization), countless other applications have been found for plastic cards, e.g. for personalized customer and members' cards, for sports ticketing and in local public transport systems for the production of season tickets, for the production of school and college identity cards as well as for the production of national ID cards.

[Image: Computer printer equipment]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2429 2025-01-20 00:25:47

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,153

Re: Miscellany

2329) Osteoarthritis

Gist

Osteoarthritis is the most common type of arthritis (a condition that affects your joints). Healthcare providers sometimes refer to it as degenerative joint disease or OA. It happens when the cartilage that lines your joints is worn down over time and your bones rub against each other when you use your affected joints.

Usually, the ends of bones in your joints are capped in a layer of tough, smooth cartilage. Cartilage is like a two-in-one shock absorber and lubricant — it helps the bones in your joints move past each other smoothly and safely. If you have osteoarthritis, the cartilage in your affected joints wears away over time. Eventually, your bones rub against each other when you move your joints.

Osteoarthritis can affect any of your joints, but most commonly develops in your:

* Hands.
* Knees.
* Hips.
* Neck (cervical spine).
* Lower back (lumbar spine).

Summary

Osteoarthritis is a disorder of the joints characterized by progressive deterioration of the articular cartilage or of the entire joint, including the articular cartilage, the synovium (joint lining), the ligaments, and the subchondral bone (bone beneath the cartilage). Osteoarthritis is the most common joint disease, although estimates of incidence and prevalence vary across different regions of the world and among different populations. By some estimates nearly 10 percent of men and about 18 percent of women over age 60 are affected by the condition. Although its suffix indicates otherwise, osteoarthritis is not characterized by excessive joint inflammation as is the case with rheumatoid arthritis. The disease may be asymptomatic, especially in the early years of its onset. As it progresses, however, pain, stiffness, and a limitation in movement may develop. Common sites of discomfort are the vertebrae, knees, and hips—joints that bear much of the weight of the body.

The cause of osteoarthritis is not completely understood, but biomechanical forces that place stress on the joints (e.g., bearing weight, postural or orthopedic abnormalities, or injuries that cause chronic irritation of the bone) are thought to interact with biochemical and genetic factors to contribute to osteoarthritis. Early stages of the condition are characterized by changes in cartilage thickness, which in turn are associated with an imbalance between cartilage breakdown and repair. The cartilage eventually becomes softened and roughened. Over time the cartilage wears away, and the subchondral bone, deprived of its protective cover, attempts to regenerate the destroyed tissue, resulting in increased bone density at the site of damage and an uneven remodeling of the surface of the joint. Thick bony outgrowths called spurs sometimes develop. Articulation of the joint becomes difficult. These developments are compounded by a reduction in synovial fluid, which acts as a natural joint lubricant and shock absorber.

Depending on the site and severity of the disease, various treatments are employed. Individuals who experience moderate symptoms can be treated by a combination of the following: analgesic (pain-relieving) medications, periodic rest, weight reduction, corticosteroid injections, and physical therapy or exercise. Surgical procedures such as hip or knee replacement or joint debridement (the removal of unhealthy tissue) may be necessary to relieve more severe pain and improve joint function. Injections of a joint lubricant consisting of hyaluronic acid, a substance normally found in synovial fluid, can help relieve pain and joint stiffness in some persons with osteoarthritis.

Researchers have also been investigating the therapeutic potential of the purine nucleoside adenosine, a substance that is found naturally in cells and that has been developed into a drug for medical use. Studies in animals have shown that replenishing adenosine levels in diseased joints can aid cartilage regrowth.

Details

Osteoarthritis (OA) is a type of degenerative joint disease that results from breakdown of joint cartilage and underlying bone. It is believed to be the fourth leading cause of disability in the world, affecting 1 in 7 adults in the United States alone. The most common symptoms are joint pain and stiffness. Usually the symptoms progress slowly over years. Other symptoms may include joint swelling, decreased range of motion, and, when the back is affected, weakness or numbness of the arms and legs. The most commonly involved joints are the two near the ends of the fingers and the joint at the base of the thumbs, the knee and hip joints, and the joints of the neck and lower back. The symptoms can interfere with work and normal daily activities. Unlike some other types of arthritis, only the joints, not internal organs, are affected.

Causes include previous joint injury, abnormal joint or limb development, and inherited factors. Risk is greater in those who are overweight, have legs of different lengths, or have jobs that result in high levels of joint stress. Osteoarthritis is believed to be caused by mechanical stress on the joint and low grade inflammatory processes. It develops as cartilage is lost and the underlying bone becomes affected. As pain may make it difficult to exercise, muscle loss may occur. Diagnosis is typically based on signs and symptoms, with medical imaging and other tests used to support or rule out other problems. In contrast to rheumatoid arthritis, in osteoarthritis the joints do not become hot or red.

Treatment includes exercise, decreasing joint stress such as by rest or use of a cane, support groups, and pain medications. Weight loss may help in those who are overweight. Pain medications may include paracetamol (acetaminophen) as well as NSAIDs such as naproxen or ibuprofen. Long-term opioid use is not recommended due to lack of information on benefits as well as risks of addiction and other side effects. Joint replacement surgery may be an option if there is ongoing disability despite other treatments. An artificial joint typically lasts 10 to 15 years.

Osteoarthritis is the most common form of arthritis, affecting about 237 million people or 3.3% of the world's population, as of 2015. It becomes more common as people age. Among those over 60 years old, about 10% of males and 18% of females are affected. Osteoarthritis is the cause of about 2% of years lived with disability.

Signs and symptoms

The main symptom is pain, causing loss of ability and often stiffness. The pain is typically made worse by prolonged activity and relieved by rest. Stiffness is most common in the morning, and typically lasts less than thirty minutes after beginning daily activities, but may return after periods of inactivity. Osteoarthritis can cause a crackling noise (called "crepitus") when the affected joint is moved, especially in the shoulder and knee joints. A person may also complain of joint locking and joint instability. These symptoms can interfere with daily activities due to pain and stiffness. Some people report increased pain associated with cold temperature, high humidity, or a drop in barometric pressure, but studies have had mixed results.

Osteoarthritis commonly affects the hands, feet, spine, and the large weight-bearing joints, such as the hips and knees, although in theory, any joint in the body can be affected. As osteoarthritis progresses, movement patterns (such as gait), are typically affected. Osteoarthritis is the most common cause of a joint effusion of the knee.

In smaller joints, such as at the fingers, hard bony enlargements, called Heberden's nodes (on the distal interphalangeal joints) or Bouchard's nodes (on the proximal interphalangeal joints), may form, and though they are not necessarily painful, they do limit the movement of the fingers significantly. Osteoarthritis of the toes may be a factor causing formation of bunions, rendering them red or swollen.

Causes

Damage from mechanical stress with insufficient self repair by joints is believed to be the primary cause of osteoarthritis. Sources of this stress may include misalignments of bones caused by congenital or pathogenic causes; mechanical injury; excess body weight; loss of strength in the muscles supporting a joint; and impairment of peripheral nerves, leading to sudden or uncoordinated movements. The risk of osteoarthritis increases with aging, history of joint injury, or family history of osteoarthritis. However, exercise, including running in the absence of injury, has not been found to increase the risk of knee osteoarthritis. Nor has cracking one's knuckles been found to play a role.

Primary

The development of osteoarthritis is correlated with a history of previous joint injury and with obesity, especially with respect to knees. Changes in sex hormone levels may play a role in the development of osteoarthritis, as it is more prevalent among post-menopausal women than among men of the same age. Conflicting evidence exists for the differences in hip and knee osteoarthritis in African Americans and Caucasians.

Occupational

Increased risk of developing knee and hip osteoarthritis was found among those who work with manual handling (e.g. lifting), have physically demanding work, walk at work, and have climbing tasks at work (e.g. climb stairs or ladders). With hip osteoarthritis, in particular, increased risk of development over time was found among those who work in bent or twisted positions. For knee osteoarthritis, in particular, increased risk was found among those who work in a kneeling or squatting position, experience heavy lifting in combination with a kneeling or squatting posture, and work standing up. Women and men have similar occupational risks for the development of osteoarthritis.

Secondary

This type of osteoarthritis is caused by other factors but the resulting pathology is the same as for primary osteoarthritis:

* Alkaptonuria
* Congenital disorders of joints
* Diabetes doubles the risk of having a joint replacement due to osteoarthritis and people with diabetes have joint replacements at a younger age than those without diabetes.
* Ehlers-Danlos syndrome
* Hemochromatosis and Wilson's disease
* Inflammatory diseases, such as Perthes' disease, Lyme disease, and all chronic forms of arthritis (e.g., costochondritis, gout, and rheumatoid arthritis). In gout, uric acid crystals cause the cartilage to degenerate at a faster pace.
* Injury to joints or ligaments (such as the ACL) as a result of an accident or orthopedic operations.
* Ligamentous deterioration or instability may be a factor.
* Marfan syndrome
* Obesity
* Joint infection

Pathophysiology

While osteoarthritis is a degenerative joint disease that may cause gross cartilage loss and morphological damage to other joint tissues, more subtle biochemical changes occur in the earliest stages of osteoarthritis progression. The water content of healthy cartilage is finely balanced by compressive force driving water out and hydrostatic and osmotic pressure drawing water in. Collagen fibres exert the compressive force, whereas the Gibbs–Donnan effect and cartilage proteoglycans create osmotic pressure which tends to draw water in.

However, during onset of osteoarthritis, the collagen matrix becomes more disorganized and there is a decrease in proteoglycan content within cartilage. The breakdown of collagen fibers results in a net increase in water content. This increase occurs because whilst there is an overall loss of proteoglycans (and thus a decreased osmotic pull), it is outweighed by a loss of collagen.

Other structures within the joint can also be affected. The ligaments within the joint become thickened and fibrotic, and the menisci can become damaged and wear away. Menisci can be completely absent by the time a person undergoes a joint replacement. New bone outgrowths, called "spurs" or osteophytes, can form on the margins of the joints, possibly in an attempt to improve the congruence of the articular cartilage surfaces in the absence of the menisci. The subchondral bone volume increases and becomes less mineralized (hypomineralization). All these changes can impair joint function. The pain in an osteoarthritic joint has been related to thickened synovium and to subchondral bone lesions.

Additional Information

Osteoarthritis is a degenerative joint disease, in which the tissues in the joint break down over time. It is the most common type of arthritis and is more common in older people.

People with osteoarthritis usually have joint pain and, after rest or inactivity, stiffness for a short period of time. The most commonly affected joints include the:

* Hands (ends of the fingers and at the base and ends of the thumbs).
* Knees.
* Hips.
* Neck.
* Lower back.

Osteoarthritis affects each person differently. For some people, osteoarthritis is relatively mild and does not affect day-to-day activities. For others, it causes significant pain and disability. Joint damage usually develops gradually over years, although it could worsen quickly in some people.

What happens in osteoarthritis?

Researchers do not know what triggers or starts the breakdown of the tissues in the joint.  However, as osteoarthritis begins to develop, it can damage all the areas of the joint, including:

* Cartilage, the tissue that covers the ends where two bones meet to form a joint.
* Tendons and ligaments.
* Synovium, the lining of the joint.
* Bone.
* Meniscus in the knee.

As the damage of soft tissues in the joint progresses, pain, swelling, and loss of joint motion develops. If you have joint pain, you may be less active, and this can lead to muscle weakness, which may cause more stress on the joint. Over time, the joint may lose its normal shape. Also, small bone growths, called osteophytes or bone spurs, may grow on the edges of the joint. The shape of the bone may also change. Bits of bone or cartilage can also break off and float inside the joint space. This causes more damage. Researchers continue to study the cause of pain in people who have osteoarthritis.

Who Gets Osteoarthritis?

Anyone can get osteoarthritis; however, it is more common as people age. Women are more likely than men to have osteoarthritis, especially after age 50. For many women, it develops after menopause.

Younger people can also develop osteoarthritis, usually as the result of:

* Joint injury.
* Abnormal joint structure.
* Genetic defect in joint cartilage.

Symptoms of Osteoarthritis

The symptoms of osteoarthritis often begin slowly, usually with one or a few joints. The common symptoms of osteoarthritis include:

* Pain when using the joint, which may improve with rest. For some people, in the later stages of the disease, the pain may be worse at night. Pain can be localized or widespread.
* Joint stiffness, usually lasting less than 30 minutes, in the morning or after resting for a period of time.
* Joint changes that can limit joint movement.
* Swelling in and around the joint, especially after a lot of activity or use of that area.
* Changes in the ability to move the joint.
* Feeling that the joint is loose or unstable.

Osteoarthritis symptoms can affect joints differently. For example:

* Hands. Bony enlargements and shape changes in the finger joints can happen over time.
* Knees. When walking or moving, you may hear a grinding or scraping noise. Over time, muscle and ligament weakness can cause the knee to buckle.
* Hips. You might feel pain and stiffness in the hip joint or in the groin, inner thigh, or buttocks. Sometimes, the pain from arthritis in the hip can radiate (spread) to the knees. Over time, you may not be able to move your hip as far as you did in the past.
* Spine. You may feel stiffness and pain in the neck or lower back. As changes in the spine happen, some people develop spinal stenosis, which can lead to other symptoms.

As your symptoms worsen over time, everyday activities become more difficult to do, such as stepping up, getting on or off the toilet or in and out of a chair, gripping a pan, or walking across a parking lot.

Pain and other symptoms of osteoarthritis may lead you to feel tired, have problems sleeping, and feel depressed.

Cause of Osteoarthritis

Osteoarthritis happens when the cartilage and other tissues within the joint break down or have a change in their structure. This does not happen because of simple wear and tear on the joints. Instead, changes in the tissue can trigger the breakdown, which usually happens gradually over time.

Certain factors may make it more likely for you to develop the disease, including:

* Aging.
* Being overweight or obese.
* History of injury or surgery to a joint.
* Overuse from repetitive movements of the joint.
* Joints that do not form correctly.
* Family history of osteoarthritis.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2430 2025-01-20 18:23:25

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,153

Re: Miscellany

2330) Globe

Gist

A globe is a three-dimensional scale model of the Earth or other round body. Because it is spherical, or ball-shaped, it can represent surface features, directions, and distances more accurately than a flat map.

Summary

A globe is a three-dimensional scale model of the Earth or other round body. Because it is spherical, or ball-shaped, it can represent surface features, directions, and distances more accurately than a flat map. On the other hand, a globe may be less practical for travelers, since globes are much bulkier than flat maps and often carry less detailed information.

The oldest known globe was made more than 2,100 years ago by Crates of Mallus, a Greek philosopher and geographer who lived in what is today Turkey. The oldest globe that survives to this day was made by the German geographer Martin Behaim in 1492—just before Christopher Columbus sailed to the New World. This globe is more accurate than Crates', but still leaves out North America, South America, Australia, and Antarctica.

The Earth is not the only planet that has been mapped onto a globe. In the past few decades, spacecraft have made detailed maps of the surfaces of other planets and moons. Globes for some of them, such as the planet Mars and our own Moon, are available for purchase.

Even the night sky around the Earth, known as the celestial sphere, has been mapped onto a globe. Celestial globes represent stars and planets visible above certain parts of the Earth. Many constellations, such as the Big Dipper, are outlined into familiar shapes on celestial globes. Looking for these patterns makes individual stars easier to spot.

Like most early terrestrial globes, most early celestial globes were made of metal. Metal globes are usually cast in two halves, or hemispheres. These halves are then welded together with hot metal, creating a seam, or raised line, in the middle of the sphere. It is nearly impossible to create seamless globes—globes that are made of a single piece of metal. Nevertheless, astronomers and metalsmiths in what is today India and Pakistan created such celestial globes in the 1500s.

An ancient type of globe is the armillary sphere. An armillary sphere has a mini-globe of Earth surrounded by rings representing movement of visible stars and planets. The rings are adjustable, so they reflect the stars and planets visible at different times of the year in different places on the globe. Before the invention of the telescope, armillary spheres were the most important tools astronomers had. In fact, celestial globes and armillary spheres have likely been used at least as long as terrestrial globes, if not longer.

Details

A globe is a spherical model of Earth, of some other celestial body, or of the celestial sphere. Globes serve purposes similar to maps, but, unlike maps, they do not distort the surface that they portray except to scale it down. A model globe of Earth is called a terrestrial globe. A model globe of the celestial sphere is called a celestial globe.

A globe shows details of its subject. A terrestrial globe shows landmasses and water bodies. It might show nations and major cities and the network of latitude and longitude lines. Some have raised relief to show mountains and other large landforms. A celestial globe shows notable stars, and may also show positions of other prominent astronomical objects. Typically, it will also divide the celestial sphere into constellations.

The word globe comes from the Latin word globus, meaning "sphere". Globes have a long history. The first known mention of a globe is from Strabo, describing the Globe of Crates from about 150 BC. The oldest surviving terrestrial globe is the Erdapfel, made by Martin Behaim in 1492. The oldest surviving celestial globe sits atop the Farnese Atlas, carved in the 2nd century Roman Empire.

Terrestrial and planetary

Flat maps are created using a map projection that inevitably introduces an increasing amount of distortion the larger the area that the map shows. A globe is the only representation of the Earth that does not distort either the shape or the size of large features – land masses, bodies of water, etc.

The Earth's circumference is quite close to 40 million metres. Many globes are made with a circumference of one metre, so they are models of the Earth at a scale of 1:40 million. In imperial units, many globes are made with a diameter of one foot (about 30 cm), yielding a circumference of 3.14 feet (about 96 cm) and a scale of 1:42 million. Globes are also made in many other sizes.

Some globes have surface texture showing topography or bathymetry. In these, elevations and depressions are purposely exaggerated, as they otherwise would be hardly visible. For example, one manufacturer produces a three dimensional raised relief globe with a 64 cm (25 in) diameter (equivalent to a 200 cm circumference, or approximately a scale of 1:20 million) showing the highest mountains as over 2.5 cm (1 in) tall, which is about 57 times higher than the correct scale of Mount Everest.
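
These scale figures are easy to verify. The short Python sketch below derives a globe's scale from its diameter and checks the quoted relief exaggeration; Everest's height of 8,849 m is the only figure not taken from the text above.

import math

EARTH_CIRCUMFERENCE_M = 40_000_000   # "quite close to 40 million metres"

def globe_scale(diameter_m):
    """Scale denominator of a globe with the given diameter (in metres)."""
    return EARTH_CIRCUMFERENCE_M / (math.pi * diameter_m)

print(f"1:{globe_scale(0.3048):,.0f}")   # one-foot globe: ~1:42 million

# The 64 cm relief globe is about 1:20 million, so Mount Everest (8,849 m)
# should be only ~0.44 mm tall; modelled at over 2.5 cm, it is exaggerated
# roughly 57-fold, as stated above.
true_everest_m = 8_849 / globe_scale(0.64)
print(f"{true_everest_m * 1000:.2f} mm")   # 0.44 mm
print(f"{0.025 / true_everest_m:.0f}x")    # ~56x with these rounded inputs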

Most modern globes are also imprinted with parallels and meridians, so that one can tell the approximate coordinates of a specific location. Globes may also show the boundaries of countries and their names.

Many terrestrial globes have one celestial feature marked on them: a diagram called the analemma, which shows the apparent motion of the Sun in the sky during a year.

Globes generally show north at the top, but many globes allow the axis to be swiveled so that southern portions can be viewed conveniently. This capability also permits exploring the Earth from different orientations to help counter the north-up bias caused by conventional map presentation.

Celestial

Celestial globes show the apparent positions of the stars in the sky. They omit the Sun, Moon and planets because the positions of these bodies vary relative to those of the stars, but the ecliptic, along which the Sun moves, is indicated. In their most basic form celestial globes represent the stars as if the viewer were looking down upon the sky as a globe that surrounds the earth.

History

The sphericity of the Earth was established by Greek astronomy in the 3rd century BC, and the earliest terrestrial globes date from that period. The earliest known example is the one constructed by Crates of Mallus in Cilicia (now Çukurova in modern-day Turkey), in the mid-2nd century BC.

No terrestrial globes from Antiquity have survived. An example of a surviving celestial globe is part of a Hellenistic sculpture, called the Farnese Atlas, surviving in a 2nd-century AD Roman copy in the Naples Archaeological Museum, Italy.

Early terrestrial globes depicting the entirety of the Old World were constructed in the Islamic world. During the Middle Ages in Christian Europe, while there are writings alluding to the idea that the earth was spherical, no known attempts at making a globe took place before the fifteenth century. The earliest extant terrestrial globe was made in 1492 by Martin Behaim (1459–1537) with help from the painter Georg Glockendon. Behaim was a German mapmaker, navigator, and merchant. Working in Nuremberg, Germany, he called his globe the "Nürnberg Terrestrial Globe." It is now known as the Erdapfel. Before constructing the globe, Behaim had traveled extensively. He sojourned in Lisbon from 1480, developing commercial interests and mingling with explorers and scientists. He began to construct his globe after his return to Nürnberg in 1490.

China made many mapping advancements such as sophisticated land surveys and the invention of the magnetic compass. However, no record of terrestrial globes in China exists until a globe was introduced by the Persian astronomer, Jamal ad-Din, in 1276.

Another early globe, the Hunt–Lenox Globe, ca. 1510, is thought to be the source of the phrase Hic Sunt Dracones, or "Here be dragons". A similar grapefruit-sized globe made from two halves of an ostrich egg was found in 2012 and is believed to date from 1504. It may be the oldest globe to show the New World. Stefaan Missine, who analyzed the globe for the Washington Map Society journal Portolan, said it was "part of an important European collection for decades." After a year of research in which he consulted many experts, Missine concluded the Hunt–Lenox Globe was a copper cast of the egg globe.

A facsimile globe showing America was made by Martin Waldseemüller in 1507. Another "remarkably modern-looking" terrestrial globe of the Earth was constructed by Taqi al-Din at his Constantinople observatory during the 1570s.

The world's first seamless celestial globe was built by Mughal scientists under the patronage of Jahangir.

The Globus IMP, an electro-mechanical device incorporating a five-inch globe, was used in Soviet and Russian spacecraft from 1961 to 2002 as a navigation instrument. In 2001, the TMA version of the Soyuz spacecraft replaced this instrument with a digital map.

Manufacture

Traditionally, globes were manufactured by gluing a printed paper map onto a sphere, often made from wood.

The most common type has long, thin gores (strips) of paper that narrow to a point at the poles; small disks cover the inevitable irregularities at these points. The more gores there are, the less stretching and crumpling is required to make the paper map fit the sphere. This method of globe making was illustrated in 1802 in an engraving in The English Encyclopedia by George Kearsley.

Modern globes are often made from thermoplastic. Flat, plastic disks are printed with a distorted map of one of the Earth's hemispheres. This is placed in a machine which molds the disk into a hemispherical shape. The hemisphere is united with its opposite counterpart to form a complete globe.

Usually a globe is mounted so that its rotation axis is 23.5° (0.41 rad) from vertical, which is the angle the Earth's rotation axis deviates from perpendicular to the plane of its orbit. This mounting makes it easy to visualize how seasons change.

In the 1800s, small pocket globes (less than 3 inches in diameter) were status symbols for gentlemen and educational toys for rich children.

Examples

Sorted in decreasing sizes:

* The Unisphere in Flushing Meadows, New York, near the USTA Billie Jean King National Tennis Center, at 37 m (120 ft) in diameter, is the world's largest geographical globe. This corresponds to a scale of about 1:350,000 (see the check after this list). (There are larger spherical structures, such as the Cinesphere in Toronto, Ontario, Canada, but this does not have geographical or astronomical markings.)
* Wyld's Great Globe, located in London's Leicester Square from 1851 to 1862, was a hollow globe 60 feet 4 inches (18.39 m) in diameter designed by mapmaker James Wyld. Visitors could climb stairs to view a plaster of Paris model of the Earth's surface, complete with mountains and rivers to scale.
* Eartha, the world's largest rotating globe with a diameter of 12 m (41 ft), is located at the DeLorme headquarters in Yarmouth, Maine. This corresponds to a scale of about 1:1.1 million. Eartha was constructed in 1998.
* The P-I Globe, a 13.5-ton, 30-foot (9.1 m) neon globe with the rotating words "It's in the P-I" and an 18-foot eagle, was made in 1948 for the Seattle Post-Intelligencer's headquarters. It was moved to the newspaper's new location in 1986.
* The Great Globe at Swanage is a stone sphere that stands at Durlston Castle within Durlston Country Park, England. Measuring about 10 feet (3.0 m) in diameter and weighing 40 tons, this intricately carved globe, crafted from Portland stone, showcases the continents, oceans, and specific regions of the world.
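
Reusing the globe_scale sketch from the Manufacture section above, the scales quoted for the two largest globes check out (diameters taken from the list):

print(f"Unisphere, 37 m: 1:{globe_scale(37):,.0f}")  # ~1:344,000, about 1:350,000
print(f"Eartha, 12 m:    1:{globe_scale(12):,.0f}")  # ~1:1,061,000, about 1:1.1 million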

Additional Information

A globe is the most common general-use model of spherical Earth. It is a sphere or ball that bears a map of the Earth on its surface and is mounted on an axle that permits rotation. The ancient Greeks, who knew the Earth to be a sphere, were the first to use globes to represent the surface of the Earth. Crates of Mallus is said to have made one in about 150 bce. The earliest surviving terrestrial globe was made in Nürnberg in 1492 by Martin Behaim, who almost undoubtedly influenced Christopher Columbus to attempt to sail west to the Orient. In ancient times, “celestial globes” were used to represent the constellations; the earliest surviving one is the marble Farnese globe, a celestial globe dating from about 25 ce.

Today’s globe, typically hollow, may be made of almost any light, strong material, such as cardboard, plastic, or metal. Some are translucent. They may also be inflatable. Terrestrial globes are usually mounted with the axis tilted 23.5° from the vertical, to help simulate the inclination of the Earth relative to the plane in which it orbits the Sun. Terrestrial globes may be physical, showing natural features such as deserts and mountain ranges (sometimes molded in relief), or political, showing countries, cities, etc. While most globes emphasize the surface of the land, a globe may also show the bottom of the sea. Globes also can be made to depict the surfaces of spherical bodies other than the Earth, for example, the Moon. Celestial globes are also still in use.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2431 2025-01-21 00:11:49

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,153

Re: Miscellany

2331) North Pole

Gist

North Pole, the northern end of Earth's axis, lying in the Arctic Ocean, about 450 miles (725 km) north of Greenland.

Summary

North Pole, the northern end of Earth’s axis, lying in the Arctic Ocean, about 450 miles (725 km) north of Greenland. This geographic North Pole does not coincide with the magnetic North Pole—to which magnetic compasses point and which in the early 21st century lay north of the Queen Elizabeth Islands of extreme northern Canada at approximately 82°15′ N 112°30′ W (it is steadily migrating northwest)—or with the geomagnetic North Pole, the northern end of Earth’s geomagnetic field (about 79°30′ N 71°30′ W). The geographic pole, located at a point where the ocean depth is about 13,400 feet (4,080 metres) deep and covered with drifting pack ice, experiences six months of complete sunlight and six months of total darkness each year.
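
The coordinates above translate into distances with a standard great-circle calculation. The Python sketch below assumes a spherical Earth of mean radius 6,371 km; the magnetic-pole position is the early-21st-century one quoted above.

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two points on a sphere."""
    p1, p2 = radians(lat1), radians(lat2)
    a = (sin((p2 - p1) / 2) ** 2
         + cos(p1) * cos(p2) * sin(radians(lon2 - lon1) / 2) ** 2)
    return 2 * radius_km * asin(sqrt(a))

# Geographic North Pole (90° N; any longitude works there) to the magnetic
# pole position quoted above (82°15' N, 112°30' W):
print(f"{haversine_km(90, 0, 82.25, -112.5):.0f} km")   # ~862 km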

The American explorer Robert E. Peary claimed to have reached the pole by dog sledge in April 1909, and another American explorer, Richard E. Byrd, claimed to have reached it by airplane on May 9, 1926; the claims of both men were later questioned. Three days after Byrd’s attempt, on May 12, the pole was definitely reached by an international team of Roald Amundsen, Lincoln Ellsworth, and Umberto Nobile, who traversed the polar region in a dirigible. The first ships to visit the pole were the U.S. nuclear submarines Nautilus (1958) and Skate (1959), the latter surfacing through the ice, and the Soviet icebreaker Arktika was the first surface ship to reach it (1977). Other notable surface expeditions include the first confirmed to reach the pole (1968; via snowmobile), the first to traverse the polar region (1969; Alaska to Svalbard, via dog sled), and the first to travel to the pole and back without resupply (1986; also via dog sled); the last expedition also included the first woman to reach the pole, American Ann Bancroft. After reaching the South Pole on January 11, 1986, the British explorer Robert Swan led an expedition to the North Pole, reaching his destination on May 14, 1989 and thereby becoming the first person to walk to both poles.

Details

The North Pole, also known as the Geographic North Pole or Terrestrial North Pole, is the point in the Northern Hemisphere where the Earth's axis of rotation meets its surface. It is called the True North Pole to distinguish from the Magnetic North Pole.

The North Pole is by definition the northernmost point on the Earth, lying antipodally to the South Pole. It defines geodetic latitude 90° North, as well as the direction of true north. At the North Pole all directions point south; all lines of longitude converge there, so its longitude can be defined as any degree value. No time zone has been assigned to the North Pole, so any time can be used as the local time. Along tight latitude circles, counterclockwise is east and clockwise is west. The North Pole is at the center of the Northern Hemisphere. The nearest land is usually said to be Kaffeklubben Island, off the northern coast of Greenland about 700 km (430 mi) away, though some perhaps semi-permanent gravel banks lie slightly closer. The nearest permanently inhabited place is Alert on Ellesmere Island, Canada, which is located 817 km (508 mi) from the Pole.

While the South Pole lies on a continental land mass, the North Pole is located in the middle of the Arctic Ocean amid waters that are almost permanently covered with constantly shifting sea ice. The sea depth at the North Pole has been measured at 4,261 m (13,980 ft) by the Russian Mir submersible in 2007 and at 4,087 m (13,409 ft) by USS Nautilus in 1958. This makes it impractical to construct a permanent station at the North Pole (unlike the South Pole). However, the Soviet Union, and later Russia, constructed a number of manned drifting stations on a generally annual basis since 1937, some of which have passed over or very close to the Pole. Since 2002, a group of Russians have also annually established a private base, Barneo, close to the Pole. This operates for a few weeks during early spring. Studies in the 2000s predicted that the North Pole may become seasonally ice-free because of Arctic ice shrinkage, with timescales varying from 2016 to the late 21st century or later.

Attempts to reach the North Pole began in the late 19th century, with the record for "Farthest North" being surpassed on numerous occasions. The first undisputed expedition to reach the North Pole was that of the airship Norge, which overflew the area in 1926 with 16 men on board, including expedition leader Roald Amundsen. Three prior expeditions – led by Frederick Cook (1908, land), Robert Peary (1909, land) and Richard E. Byrd (1926, aerial) – were once also accepted as having reached the Pole. However, in each case later analysis of expedition data has cast doubt upon the accuracy of their claims.

The first verified attainment of the North Pole on the ground came in 1948, by a 24-man Soviet party, part of Aleksandr Kuznetsov's Sever-2 expedition to the Arctic, which flew part-way to the Pole before making the final trek on foot. The first complete land expedition to reach the North Pole was in 1968 by Ralph Plaisted, Walt Pederson, Gerry Pitzl and Jean-Luc Bombardier, using snowmobiles and with air support.

Precise definition

The Earth's axis of rotation – and hence the position of the North Pole – was commonly believed to be fixed (relative to the surface of the Earth) until, in the 18th century, the mathematician Leonhard Euler predicted that the axis might "wobble" slightly. Around the beginning of the 20th century, astronomers noticed a small apparent "variation of latitude", as determined for a fixed point on Earth from the observation of stars. Part of this variation could be attributed to a wandering of the Pole across the Earth's surface, by a range of a few metres. The wandering has several periodic components and an irregular component. The component with a period of about 435 days is identified with the wandering predicted by Euler and is now called the Chandler wobble after its discoverer. The exact point of intersection of the Earth's axis and the Earth's surface, at any given moment, is called the "instantaneous pole", but because of the "wobble" this cannot be used as a definition of a fixed North Pole (or South Pole) when metre-scale precision is required.
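To make the wandering concrete, here is an illustrative-only Python sketch that superposes two circular components of polar motion, an annual term and the ~435-day Chandler term. The metre-scale amplitudes are assumed values chosen only for illustration, not measured ones, and real polar motion also includes an irregular part:

import math

ANNUAL_PERIOD_DAYS = 365.25
CHANDLER_PERIOD_DAYS = 435.0
ANNUAL_AMPLITUDE_M = 3.0     # assumption, metre scale only
CHANDLER_AMPLITUDE_M = 4.0   # assumption, metre scale only

def instantaneous_pole_offset(day: float) -> tuple[float, float]:
    """Toy (x, y) offset in metres of the instantaneous pole from its
    mean position, with day counted from an arbitrary epoch."""
    a1 = 2 * math.pi * day / ANNUAL_PERIOD_DAYS
    a2 = 2 * math.pi * day / CHANDLER_PERIOD_DAYS
    x = ANNUAL_AMPLITUDE_M * math.cos(a1) + CHANDLER_AMPLITUDE_M * math.cos(a2)
    y = ANNUAL_AMPLITUDE_M * math.sin(a1) + CHANDLER_AMPLITUDE_M * math.sin(a2)
    return x, y

for day in (0, 500, 1000):
    print(day, instantaneous_pole_offset(day))

Because the two periods differ slightly, the combined wobble beats with a period of roughly 365.25 × 435 / (435 − 365.25) ≈ 2,280 days (about 6.2 years), so the instantaneous pole traces a slowly pulsating spiral rather than a fixed circle.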

It is desirable to tie the system of Earth coordinates (latitude, longitude, and elevations or orography) to fixed landforms. However, given plate tectonics and isostasy, there is no system in which all geographic features are fixed. Yet the International Earth Rotation and Reference Systems Service and the International Astronomical Union have defined a framework called the International Terrestrial Reference System.

Exploration:

Pre-1900

As early as the 16th century, many prominent people correctly believed that the North Pole was in a sea, which in the 19th century was called the Polynya or Open Polar Sea. It was therefore hoped that passage could be found through ice floes at favorable times of the year. Several expeditions set out to find the way, generally with whaling ships, already commonly used in the cold northern latitudes.

One of the earliest expeditions to set out with the explicit intention of reaching the North Pole was that of British naval officer William Edward Parry, who in 1827 reached latitude 82°45′ North. In 1871, the Polaris expedition, a US attempt on the Pole led by Charles Francis Hall, ended in disaster. Another British Royal Navy attempt, part of the British Arctic Expedition and led by Commander Albert H. Markham, reached a then-record 83°20′26″ North in May 1876 before turning back. An 1879–1881 expedition commanded by US naval officer George W. De Long ended tragically when their ship, the USS Jeannette, was crushed by ice. Over half the crew, including De Long, were lost.

In April 1895, the Norwegian explorers Fridtjof Nansen and Hjalmar Johansen struck out for the Pole on skis after leaving Nansen's icebound ship Fram. The pair reached latitude 86°14′ North before they abandoned the attempt and turned southwards, eventually reaching Franz Josef Land.

In 1897, Swedish engineer Salomon August Andrée and two companions tried to reach the North Pole in the hydrogen balloon Örnen ("Eagle"), but came down 300 km (190 mi) north of Kvitøya, the northeasternmost part of the Svalbard archipelago. They trekked to Kvitøya but died there three months after their crash. In 1930 the remains of this expedition were found by the Norwegian Bratvaag Expedition.

The Italian explorer Luigi Amedeo, Duke of the Abruzzi, and Captain Umberto Cagni of the Italian Royal Navy (Regia Marina) sailed the converted whaler Stella Polare ("Pole Star") from Norway in 1899. On 11 March 1900, Cagni led a party over the ice, reaching latitude 86°34′ North on 25 April and setting a new record by beating Nansen's 1895 result by 35 to 40 km (22 to 25 mi). Cagni barely managed to return to the camp, remaining there until 23 June. On 16 August, the Stella Polare left Rudolf Island heading south and the expedition returned to Norway.

1900–1940

The US explorer Frederick Cook claimed to have reached the North Pole on 21 April 1908 with two Inuit men, Ahwelah and Etukishook, but he was unable to produce convincing proof and his claim is not widely accepted.

The conquest of the North Pole was for many years credited to US Navy engineer Robert Peary, who claimed to have reached the Pole on 6 April 1909, accompanied by Matthew Henson and four Inuit men, Ootah, Seeglo, Egingwah, and Ooqueah. However, Peary's claim remains highly disputed and controversial. Those who accompanied Peary on the final stage of the journey were not trained in navigation, and thus could not independently confirm his navigational work, which some claim to have been particularly sloppy as he approached the Pole.

The distances and speeds that Peary claimed to have achieved once the last support party turned back seem incredible to many: almost three times the pace he had managed up to that point. Peary's account of a journey to the Pole and back while traveling along a direct line – the only strategy consistent with the time constraints he faced – is contradicted by Henson's account of tortuous detours to avoid pressure ridges and open leads.

The British explorer Wally Herbert, initially a supporter of Peary, examined Peary's records in 1989 and found significant discrepancies in the explorer's navigational work. He concluded that Peary had not reached the Pole. Support for Peary came again in 2005, however, when British explorer Tom Avery and four companions recreated the outward portion of Peary's journey with replica wooden sleds and Canadian Eskimo Dog teams, reaching the North Pole in 36 days, 22 hours – nearly five hours faster than Peary. However, Avery's fastest five-day march covered 90 nautical miles (170 km), significantly short of the 135 nautical miles (250 km) claimed by Peary. Avery writes on his web site that "The admiration and respect which I hold for Robert Peary, Matthew Henson and the four Inuit men who ventured North in 1909, has grown enormously since we set out from Cape Columbia. Having now seen for myself how he travelled across the pack ice, I am more convinced than ever that Peary did indeed discover the North Pole."

The first claimed flight over the Pole was made on 9 May 1926 by US naval officer Richard E. Byrd and pilot Floyd Bennett in a Fokker tri-motor aircraft. Although verified at the time by a committee of the National Geographic Society, the claim has since been undermined by the 1996 revelation that the solar sextant data in Byrd's long-hidden diary (which the NGS never checked) consistently contradict the corresponding data in his June 1926 report by over 100 mi (160 km). The report's en-route solar sextant data were so implausibly overprecise that Byrd excised all of the raw solar observations from the version of the report finally sent to geographical societies five months later, while the original version remained hidden for 70 years; this analysis was first published in 2000 by the University of Cambridge after scrupulous refereeing.

The first consistent, verified, and scientifically convincing attainment of the Pole was on 12 May 1926, by Norwegian explorer Roald Amundsen and his US sponsor Lincoln Ellsworth from the airship Norge. Norge, though Norwegian-owned, was designed and piloted by the Italian Umberto Nobile. The flight started from Svalbard in Norway, and crossed the Arctic Ocean to Alaska. Nobile, with several scientists and crew from the Norge, overflew the Pole a second time on 24 May 1928, in the airship Italia. The Italia crashed on its return from the Pole, with the loss of half the crew.

Another transpolar flight was accomplished in a Tupolev ANT-25 airplane by a crew of Valery Chkalov, Georgy Baydukov and Alexander Belyakov, who flew over the North Pole on 19 June 1937 during their nonstop flight from the Soviet Union to the United States.

Ice station

In May 1937 the world's first North Pole ice station, North Pole-1, was established by Soviet scientists 20 kilometres (13 mi) from the North Pole, after the first-ever landing of four heavy aircraft and one light aircraft onto the ice at the North Pole. The expedition members — oceanographer Pyotr Shirshov, meteorologist Yevgeny Fyodorov, radio operator Ernst Krenkel, and the leader Ivan Papanin — conducted scientific research at the station for the next nine months. By 19 February 1938, when the group was picked up by the ice breakers Taimyr and Murman, their station had drifted 2,850 km to the eastern coast of Greenland.

1940–2000

In May 1945 an RAF Lancaster of the Aries expedition became the first Commonwealth aircraft to overfly the North Geographic and North Magnetic Poles. The plane was piloted by David Cecil McKinley of the Royal Air Force. It carried an 11-man crew, with Kenneth C. Maclure of the Royal Canadian Air Force in charge of all scientific observations. In 2006, Maclure was honoured with a spot in Canada's Aviation Hall of Fame.

Discounting Peary's disputed claim, the first men to set foot at the North Pole were a Soviet party including geophysicists Mikhail Ostrekin and Pavel Senko, oceanographers Mikhail Somov and Pavel Gordienko, and other scientists and flight crew (24 people in total) of Aleksandr Kuznetsov's Sever-2 expedition (March–May 1948). It was organized by the Chief Directorate of the Northern Sea Route. The party flew on three planes (pilots Ivan Cherevichnyy, Vitaly Maslennikov and Ilya Kotov) from Kotelny Island to the North Pole and landed there at 4:44pm (Moscow Time, UTC+04:00) on 23 April 1948. They established a temporary camp and for the next two days conducted scientific observations. On 26 April the expedition flew back to the continent.

The next year, on 9 May 1949, two other Soviet scientists (Vitali Volovich and Andrei Medvedev) became the first people to parachute onto the North Pole. They jumped from a Douglas C-47 Skytrain, registered CCCP H-369.

On 3 May 1952, U.S. Air Force Lieutenant Colonel Joseph O. Fletcher and Lieutenant William Pershing Benedict, along with scientist Albert P. Crary, landed a modified Douglas C-47 Skytrain at the North Pole. Some Western sources considered this to be the first landing at the Pole until the Soviet landings became widely known.

The United States Navy submarine USS Nautilus (SSN-571) crossed the North Pole on 3 August 1958. On 17 March 1959 USS Skate (SSN-578) surfaced at the Pole, breaking through the ice above it, becoming the first naval vessel to do so.

The first confirmed surface conquest of the North Pole was accomplished by Ralph Plaisted, Walt Pederson, Gerry Pitzl and Jean Luc Bombardier, who traveled over the ice by snowmobile and arrived on 19 April 1968. The United States Air Force independently confirmed their position.

On 6 April 1969 Wally Herbert and companions Allan Gill, Roy Koerner and Kenneth Hedges of the British Trans-Arctic Expedition became the first men to reach the North Pole on foot (albeit with the aid of dog teams and airdrops). They continued on to complete the first surface crossing of the Arctic Ocean – and by its longest axis, Barrow, Alaska, to Svalbard – a feat that has never been repeated. Because of suggestions (later proven false) of Plaisted's use of air transport, some sources classify Herbert's expedition as the first confirmed to reach the North Pole over the ice surface by any means. In the 1980s Plaisted's pilots Weldy Phipps and Ken Lee signed affidavits asserting that no such airlift was provided. It is also said that Herbert was the first person to reach the pole of inaccessibility.

On 17 August 1977 the Soviet nuclear-powered icebreaker Arktika completed the first surface vessel journey to the North Pole.

In 1982 Ranulph Fiennes and Charles R. Burton became the first people to cross the Arctic Ocean in a single season. They departed from Cape Crozier, Ellesmere Island, on 17 February 1982 and arrived at the geographic North Pole on 10 April 1982. They travelled on foot and by snowmobile. From the Pole, they travelled towards Svalbard but, due to the unstable nature of the ice, ended their crossing at the ice edge after drifting south on an ice floe for 99 days. They were eventually able to walk to their expedition ship MV Benjamin Bowring and boarded it on 4 August 1982 at position 80°31′N 00°59′W. As a result of this journey, which formed a section of the three-year Transglobe Expedition 1979–1982, Fiennes and Burton became the first people to complete a circumnavigation of the world via both North and South Poles, by surface travel alone. This achievement remains unchallenged to this day. The expedition crew included a Jack Russell Terrier named Bothie, who became the first dog to visit both poles.

In 1985 Sir Edmund Hillary (the first man to stand on the summit of Mount Everest) and Neil Armstrong (the first man to stand on the moon) landed at the North Pole in a small twin-engined ski plane. Hillary thus became the first man to stand at both poles and on the summit of Everest.

In 1986 Will Steger, with seven teammates, became the first to be confirmed as reaching the Pole by dogsled and without resupply.

USS Gurnard (SSN-662) operated in the Arctic Ocean under the polar ice cap from September to November 1984 in company with one of her sister ships, the attack submarine USS Pintado (SSN-672). On 12 November 1984 Gurnard and Pintado became the third pair of submarines to surface together at the North Pole. In March 1990, Gurnard deployed to the Arctic region during exercise Ice Ex '90 and completed only the fourth winter submerged transit of the Bering and Chukchi Seas. Gurnard surfaced at the North Pole on 18 April, in the company of USS Seahorse (SSN-669).

On 6 May 1986 USS Archerfish (SSN-678), USS Ray (SSN-653) and USS Hawkbill (SSN-666) surfaced at the North Pole, the first tri-submarine surfacing at the North Pole.

On 21 April 1987 Shinji Kazama of Japan became the first person to reach the North Pole on a motorcycle.

On 18 May 1987 USS Billfish (SSN-676), USS Sea Devil (SSN-664) and HMS Superb (S109) surfaced at the North Pole, the first international surfacing at the North Pole.

In 1988 a team of 13 (9 Soviets, 4 Canadians) skied across the Arctic from Siberia to northern Canada. One of the Canadians, Richard Weber, became the first person to reach the Pole from both sides of the Arctic Ocean.

On April 16, 1990, a German-Swiss expedition led by a team from the University of Giessen reached the Geographic North Pole for studies on the pollution of pack ice, snow and air. Samples taken were analyzed in cooperation with the Geological Survey of Canada and the Alfred Wegener Institute for Polar and Marine Research. Further stops for sample collection were made on multi-year sea ice at 86°N, at Cape Columbia, and at Ward Hunt Island.

On 4 May 1990 Børge Ousland and Erling Kagge became the first explorers ever to reach the North Pole unsupported, after a 58-day ski trek from Ellesmere Island in Canada, a distance of 800 km.

On 7 September 1991 the German research vessel Polarstern and the Swedish icebreaker Oden reached the North Pole as the first conventionally powered vessels to do so. Scientific parties and crew from both ships took oceanographic and geological samples and held a joint tug-of-war and a football game on an ice floe. Polarstern reached the pole again exactly 10 years later, together with USCGC Healy.

In 1998, 1999, and 2000, Lada Niva Marsh vehicles (special very-large-wheeled versions made by BRONTO, Lada/VAZ's experimental product division) were driven to the North Pole. The 1998 expedition was parachute-dropped and completed the trek to the North Pole. The 2000 expedition departed from a Russian research base around 114 km from the Pole and claimed an average speed of 15–20 km/h in an average temperature of −30 °C.

21st century

Commercial airliner flights on polar routes may pass within viewing distance of the North Pole. For example, a flight from Chicago to Beijing may pass as close as latitude 89° N, though because of prevailing winds return journeys go over the Bering Strait. In recent years journeys to the North Pole by air (landing by helicopter or on a runway prepared on the ice) or by icebreaker have become relatively routine, and are even available to small groups of tourists through adventure holiday companies. Parachute jumps have frequently been made onto the North Pole in recent years. The temporary seasonal Russian camp of Barneo has been established by air a short distance from the Pole annually since 2002, and caters for scientific researchers as well as tourist parties. Trips from the camp to the Pole itself may be arranged overland or by helicopter.

The first attempt at underwater exploration of the North Pole was made on 22 April 1998 by Russian firefighter and diver Andrei Rozhkov with the support of the Diving Club of Moscow State University, but it ended in his death. The next dive at the North Pole was organized by the same diving club the following year and completed successfully on 24 April 1999. The divers were Michael Wolff (Austria), Brett Cormick (UK), and Bob Wass (USA).

In 2005 the United States Navy submarine USS Charlotte (SSN-766) surfaced through 155 cm (61 in) of ice at the North Pole and spent 18 hours there.

In July 2007 British endurance swimmer Lewis Gordon Pugh completed a 1 km (0.62 mi) swim at the North Pole. His feat, undertaken to highlight the effects of global warming, took place in clear water that had opened up between the ice floes. His later attempt to paddle a kayak to the North Pole in late 2008, following an erroneous prediction of clear water to the Pole, was stymied when his expedition found itself stuck in thick ice after only three days, and the attempt was abandoned.

By September 2007 the North Pole had been visited 66 times by different surface ships: 54 times by Soviet and Russian icebreakers, 4 times by Swedish Oden, 3 times by German Polarstern, 3 times by USCGC Healy and USCGC Polar Sea, and once by CCGS Louis S. St-Laurent and by Swedish Vidar Viking.

2007 descent to the North Pole seabed

On 2 August 2007 the Russian scientific expedition Arktika 2007 made the first manned descent to the ocean floor at the North Pole, to a depth of 4.3 km (2.7 mi), as part of a research programme supporting Russia's 2001 extended continental shelf claim to a large swathe of the Arctic Ocean floor. The descent took place in two MIR submersibles and was led by Soviet and Russian polar explorer Artur Chilingarov. In a symbolic act of visitation, the Russian flag was placed on the ocean floor exactly at the Pole.

According to The New York Times, the expedition was the latest in a series of efforts intended to give Russia a dominant influence in the Arctic.

MLAE 2009 Expedition

In 2009 the Russian Marine Live-Ice Automobile Expedition (MLAE-2009), led by Vasily Elagin with a team of Afanasy Makovnev, Vladimir Obikhod, Alexey Shkrabkin, Sergey Larin, Alexey Ushakov and Nikolay Nikulshin, reached the North Pole on two custom-built 6×6 low-pressure-tire ATVs. The vehicles, Yemelya-1 and Yemelya-2, were designed by Vasily Elagin, a Russian mountain climber, explorer and engineer. They reached the Pole at 17:30 (Moscow time) on 26 April 2009. The expedition was partly supported by Russian State Aviation. The Russian Book of Records recognized it as the first successful vehicle trip from land to the Geographical North Pole.

MLAE 2013 Expedition

On 1 March 2013 the Russian Marine Live-Ice Automobile Expedition (MLAE 2013), with Vasily Elagin as leader and a team of Afanasy Makovnev, Vladimir Obikhod, Alexey Shkrabkin, Andrey Vankov, Sergey Isayev and Nikolay Kozlov, set out on two custom-built 6×6 low-pressure-tire ATVs—Yemelya-3 and Yemelya-4—from Golomyanny Island (in the Severnaya Zemlya Archipelago) for the North Pole, across the drifting ice of the Arctic Ocean. The vehicles reached the Pole on 6 April and then continued to the Canadian coast. The coast was reached on 30 April 2013 (Ward Hunt Island, 83°08′N 075°59′W), and on 5 May 2013 the expedition finished in Resolute Bay, NU. The journey between the Russian borderland (Machtovyi Island in the Severnaya Zemlya Archipelago, 80°15′N 097°27′E) and the Canadian coast (Ward Hunt Island) took 55 days; it covered about 2,300 km across drifting ice and about 4,000 km in total. The expedition was entirely self-sufficient and used no external supplies. It was supported by the Russian Geographical Society.

Day and night

The sun at the North Pole is continuously above the horizon during the summer and continuously below the horizon during the winter. Sunrise is just before the March equinox (around 20 March); the Sun then takes three months to reach its highest point of nearly 23½° elevation at the summer solstice (around 21 June), after which it begins to sink, reaching sunset just after the September equinox (around 23 September). When the Sun is visible in the polar sky, it appears to move in a horizontal circle above the horizon. This circle gradually rises from near the horizon just after the vernal equinox to its maximum elevation above the horizon at the summer solstice, then sinks back toward the horizon before dropping below it at the autumnal equinox. Hence the North and South Poles experience the slowest rates of sunrise and sunset on Earth.

The twilight period that occurs before sunrise and after sunset has three different definitions:

* a civil twilight period of about two weeks;
* a nautical twilight period of about five weeks; and
* an astronomical twilight period of about seven weeks.

These effects are caused by a combination of the Earth's axial tilt and its revolution around the Sun. The direction of the Earth's axial tilt, as well as its angle relative to the plane of the Earth's orbit around the Sun, remains very nearly constant over the course of a year (both change very slowly over long time periods). At northern midsummer the North Pole is facing towards the Sun to its maximum extent. As the year progresses and the Earth moves around the Sun, the North Pole gradually turns away from the Sun until at midwinter it is facing away from the Sun to its maximum extent. A similar sequence is observed at the South Pole, with a six-month time difference.
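These figures can be sanity-checked with a little arithmetic. At the poles the Sun's elevation is essentially equal to its declination, so a simple sinusoidal declination model reproduces both the ~23.5° solstice maximum and the rough twilight durations listed above. A minimal Python sketch, assuming the standard twilight depths of 6°, 12° and 18° below the horizon (the usual civil, nautical and astronomical definitions) and a plain sine approximation rather than a real ephemeris:

import math

TILT_DEG = 23.44  # Earth's axial tilt

def polar_sun_elevation(days_after_march_equinox: float) -> float:
    """Approximate solar elevation at the North Pole: at the poles the
    Sun's elevation is essentially its declination, modeled here with a
    simple sine (a common approximation, not an ephemeris)."""
    return TILT_DEG * math.sin(2 * math.pi * days_after_march_equinox / 365.25)

print(f"elevation at summer solstice: {polar_sun_elevation(365.25 / 4):.1f} deg")

# Near the equinox the declination changes by about
# 23.44 * 2*pi / 365.25 ~ 0.40 deg per day, so the twilight bands below
# the horizon translate into rough durations:
rate = TILT_DEG * 2 * math.pi / 365.25  # deg/day near the equinox
for name, depth in (("civil", 6), ("nautical", 12), ("astronomical", 18)):
    days = depth / rate
    print(f"{name:>12} twilight: ~{days:.0f} days (~{days / 7:.1f} weeks)")

The linear estimate gives about 2.1, 4.3 and 6.4 weeks. The nautical and astronomical figures come out slightly short of the five and seven weeks quoted above because the declination rate slows as the Sun moves further below the horizon, stretching the deeper twilight periods.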

Since longitude is undefined at the North Pole, the exact time is a matter of convention. Polar expeditions use whatever time is most convenient, such as Greenwich Mean Time or the time zone of their country of origin.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2432 2025-01-21 16:56:33

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,153

Re: Miscellany

2332) Veggie Burger

Gist

Commercially available veggie burgers may contain vegetable protein (derived from beans, soy, peas or another source) and other vegetables such as carrots, squash, mushrooms, peppers, beets, water chestnuts or onions.

A veggie burger is a food that looks like a hamburger but that is made with vegetables instead of meat.

Details

A veggie burger or meatless burger is a hamburger made with a patty that does not contain meat, or the patty of such a hamburger. The patty may be made from ingredients like beans (especially soybeans and tofu), nuts, grains, seeds, or fungi such as mushrooms or mycoprotein.

The essence of the veggie burger patty has existed in various Eurasian cuisines for millennia, including in the form of grilled or fried meatless discs, or as koftas, a commonplace item in Indian cuisine. These may be made of entirely vegetarian ingredients such as legumes or other plant-derived proteins.

Preparation

Whilst commercial brands of veggie burger are widespread, hundreds of recipes aimed at the home cook exist online and in cookbooks, based on cereal grains, nuts, seeds, breadcrumbs, beans or textured soya protein, with starchy flours or flaxseed meal to stabilize the mix. Recipes offer a variety of flavors and textures, often containing herbs, spices and ingredients like tamari or nutritional yeast to increase the umami taste. Desirable characteristics include mouthfeel, a seared surface, crunch, chewiness, spiciness and resistance to crumbling. Like a meat burger, they can be pan-fried, grilled, barbecued or oven-cooked. Some are designed to be eaten in a toasted bun or brioche, with accompaniments similar to a traditional meat burger's, such as tomato slices, onion rings, dill pickled cucumber, mayonnaise, mustard and ketchup. Others are stand-alone patties eaten with other vegetables, salad or a dipping sauce. Home-produced veggie burgers can be frozen and stored, just like commercial varieties.

Commercial brands

Products include dried mixes to which water is added before cooking, or ready-made burgers, often found in the store chiller or freezer compartments. Some popular brands of veggie burger include the Boca Burger, the Gardenburger, Morningstar Farms, and Quorn. In the 2010s, realistic meat-like burgers were developed, led by the companies Beyond Meat and Impossible Foods.

Origin

There have been numerous claims of invention of the veggie burger. The dish, by name, may have been created in London in 1982 by Gregory Sams, who called it the 'VegeBurger'. Sams and his brother Craig had run a natural food restaurant in Paddington since the 1960s; a Carrefour hypermarket in Southampton sold 2,000 packets in the three weeks after its launch. An earlier reference can be heard in the 7 June 1948 episode of the American radio drama series Let George Do It called "The Mister Mirch Case", where a character refers to "vegeburgers" as burgers made of nuts and legumes.

Using the name Gardenburger, an early veggie burger was developed by Paul Wenner around 1980 or 1981 in Wenner's vegetarian restaurant, The Gardenhouse, in Gresham, Oregon.

Restaurants

Some fast food companies have increasingly offered vegetarian options since the beginning of the 21st century.

India

In India where vegetarianism is widespread, McDonald's, Burger King, Wendy's and KFC serve veggie burgers. In 2012, McDonald's opened its first vegetarian-only restaurant in India. A popular type of burger is the Vada pav, also known as the Bombay burger. It originated in or near the city of Mumbai and consists of a fritter (vada), cooked with potatoes mixed with green chilis and various spices, enclosed in a bread roll (pav).

United States

Burger King (BK) introduced a veggie burger in 2002, the first to be made available nationally in the U.S. They have also sold veggie burgers in their Australian franchise, Hungry Jack's. In 2019, BK rolled out the Impossible Whopper as a veggie burger that realistically imitates their signature beef-based Whopper hamburger.

Veggie burgers have been sold in certain Subway and Harvey's locations, as well as in many chain restaurants, such as Red Robin, Chili's, Denny's, Friendly's, Culver's, Johnny Rockets, and Hard Rock Cafe. Occasionally the veggie burger option will appear at the bottom of a menu as a possible substitution for beef or turkey burgers, rather than as an individual menu item.

McDonald's

Different kinds of veggie burgers, including the vegetarian McVeggie, the vegan McVegan, and the McPlant, are also served permanently in McDonald's restaurants in:

* India (McVeggie, consisting of a fried, breaded patty of ground vegetables, with lettuce and ketchup, in a wholewheat, sesame or focaccia bun)
* Bahrain
* Cheung Chau, Hong Kong (McVeggie, in Cheung Chau Bun Festival)
* Egypt (McFalafel, consisting of a falafel patty with tomato, lettuce and tahini sauce)
* Finland (McVegan)
* Germany (since February 2010; McDonald's Germany, the company's fourth-biggest global market, serves veggie burgers in all its restaurants)
* Greece (McVeggie, consisting of a breaded and fried vegetable patty with tomato, iceberg lettuce and ketchup, in a sesame bun)
* Malaysia
* The Netherlands (Groentenburger, "vegetable burger")
* Portugal (McVeggie, since November 2016)
* New Zealand (McVeggie, since December 2019)
* Sweden (McVegan)
* Switzerland (Vegi Mac)
* United Arab Emirates
* United Kingdom (McPlant)

Manufacturing process

Manufacturing often follows certain steps. One commercial recipe runs as follows.

The grains and vegetables used in the patties are first washed thoroughly to remove dirt, bacteria, chemical residues, and other materials that may be on the raw products. This can be done by hand or by machinery such as high-pressure sprayers: the food moves along a conveyor belt under a high-pressure sprayer that washes off the debris. Alternatively, a hollow drum circulates the food while water is sprayed onto it.

Next, a steam-heated mixer is used to cook the grain and remove any remaining debris and excess water. The mixer typically contains oils (such as safflower oil). As the oil simmers, the grains are gradually added and the blades mix them. The steam created in the mixer cooks the grains, resulting in a puree.

Next the vegetables are cut up into smaller pieces to allow more surface area for cooking purposes. This can be done by hand or through the use of machines in factories.

The vegetables are then added to the grain mixture in the steam-heated mixer. The exact ratio of grains to vegetables is unique to each company, producing different textures and tastes.

As the vegetables cook in the mixer, their natural sugars are released, resulting in caramelization. The sweet flavors created by this caramelization are blended in uniformly. This caramelized vegetable base is known as a mirepoix, and it is very important to the production of veggie burgers, as it adds both texture and flavor to the patty.


The mirepoix mixture is then placed into another mixing tub, where dry ingredients such as oats, walnuts, and potato flakes are added. The mixture is folded together to make a uniform mix. The moisture from the vegetables makes the mixture sticky, so it clumps together like cookie dough. This is important, as it allows the mixture to hold together when formed into a patty.

The mixture is now put into an automatic patty-making machine or press, which punches out disc-shaped patties onto a conveyor belt underneath. A constant spray of water may be used to prevent the mixture from sticking to machinery parts. From the conveyor, the patties move along to be placed onto baking trays.

The patties are inspected to make sure they have the correct shape, size, and texture for a high-quality product. The trays are then placed in a heated convection oven and baked at a designated temperature for a designated time.

Once out of the oven, the patties are quickly frozen using techniques such as individual quick freezing and cryogenic freezing. These methods freeze the patties within 30 minutes, locking in nutrients and preserving texture through the formation of many small ice crystals.

The frozen patties are again placed on a conveyor belt that takes them to a vacuum-packaging machine. The machine seals the patties into measured plastic sleeves and draws out any excess air. The packages are then loaded into printed cardboard boxes, either by another machine or manually. The flaps on the box are sealed closed, and the product is kept in temperature-controlled storage before, during, and after delivery to grocery stores.

Purpose of ingredients:

Grains

Grains are primarily used in the manufacture of veggie burgers to act as a meat substitute. Grains such as rice and wheat provide carbohydrates and protein and give bulk to the patty. They also give the burger texture, which can change depending on the type of grain used. Texture and appearance matter because manufacturers want the patty to resemble a beef patty.

Vegetables

Vegetables, such as corn, carrots, and mushrooms, provide texture and taste. They also provide moisture when heated, which helps the patty keep its disc shape without breaking apart. The vegetables also contribute nutrients, adding some vitamins and minerals.

Dry ingredients

Dry ingredients, such as oats, flours, nuts, or breadcrumbs, absorb excess moisture and liquid, which helps the patty stick together tightly. They give the moist veggie mixture a sticky consistency, which also helps the patties form easily. Dry ingredients provide protein and fiber, adding nutritional value to the patty. Some, such as walnuts and almonds, are also rich in energy, vitamins and minerals.

Stabilizers

Tapioca starch and vegetable gum are two common stabilizers in veggie burgers. Tapioca starch is often used as a thickening agent because it is inexpensive. It becomes sticky when wet, which helps hold the patty tightly together. Vegetable gum likewise helps bind everything in the patty.

Oils

Oils, such as safflower, coconut, and olive oil, lubricate the grain mix and allow further cooking once the wheat is added. This facilitates the Maillard reaction and brings out the flavors of the veggie burger. Oils also prevent the ingredients from sticking to the mixing machine, allowing them to be mixed and heated together evenly.

Salt

Salt is typically used for flavor and may also act as a preservative in veggie burgers. Salt reduces the water activity of the food, which helps prevent the growth of micro-organisms and prolongs the shelf life of the product.

Naming

In October 2020, the EU rejected an amendment proposed by the Committee on Agriculture and Rural Development which, if passed, would have resulted in companies being forced to call veggie burgers by the term "veggie discs".



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2433 2025-01-21 20:44:34

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,153

Re: Miscellany

2333) South Pole

Gist

South Pole is the southern end of the Earth’s axis, lying in Antarctica, about 300 miles (480 km) south of the Ross Ice Shelf. This geographic South Pole does not coincide with the magnetic South Pole—away from which magnetic compasses point—which lies on the Adélie Coast (at about 66°00′ S, 139°06′ E; the magnetic pole moves about 8 miles [13 km] to the northwest each year). Nor does it coincide with the geomagnetic South Pole, the southern end of the Earth’s geomagnetic field (this pole also moves; during the early 1990s it was located about 79°13′ S, 108°44′ E). The geographic pole, at an elevation of some 9,300 feet (2,830 metres; the elevation also changes constantly) above sea level, has six months of complete daylight and six months of total darkness each year. Ice thickness is 8,850 feet (2,700 metres). First reached by the Norwegian explorer Roald Amundsen on December 14, 1911, the pole was reached the following year by the British explorer Robert F. Scott and in 1929 by the American explorer Richard E. Byrd. After reaching the South Pole on January 11, 1986, the British explorer Robert Swan led an expedition to the North Pole, reaching his destination on May 14, 1989, and thereby becoming the first person to walk to both poles. The South Pole is the site of a U.S. station and landing strip (Amundsen-Scott); owing to the movement of the polar ice cap, a new location of the exact rotational pole is marked periodically by station personnel.

Summary

The South Pole is the southern end of Earth’s axis. The axis is an imaginary line through the center of Earth around which the planet rotates. The South Pole is located in Antarctica.

Geographic and Magnetic Poles

In the geographic system of latitude and longitude, the South Pole is at 90° south. All the lines of longitude run between it and the North Pole.

This geographic South Pole is not the same as the magnetic South Pole. Compasses point away from the magnetic South Pole, toward the magnetic North Pole. Although the geographic poles are fixed, the magnetic poles move slowly over time. The magnetic South Pole is now on the Adélie Coast, in the part of eastern Antarctica that is across the ocean from Australia.

A Year at the Pole

The geographic South Pole does not experience seasons, days, or nights like most other places on Earth. At the pole, six months of darkness, or winter, follow six months of daylight, or summer. The Sun rises on about September 21. It appears to move in a circle until it sets on about March 22. For the other half of the year, the South Pole is dark. This phenomenon happens because, as Earth revolves around the Sun, Earth’s axis stays tilted at the same angle. During its six months of summer, the South Pole points toward the Sun. During its six months of winter, it points away from the Sun.

Exploration and Study

Antarctica is a difficult place to explore. The first efforts to do so and to reach the South Pole began in the early 1900s. Ernest Henry Shackleton, an Irish-born British explorer, almost reached the South Pole in 1909. The first people to succeed were the Norwegian explorer Roald Amundsen and his four companions. They reached the pole on December 14, 1911. The British explorer Robert F. Scott had hoped to beat Amundsen to the pole, but he did not arrive until January 17, 1912. Scott and his men died on their return trip. U.S. explorer Richard E. Byrd made the first flight over the South Pole on November 29, 1929.

In 1957–58 British explorer Vivian Fuchs led the first crossing of Antarctica by way of the pole. The group set out in tracked vehicles in November 1957. After making it to the South Pole, the group reached the opposite coast in March 1958.

In late 1956 a U.S. Navy team began building a station at the South Pole. The facility—called the Amundsen-Scott South Pole Station—was improved in the 1970s and early 2000s. Scientists now live there year-round.

Details

The South Pole is the southernmost point on Earth. It is the precise point of the southern intersection of Earth's axis and Earth's surface.

From the South Pole, all directions are north. Its latitude is 90 degrees south, and all lines of longitude meet there (as well as at the North Pole).

The South Pole is located on Antarctica, one of Earth's seven continents. Although land at the South Pole is only about a hundred meters above sea level, the ice sheet above it is roughly 2,700-meters (9,000-feet) thick. This elevation makes the South Pole much colder than the North Pole, which sits in the middle of the Arctic Ocean. In fact, the warmest temperature ever recorded at the South Pole was a freezing -12.3 degrees Celsius (9.9 degrees Fahrenheit).

The South Pole is close to the coldest place on Earth. The coldest temperature recorded at the South Pole, -82.8 degrees Celsius (-117.0 degrees Fahrenheit), is still warmer than the coldest temperature ever recorded, -89.2 degrees Celsius (-128.6 degrees Fahrenheit). That temperature was recorded at the Russian Vostok Research Station, about 1,300 kilometers (808 miles) away.

Because Earth rotates on a tilted axis as it revolves around the sun, sunlight is experienced in extremes at the poles. In fact, the South Pole experiences only one sunrise (at the September equinox) and one sunset (at the March equinox) every year. From the South Pole, the sun is always above the horizon in the summer and below the horizon in the winter. This means the region experiences up to 24 hours of sunlight in the summer and 24 hours of darkness in the winter.

Due to plate tectonics, the exact location of the South Pole is constantly moving. Plate tectonics is the process of large slabs of Earth's crust moving slowly around the planet, bumping into and pulling apart from one another.

Over billions of years, Earth's continents have shifted together and drifted apart. Millions of years ago, land that today is the east coast of South America was at the South Pole. Today, the ice sheet above the South Pole drifts about 10 meters (33 feet) every year.

Amundsen–Scott South Pole Station

Compared to the North Pole, the South Pole is relatively easy to travel to and study. The North Pole is in the middle of the Arctic Ocean, while the South Pole is on a stable piece of land.

The United States has had scientists working at Amundsen–Scott South Pole Station since 1956. Between 50 and 200 scientists and support staff live at this research station at any given time. The station itself does not sit directly on the ground or ice sheet: it can adjust its elevation to prevent being buried in snow, which accumulates at a rate of about 20 centimeters (eight inches) every year and does not melt.
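As a rough sense of scale, the accumulation figure above implies that a fixed structure would steadily bury itself. A minimal arithmetic sketch in Python, assuming the ~20 cm/year rate quoted and no melting:

# Projected snow accumulation at the assumed rate of 20 cm/year.
rate_m_per_year = 0.20
for years in (10, 25, 50):
    print(f"after {years:>2} years: ~{rate_m_per_year * years:.0f} m of accumulated snow")

That is roughly 2 m of snow per decade, which is why an adjustable-elevation design is cheaper than repeatedly rebuilding on top of the snow.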

In the winter, the Amundsen–Scott South Pole Station is completely self-sufficient. The dark sky, freezing temperatures, and gale-force winds prevent most supplies from being flown or trekked in. All food, medical supplies, and other material must be secured before the long Antarctic winter. The station's energy is provided by three enormous generators that run on jet fuel.

In winter, stores of food are supplemented by the Amundsen–Scott South Pole Station's greenhouse. Vegetables in the greenhouse are grown with hydroponics, in a nutrient solution instead of soil.

Some of the earliest discoveries made at South Pole research stations helped support the theory of continental drift, the idea that continents drift apart and shift together. Rock samples collected near the South Pole and throughout Antarctica match samples dated to the same time period collected at tropical latitudes. Geologists conclude that the samples formed at the same time and the same place, and were torn apart over millions of years, as the planet split into different continents.

Today, the Amundsen–Scott South Pole Station is host to a wide variety of research. The relatively undisturbed ice sheet maintains a pristine record of snowfalls, air quality, and weather patterns. Ice cores provide data for glaciologists, climatologists, and meteorologists, as well as scientists tracking patterns in climate change.

The South Pole has low temperatures, low humidity and high elevation, making it an outstanding place to study astronomy and astrophysics. The South Pole Telescope studies low-frequency radiation, such as microwaves and radio waves. It is one of the instruments designed to measure the cosmic microwave background (CMB), the faint, diffuse radiation left over from the Big Bang.

Astrophysicists also search for tiny particles called neutrinos at the South Pole. Neutrinos interact only very weakly with other matter, so neutrino detectors must be very large to catch a measurable number of the particles. The Amundsen–Scott South Pole Station's IceCube Neutrino Detector has more than 80 "strings" of sensors reaching as deep as 2,450 meters (8,038 feet) beneath the ice. It is the largest neutrino detector in the world.

Ecosystems at the South Pole

Although the Antarctic coast is teeming with marine life, few biologists conduct research at the Amundsen–Scott South Pole Station. The habitat is far too harsh for most organisms to survive.

In fact, the South Pole sits in the middle of the largest, coldest, driest, and windiest desert on Earth. More temperate parts of this desert (called either East Antarctica or Maudlandia) support native flora such as moss and lichen, and organisms such as mites and midges. The South Pole itself has no native plant or animal life at all. Sometimes, however, seabirds such as skuas can be spotted if they are blown off-course.

Exploration

The early 20th century's "Race to the Pole" stands as a symbol of the harrowing nature of polar exploration.

European and American explorers had attempted to reach the South Pole since British Capt. Robert Falcon Scott's expedition of 1904. Scott, along with fellow Antarctic explorers Ernest Shackleton and Edward Wilson, came within 660 kilometers (410 miles) of the pole, but turned back due to weather and inadequate supplies.

Shackleton and Scott were determined to reach the pole. Scott worked with scientists, intent on using the best techniques to gather data and collect samples.

Shackleton also conducted scientific surveys, although his expeditions were more narrowly focused on reaching the South Pole. He came within 160 kilometers (100 miles) of the pole in 1907, but again had to turn back due to weather.

Scott gathered public support and public funding for his 1910 Terra Nova expedition. He secured provisions and scientific equipment. In addition to the sailors and scientists on his team, the Terra Nova expedition also included tourists—guests who helped finance the voyage in exchange for taking part in it.

On the way to Antarctica, the Terra Nova expedition stopped in Australia to take on final supplies. Here, Scott received a surprising telegram from Norwegian explorer Roald Amundsen: "Beg leave to inform you Fram [Amundsen's ship] proceeding Antarctic."

Amundsen was apparently racing for the pole, ahead of Scott, but had kept all preparation secret. His initial ambition, to be the first to reach the North Pole, had been thwarted by American explorers Frederick Cook and Robert Peary, both of whom claimed to reach the North Pole first. (Both claims are now disputed, and Amundsen's flight over the North Pole is generally recognized as the first verified journey there.)

The Terra Nova and Fram expeditions arrived in Antarctica about the same time, in the middle of the Antarctic summer (January). They set up base camps about 640 kilometers (400 miles) apart. As they proceeded south, both expeditions established resupply depots with supplies for their return journey. While Scott's team stuck to a route forged by Shackleton years earlier, Amundsen took a new route.

Scott proceeded with scientific and expeditionary equipment hauled by dogs, ponies, and motor sledges. The motorized equipment soon broke down, and the ponies could not adapt to the harsh Antarctic climate. Even the sled dogs became weary. All the ponies died, and most members of the expedition turned back. Only four men from the Terra Nova expedition (including Scott's friend Wilson) proceeded with Scott to the pole.

Amundsen traveled by dogsled with a team of explorers, skiers, and mushers. His foresight and navigation paid off: Amundsen reached the pole in December 1911. He named the camp Polheim, and the entire Fram expedition returned safely to their resupply depots, their ship, and Norway.

More than a month later, Scott reached the South Pole, only to be met by Amundsen's camp—he had left a tent, equipment, and supplies for Scott, as well as a note for the King of Norway to be delivered if the Fram expedition failed to make it back.

Disheartened, Scott's team slowly headed back north. They faced colder temperatures and harsher weather than Amundsen's team, and they had fewer supplies. Suffering from hunger, hypothermia, and frostbite, all members of Scott's South Pole party died less than 18 kilometers (11 miles) from a resupply depot.

American explorer Richard E. Byrd became the first person to fly over the South Pole, in 1929, and the Amundsen–Scott South Pole Station was established in 1956.

However, the next overland expedition to the South Pole was not made until 1958, more than 40 years after Amundsen and Scott's deadly race. The 1958 expedition was led by legendary New Zealand mountaineer Sir Edmund Hillary, who had become the first person to scale Mount Everest in 1953.

Transportation to the South Pole

Almost all scientists and support personnel, as well as supplies, are flown in to the South Pole. Hardy military aircraft usually fly from McMurdo Station, an American facility on the Antarctic coast and the most populated area on the continent. The extreme and unpredictable weather around the pole can often delay flights.

In 2009, the U.S. completed construction of the South Pole Traverse. Also called the McMurdo-South Pole Highway, this stretch of unpaved road runs more than 1,600 kilometers (995 miles) over the Antarctic ice sheet, from McMurdo Station to the Amundsen–Scott South Pole Station. It takes about 40 days for supplies to reach the pole from McMurdo, but the route is far more reliable and inexpensive than air flights. The highway can also supply much heavier equipment (such as that needed by the South Pole's astrophysics laboratories) than aircraft.

Resources and Territorial Claims

The entire continent of Antarctica has no official political boundaries. Seven countries made defined claims to Antarctic territory prior to the Antarctic Treaty of 1959, which does not legally recognize any claims.

Additional Information

The South Pole, also known as the Geographic South Pole or Terrestrial South Pole, is the point in the Southern Hemisphere where the Earth's axis of rotation meets its surface. It is called the True South Pole to distinguish it from the Magnetic South Pole.

The South Pole is by definition the southernmost point on the Earth, lying antipodally to the North Pole. It defines geodetic latitude 90° South, as well as the direction of true south. At the South Pole all directions point north; all lines of longitude converge there, so its longitude can be defined as any degree value. No time zone has been assigned to the South Pole, so any time can be used as the local time. Along tight latitude circles, clockwise is east and counterclockwise is west. The South Pole is at the center of the Southern Hemisphere. Situated on the continent of Antarctica, it is the site of the United States Amundsen–Scott South Pole Station, which was established in 1956 and has been permanently staffed since that year.

Geography

For most purposes, the Geographic South Pole is defined as the southern point of the two points where Earth's axis of rotation intersects its surface (the other being the Geographic North Pole). However, Earth's axis of rotation is actually subject to very small "wobbles" (polar motion), so this definition is not adequate for very precise work.

The geographic coordinates of the South Pole are usually given simply as 90°S, since its longitude is geometrically undefined and irrelevant. When a longitude is desired, it may be given as 0°. At the South Pole, all directions face north. For this reason, directions at the Pole are given relative to "grid north", which points northward along the prime meridian. Along tight latitude circles, clockwise is east, and counterclockwise is west, opposite to the North Pole.

The Geographic South Pole is presently located on the continent of Antarctica, although this has not been the case for all of Earth's history because of continental drift. It sits atop a featureless, barren, windswept and icy plateau at an altitude of 2,835 m (9,301 ft) above sea level, and is located about 1,300 km (810 mi) from the nearest open sea at the Bay of Whales. The ice is estimated to be about 2,700 m (8,900 ft) thick at the Pole, so the land surface under the ice sheet is actually near sea level.

The polar ice sheet is moving at a rate of roughly 10 m (33 ft) per year in a direction between 37° and 40° west of grid north, down towards the Weddell Sea. Therefore, the position of the station and other artificial features relative to the geographic pole gradually shift over time.
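The drift quoted above can be resolved into grid components with a little trigonometry. A minimal Python sketch, assuming the figures given (about 10 m per year on a bearing 37° to 40° west of grid north, with the midpoint of that range taken as an assumption):

import math

# Resolve the assumed annual ice drift into grid-north / grid-west parts.
# Grid north points along the prime meridian, as described above.
speed_m_per_year = 10.0
bearing_deg = 38.5  # assumed midpoint of the 37-40 deg range

theta = math.radians(bearing_deg)
toward_grid_north = speed_m_per_year * math.cos(theta)  # ~7.8 m/yr
toward_grid_west = speed_m_per_year * math.sin(theta)   # ~6.2 m/yr
print(f"grid north: {toward_grid_north:.1f} m/yr, grid west: {toward_grid_west:.1f} m/yr")

Roughly 10 m of displacement per year is also why the pole marker described next has to be repositioned annually.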

The Geographic South Pole is marked by a stake in the ice alongside a small sign; these are repositioned each year in a ceremony on New Year's Day to compensate for the movement of the ice. The sign records the respective dates that Roald Amundsen and Robert F. Scott reached the Pole, followed by a short quotation from each man, and gives the elevation as "9,301 FT.". A new marker stake is designed and fabricated each year by staff at the site.

Ceremonial South Pole

The Ceremonial South Pole is an area set aside for photo opportunities at the South Pole Station. It is located some meters from the Geographic South Pole, and consists of a metallic sphere on a short barber pole, surrounded by the flags of the original Antarctic Treaty signatory states.

Historic monuments:

Amundsen's Tent

The tent was erected by the Norwegian expedition led by Roald Amundsen on its arrival on 14 December 1911. It is currently buried beneath the snow and ice in the vicinity of the Pole. It has been designated a Historic Site or Monument (HSM 80), following a proposal by Norway to the Antarctic Treaty Consultative Meeting. The precise location of the tent is unknown, but based on calculations of the rate of movement of the ice and the accumulation of snow, it is believed, as of 2010, to lie between 1.8 and 2.5 km (1.1 and 1.5 miles) from the Pole at a depth of 17 m (56 ft) below the present surface.

Argentine Flagpole

A flagpole erected at the South Geographical Pole in December 1965 by the First Argentine Overland Polar Expedition has been designated a Historic Site or Monument (HSM 1) following a proposal by Argentina to the Antarctic Treaty Consultative Meeting.

Exploration:

Pre-1900

In 1820, several expeditions claimed to have been the first to have sighted Antarctica, with the first being the Russian expedition led by Fabian Gottlieb von Bellingshausen and Mikhail Lazarev. The first landing was probably just over a year later when English-born American captain John Davis, a sealer, set foot on the ice.

The basic geography of the Antarctic coastline was not understood until the mid-to-late 19th century. American naval officer Charles Wilkes claimed (correctly) that Antarctica was a new continent, basing the claim on his exploration of 1839–40, while James Clark Ross, in his expedition of 1839–1843, hoped that he might be able to sail all the way to the South Pole; he was unsuccessful.

1900–1950

British explorer Robert Falcon Scott on the Discovery Expedition of 1901–1904 was the first to attempt to find a route from the Antarctic coastline to the South Pole. Scott, accompanied by Ernest Shackleton and Edward Wilson, set out with the aim of travelling as far south as possible, and on 31 December 1902, reached 82°16′ S. Shackleton later returned to Antarctica as leader of the British Antarctic Expedition (Nimrod Expedition) in a bid to reach the Pole. On 9 January 1909, with three companions, he reached 88°23' S – 112 miles (180 km) from the Pole – before being forced to turn back.

The first men to reach the Geographic South Pole were the Norwegian Roald Amundsen and his party on 14 December 1911. Amundsen named his camp Polheim and the entire plateau surrounding the Pole King Haakon VII Vidde in honour of King Haakon VII of Norway. Robert Falcon Scott returned to Antarctica with his second expedition, the Terra Nova Expedition, initially unaware of Amundsen's secretive expedition. Scott and four other men reached the South Pole on 17 January 1912, thirty-four days after Amundsen. On the return trip, Scott and his four companions all died of starvation and extreme cold.

In 1914 Ernest Shackleton's Imperial Trans-Antarctic Expedition set out with the goal of crossing Antarctica via the South Pole, but his ship, the Endurance, was frozen in pack ice and sank 11 months later. The overland journey was never made.

US Admiral Richard Evelyn Byrd, with the assistance of his first pilot Bernt Balchen, became the first person to fly over the South Pole on 29 November 1929.

1950–present

It was not until 31 October 1956 that humans once again set foot at the South Pole, when a party led by Admiral George J. Dufek of the US Navy landed there in an R4D-5L Skytrain (C-47 Skytrain) aircraft. The US Amundsen–Scott South Pole Station was established by air over 1956–1957 for the International Geophysical Year and has been continuously staffed since then by research and support personnel.

After Amundsen and Scott, the next people to reach the South Pole overland (albeit with some air support) were Edmund Hillary (4 January 1958) and Vivian Fuchs (19 January 1958) and their respective parties, during the Commonwealth Trans-Antarctic Expedition. There have been many subsequent expeditions to arrive at the South Pole by surface transportation, including those by Havola, Crary, and Fiennes. The first group of women to reach the pole were Pam Young, Jean Pearson, Lois Jones, Eileen McSaveney, Kay Lindsay, and Terry Tickhill in 1969. In 1978–79, Michele Eileen Raney became the first woman to winter at the South Pole.

Subsequent to the establishment, in 1987, of the logistic support base at Patriot Hills Base Camp, the South Pole became more accessible to non-government expeditions.

In the 1988–89 summer, Chilean glaciologist Alejo Contreras Steading reached the South Pole on foot; he had previously reached the Pole in 1980 by other means.

Two women, Victoria E. Murden and Shirley Metz, reached the pole by land on 17 January 1989. On 30 December of that year, Arved Fuchs and Reinhold Messner became the first to traverse Antarctica via the South Pole without animal or motorized help, using only skis and the help of the wind.

The fastest unsupported journey to the Geographic South Pole from the ocean is 24 days and one hour from Hercules Inlet, set in 2011 by Norwegian adventurer Christian Eide. He beat the previous solo record of 39 days and seven hours, set in 2009 by American Todd Carmichael, and the previous group record of 33 days and 23 hours, also set in 2009.

The fastest solo, unsupported and unassisted trek to the South Pole by a woman was made by Hannah McKeand of the UK in 2006. She completed the journey in 39 days, 9 hours and 33 minutes, starting on 19 November 2006 and finishing on 28 December 2006.

In the 2011–12 summer, separate expeditions by Norwegian Aleksander Gamme and Australians James Castrission and Justin Jones jointly claimed the first unsupported trek without dogs or kites from the Antarctic coast to the South Pole and back. The two expeditions started from Hercules Inlet a day apart, with Gamme starting first, though, as planned, they completed the last few kilometers together. Because Gamme traveled alone, he simultaneously became the first to complete the round trip solo.

On 28 December 2018, Captain Lou Rudd became the first Briton to cross the Antarctic unassisted via the South Pole, and only the second person ever to make the journey, completing it in 56 days. On 10 January 2020, Mollie Hughes became the youngest person to ski to the pole, aged 29.

Climate and day and night

During winter (May through August), the South Pole receives no sunlight at all, and is completely dark apart from moonlight. In summer (October through February), the sun is continuously above the horizon and appears to move in a counter-clockwise circle. However, it is always relatively low in the sky, reaching a maximum of approximately 23.5° around the December solstice because of the approximately 23.5° tilt of Earth's axis. Much of the sunlight that does reach the surface is reflected by the white snow. This lack of warmth from the sun, combined with the high altitude (about 2,800 metres (9,200 ft)), means that the South Pole has one of the coldest climates on Earth (though it is not quite the coldest; that record goes to the region in the vicinity of Vostok Station, also in Antarctica, which lies at a higher elevation).

The South Pole is at an altitude of 9,200 feet (2,800 m) but feels like 11,000 feet (3,400 m), because the planet's rotation flings the atmosphere toward the equator and leaves it thinner over the poles. The South Pole is colder than the North Pole primarily because of this elevation difference and because it sits in the middle of a continent; the North Pole is a few feet above sea level in the middle of an ocean.

In midsummer, as the sun reaches its maximum elevation of about 23.5 degrees, high temperatures at the South Pole in January average −25.9 °C (−15 °F). As the six-month "day" wears on and the sun gets lower, temperatures drop as well: they reach −55 °C (−67 °F) around sunset (late March) and sunrise (late September). In midwinter, the average temperature remains steady at around −60 °C (−76 °F). The highest temperature ever recorded at the Amundsen–Scott South Pole Station was −12.3 °C (9.9 °F) on Christmas Day, 2011, and the lowest was −82.8 °C (−117.0 °F) on 23 June 1982. (For comparison, the lowest temperature directly recorded anywhere on Earth was −89.2 °C (−128.6 °F) at Vostok Station on 21 July 1983, though −93.2 °C (−135.8 °F) was measured indirectly by satellite in East Antarctica between Dome A and Dome F in August 2010.) The mean annual temperature at the South Pole is −49.5 °C (−57.1 °F).
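
As a quick sanity check on the paired figures above, the standard Celsius-to-Fahrenheit conversion reproduces them. This short Python sketch is added for illustration only; the temperatures are the ones quoted in this paragraph.

def c_to_f(celsius):
    # standard conversion: F = C * 9/5 + 32
    return celsius * 9 / 5 + 32

# record and average temperatures quoted above, in degrees Celsius
for c in (-25.9, -12.3, -82.8, -89.2, -49.5):
    print(f"{c:6.1f} °C = {c_to_f(c):7.1f} °F")
# -25.9 °C -> -14.6 °F (about -15 °F), -12.3 °C -> 9.9 °F,
# -82.8 °C -> -117.0 °F, -89.2 °C -> -128.6 °F, -49.5 °C -> -57.1 °F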

The South Pole has an ice cap climate (Köppen climate classification EF). It resembles a desert, receiving very little precipitation, and air humidity is near zero. However, high winds can blow fallen snow around, and accumulation amounts to about 7 cm (2.8 in) per year. The former dome seen in pictures of the Amundsen–Scott station was gradually buried by snowstorms, and its entrance had to be regularly bulldozed clear. More recent buildings are raised on stilts so that snow does not build up against their sides.

Time

In most places on Earth, local time is determined by longitude, such that the time of day is more or less synchronised to the perceived position of the Sun in the sky (for example, at midday the Sun is roughly perceived to be at its highest). This line of reasoning fails at the South Pole, where the Sun rises and sets only once per year, and solar elevation varies with the day of the year rather than the time of day. There is no a priori reason for placing the South Pole in any particular time zone, but as a matter of practical convenience the Amundsen–Scott South Pole Station keeps New Zealand Time (UTC+12/UTC+13). This is because the US flies its resupply missions ("Operation Deep Freeze") out of McMurdo Station, which is supplied from Christchurch, New Zealand.

Flora and fauna

Due to its exceptionally harsh climate, there are no native resident plants or animals at the South Pole. Off-course south polar skuas and snow petrels are occasionally seen there.

In 2000 it was reported that microbes had been detected living in the South Pole ice. Scientists writing in the journal Gondwana Research later reported evidence of feathered dinosaurs, the feathers apparently insulating the animals from the extreme cold. The fossils had been found over 100 years earlier at Koonwarra, Australia, in sediment that had accumulated on the bed of a lake that lay near the South Pole millions of years ago.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2434 2025-01-22 00:04:01

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,153

Re: Miscellany

2333) Matter

Gist

In classical physics and general chemistry, matter is any substance that has mass and takes up space by having volume.

Matter is anything that takes up space and can be weighed. In other words, matter has volume and mass.

Summary

Matter is material substance that constitutes the observable universe and, together with energy, forms the basis of all objective phenomena.

At the most fundamental level, matter is composed of elementary particles known as quarks and leptons (the class of elementary particles that includes electrons). Quarks combine into protons and neutrons and, along with electrons, form atoms of the elements of the periodic table, such as hydrogen, oxygen, and iron. Atoms may combine further into molecules such as the water molecule, H2O. Large groups of atoms or molecules in turn form the bulk matter of everyday life.

Depending on temperature and other conditions, matter may appear in any of several states. At ordinary temperatures, for instance, gold is a solid, water is a liquid, and nitrogen is a gas, as defined by certain characteristics: solids hold their shape, liquids take on the shape of the container that holds them, and gases fill an entire container. These states can be further categorized into subgroups. Solids, for example, may be divided into those with crystalline or amorphous structures or into metallic, ionic, covalent, or molecular solids, on the basis of the kinds of bonds that hold together the constituent atoms. Less-clearly defined states of matter include plasmas, which are ionized gases at very high temperatures; foams, which combine aspects of liquids and solids; and clusters, which are assemblies of small numbers of atoms or molecules that display both atomic-level and bulklike properties.

However, all matter of any type shares the fundamental property of inertia, which—as formulated within Isaac Newton’s three laws of motion—prevents a material body from responding instantaneously to attempts to change its state of rest or motion. The mass of a body is a measure of this resistance to change; it is enormously harder to set in motion a massive ocean liner than it is to push a bicycle. Another universal property is gravitational mass, whereby every physical entity in the universe acts so as to attract every other one, as first stated by Newton and later refined into a new conceptual form by Albert Einstein.

Although basic ideas about matter trace back to Newton and even earlier to Aristotle’s natural philosophy, further understanding of matter, along with new puzzles, began emerging in the early 20th century. Einstein’s theory of special relativity (1905) shows that matter (as mass) and energy can be converted into each other according to the famous equation E = mc^2, where E is energy, m is mass, and c is the speed of light. This transformation occurs, for instance, during nuclear fission, in which the nucleus of a heavy element such as uranium splits into two fragments of smaller total mass, with the mass difference released as energy. Einstein’s theory of gravitation, also known as his theory of general relativity (1916), takes as a central postulate the experimentally observed equivalence of inertial mass and gravitational mass and shows how gravity arises from the distortions that matter introduces into the surrounding space-time continuum.
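
A rough worked example of the mass–energy relation, as a Python sketch added for illustration (the 0.1% mass-defect figure is an assumed order-of-magnitude value for uranium fission, not a number from the passage):

c = 2.998e8                 # speed of light, m/s
m_converted = 0.001 * 1.0   # kg: assume ~0.1% of 1 kg of fuel is converted to energy
E = m_converted * c ** 2    # E = mc^2, in joules
print(f"{E:.2e} J")         # ~9.0e13 J, on the order of 21 kilotons of TNT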

The concept of matter is further complicated by quantum mechanics, whose roots go back to Max Planck’s explanation in 1900 of the properties of electromagnetic radiation emitted by a hot body. In the quantum view, elementary particles behave both like tiny balls and like waves that spread out in space—a seeming paradox that has yet to be fully resolved. Additional complexity in the meaning of matter comes from astronomical observations that began in the 1930s and that show that a large fraction of the universe consists of “dark matter.” This invisible material does not interact with light and can be detected only through its gravitational effects. Its detailed nature has yet to be determined.

On the other hand, through the contemporary search for a unified field theory, which would place three of the four types of interactions between elementary particles (the strong force, the weak force, and the electromagnetic force, excluding only gravity) within a single conceptual framework, physicists may be on the verge of explaining the origin of mass. Although a fully satisfactory grand unified theory (GUT) has yet to be derived, one component, the electroweak theory of Sheldon Glashow, Abdus Salam, and Steven Weinberg (who shared the 1979 Nobel Prize for Physics for this work) predicted that an elementary subatomic particle known as the Higgs boson imparts mass to all known elementary particles. After years of experiments using the most powerful particle accelerators available, scientists finally announced in 2012 the discovery of the Higgs boson.

Details

In classical physics and general chemistry, matter is any substance that has mass and takes up space by having volume. All everyday objects that can be touched are ultimately composed of atoms, which are made up of interacting subatomic particles, and in everyday as well as scientific usage, matter generally includes atoms and anything made up of them, and any particles (or combination of particles) that act as if they have both rest mass and volume. However it does not include massless particles such as photons, or other energy phenomena or waves such as light or heat. Matter exists in various states (also known as phases). These include classical everyday phases such as solid, liquid, and gas – for example water exists as ice, liquid water, and gaseous steam – but other states are possible, including plasma, Bose–Einstein condensates, fermionic condensates, and quark–gluon plasma.

Usually atoms can be imagined as a nucleus of protons and neutrons, and a surrounding "cloud" of orbiting electrons which "take up space". However, this is only somewhat correct because subatomic particles and their properties are governed by their quantum nature, which means they do not act as everyday objects appear to act – they can act like waves as well as particles, and they do not have well-defined sizes or positions. In the Standard Model of particle physics, matter is not a fundamental concept because the elementary constituents of atoms are quantum entities which do not have an inherent "size" or "volume" in any everyday sense of the word. Due to the exclusion principle and other fundamental interactions, some "point particles" known as fermions (quarks, leptons), and many composites and atoms, are effectively forced to keep a distance from other particles under everyday conditions; this creates the property of matter which appears to us as matter taking up space.

For much of the history of the natural sciences, people have contemplated the exact nature of matter. The idea that matter was built of discrete building blocks, the so-called particulate theory of matter, appeared in both ancient Greece and ancient India. Early philosophers who proposed the particulate theory of matter include the ancient Indian philosopher Kanada (c. 6th century BCE or after), the pre-Socratic Greek philosopher Leucippus (~490 BCE), and the pre-Socratic Greek philosopher Democritus (~470–380 BCE).

Related concepts:

Comparison with mass

Matter should not be confused with mass, as the two are not the same in modern physics. Matter is a general term describing any 'physical substance'. By contrast, mass is not a substance but rather an extensive property of matter and other substances or systems; various types of mass are defined within physics – including but not limited to rest mass, inertial mass, relativistic mass, and mass–energy.

While there are different views on what should be considered matter, the mass of a substance has exact scientific definitions. Another difference is that matter has an "opposite" called antimatter, but mass has no opposite—there is no such thing as "anti-mass" or negative mass, so far as is known, although scientists do discuss the concept. Antimatter has the same (i.e. positive) mass property as its normal matter counterpart.

Different fields of science use the term matter in different, and sometimes incompatible, ways. Some of these ways are based on loose historical meanings from a time when there was no reason to distinguish mass from simply a quantity of matter. As such, there is no single universally agreed scientific meaning of the word "matter". Scientifically, the term "mass" is well-defined, but "matter" can be defined in several ways. Sometimes in the field of physics "matter" is simply equated with particles that exhibit rest mass (i.e., that cannot travel at the speed of light), such as quarks and leptons. However, in both physics and chemistry, matter exhibits both wave-like and particle-like properties, the so-called wave–particle duality.

Relation with chemical substance

A chemical substance is a unique form of matter with constant chemical composition and characteristic properties. Chemical substances may take the form of a single element or chemical compounds. If two or more chemical substances can be combined without reacting, they may form a chemical mixture. If a mixture is separated to isolate one chemical substance to a desired degree, the resulting substance is said to be chemically pure.

Chemical substances can exist in several different physical states or phases (e.g. solids, liquids, gases, or plasma) without changing their chemical composition. Substances transition between these phases of matter in response to changes in temperature or pressure. Some chemical substances can be combined or converted into new substances by means of chemical reactions. Chemicals that do not possess this ability are said to be inert.

Pure water is an example of a chemical substance, with a constant composition of two hydrogen atoms bonded to a single oxygen atom (i.e. H2O). The atomic ratio of hydrogen to oxygen is always 2:1 in every molecule of water. Pure water will tend to boil near 100 °C (212 °F), an example of one of the characteristic properties that define it. Other notable chemical substances include diamond (a form of the element carbon), table salt (NaCl; an ionic compound), and refined sugar (C12H22O11; an organic compound).
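
Since a constant composition is exactly what a molecular formula encodes, the molar mass of a pure substance follows directly from its formula. A minimal Python sketch using standard rounded atomic masses (the function and names here are illustrative, not from the source):

# approximate standard atomic masses, in g/mol
ATOMIC_MASS = {"H": 1.008, "C": 12.011, "O": 15.999}

def molar_mass(composition):
    # composition maps element symbol -> atom count, e.g. {"H": 2, "O": 1} for H2O
    return sum(ATOMIC_MASS[el] * n for el, n in composition.items())

print(molar_mass({"H": 2, "O": 1}))             # water H2O: ~18.02 g/mol
print(molar_mass({"C": 12, "H": 22, "O": 11}))  # sucrose C12H22O11: ~342.30 g/mol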

Definition:

Based on atoms

A definition of "matter" based on its physical and chemical structure is: matter is made up of atoms. Such atomic matter is also sometimes termed ordinary matter. As an example, deoxyribonucleic acid molecules (DNA) are matter under this definition because they are made of atoms. This definition can be extended to include charged atoms and molecules, so as to include plasmas (gases of ions) and electrolytes (ionic solutions), which are not obviously included in the atoms definition. Alternatively, one can adopt the protons, neutrons, and electrons definition.

Based on protons, neutrons and electrons

A definition of "matter" more fine-scale than the atoms and molecules definition is: matter is made up of what atoms and molecules are made of, meaning anything made of positively charged protons, neutral neutrons, and negatively charged electrons. This definition goes beyond atoms and molecules, however, to include substances made from these building blocks that are not simply atoms or molecules, for example electron beams in an old cathode ray tube television, or white dwarf matter—typically, carbon and oxygen nuclei in a sea of degenerate electrons. At a microscopic level, the constituent "particles" of matter such as protons, neutrons, and electrons obey the laws of quantum mechanics and exhibit wave–particle duality. At an even deeper level, protons and neutrons are made up of quarks and the force fields (gluons) that bind them together, leading to the next definition.

Based on quarks and leptons

As seen in the above discussion, many early definitions of what can be called "ordinary matter" were based upon its structure or "building blocks". On the scale of elementary particles, a definition that follows this tradition can be stated as: "ordinary matter is everything that is composed of quarks and leptons", or "ordinary matter is everything that is composed of any elementary fermions except antiquarks and antileptons". The connection between these formulations follows.

Leptons (the most famous being the electron), and quarks (of which baryons, such as protons and neutrons, are made) combine to form atoms, which in turn form molecules. Because atoms and molecules are said to be matter, it is natural to phrase the definition as: "ordinary matter is anything that is made of the same things that atoms and molecules are made of". (However, notice that one also can make from these building blocks matter that is not atoms or molecules.) Then, because electrons are leptons, and protons and neutrons are made of quarks, this definition in turn leads to the definition of matter as being "quarks and leptons", which are two of the four types of elementary fermions (the other two being antiquarks and antileptons, which can be considered antimatter as described later). Carithers and Grannis state: "Ordinary matter is composed entirely of first-generation particles, namely the [up] and [down] quarks, plus the electron and its neutrino." (Particles of higher generations quickly decay into first-generation particles, and thus are not commonly encountered.)

This definition of ordinary matter is more subtle than it first appears. All the particles that make up ordinary matter (leptons and quarks) are elementary fermions, while all the force carriers are elementary bosons. The W and Z bosons that mediate the weak force are not made of quarks or leptons, and so are not ordinary matter, even if they have mass. In other words, mass is not something that is exclusive to ordinary matter.

The quark–lepton definition of ordinary matter, however, identifies not only the elementary building blocks of matter, but also includes composites made from the constituents (atoms and molecules, for example). Such composites contain an interaction energy that holds the constituents together, and may constitute the bulk of the mass of the composite. As an example, to a great extent, the mass of an atom is simply the sum of the masses of its constituent protons, neutrons and electrons. However, digging deeper, the protons and neutrons are made up of quarks bound together by gluon fields (see dynamics of quantum chromodynamics) and these gluon fields contribute significantly to the mass of hadrons. In other words, most of what composes the "mass" of ordinary matter is due to the binding energy of quarks within protons and neutrons. For example, the sum of the mass of the three quarks in a nucleon is approximately 12.5 MeV/c^2, which is low compared to the mass of a nucleon (approximately 938 MeV/c^2). The bottom line is that most of the mass of everyday objects comes from the interaction energy of their elementary components.
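
To make that last claim concrete, here is a quick back-of-the-envelope check in Python using the approximate figures quoted above (illustrative only):

quark_mass_sum = 12.5  # MeV/c^2, rough sum of a nucleon's three valence quark masses
nucleon_mass = 938.0   # MeV/c^2, approximate mass of a proton or neutron
fraction = quark_mass_sum / nucleon_mass
print(f"valence quarks: ~{fraction:.1%} of the nucleon mass")  # ~1.3%
# the remaining ~98.7% is the binding (interaction) energy of the gluon fields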

The Standard Model groups matter particles into three generations, where each generation consists of two quarks and two leptons. The first generation is the up and down quarks, the electron and the electron neutrino; the second includes the charm and strange quarks, the muon and the muon neutrino; the third generation consists of the top and bottom quarks and the tau and tau neutrino. The most natural explanation for this would be that quarks and leptons of higher generations are excited states of the first generations. If this turns out to be the case, it would imply that quarks and leptons are composite particles, rather than elementary particles.

This quark–lepton definition of matter also leads to what can be described as "conservation of (net) matter" laws—discussed later below. Alternatively, one could return to the mass–volume–space concept of matter, leading to the next definition, in which antimatter becomes included as a subclass of matter.

Based on elementary fermions (mass, volume, and space)

A common or traditional definition of matter is "anything that has mass and volume (occupies space)". For example, a car would be said to be made of matter, as it has mass and volume (occupies space).

The observation that matter occupies space goes back to antiquity. However, an explanation for why matter occupies space is recent, and is argued to be a result of the phenomenon described in the Pauli exclusion principle, which applies to fermions. Two particular examples where the exclusion principle clearly relates matter to the occupation of space are white dwarf stars and neutron stars, discussed further below.

Thus, matter can be defined as everything composed of elementary fermions. Although we do not encounter them in everyday life, antiquarks (such as the antiproton) and antileptons (such as the positron) are the antiparticles of the quark and the lepton; they are elementary fermions as well, and have essentially the same properties as quarks and leptons, including the applicability of the Pauli exclusion principle, which can be said to prevent two particles from being in the same place at the same time (in the same state), i.e. makes each particle "take up space". This particular definition therefore includes anything made of these antimatter particles as well as of ordinary quarks and leptons, and thus also anything made of mesons, which are unstable particles made up of a quark and an antiquark.

In general relativity and cosmology

In the context of relativity, mass is not an additive quantity, in the sense that one cannot add the rest masses of particles in a system to get the total rest mass of the system. In relativity, the more general view is usually that it is not the sum of rest masses but the energy–momentum tensor that quantifies the amount of matter. This tensor gives the rest mass for the entire system. Matter, therefore, is sometimes considered as anything that contributes to the energy–momentum of a system, that is, anything that is not purely gravity. This view is commonly held in fields that deal with general relativity such as cosmology. In this view, light and other massless particles and fields are all part of matter.
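
A standard textbook example, added here for illustration, shows why rest mass is not additive: consider two photons of equal energy E travelling in opposite directions. Each photon has zero rest mass, yet the system has total energy 2E and total momentum zero, so by the invariant-mass relation (Mc^2)^2 = (sum of E)^2 − |sum of p|^2 c^2 the two-photon system has rest mass M = 2E/c^2, which is not the sum (zero) of the individual rest masses.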



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2435 2025-01-22 18:10:34

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,153

Re: Miscellany

2334) Herbarium

Gist

A herbarium (Latin: hortus siccus) is a collection of plant samples preserved for long-term study, usually in the form of dried and pressed plants mounted on paper. The dried and mounted plant samples are generally referred to as herbarium specimens.

An herbarium (plural: herbaria) is a collection of preserved plant specimens maintained for scientific purposes. Specimens are collected, mounted on rigid, 100% acid-free paper, and filed in cabinets using techniques perfected over several centuries.

Summary

A herbarium is a collection of dried plant specimens mounted on sheets of paper. The plants are usually collected in situ (i.e., where they were growing in nature), identified by experts, pressed, and then carefully mounted on archival paper in such a way that all major morphological characteristics are visible (e.g., both sides of the leaves and the floral structures). The mounted plants are labeled with their proper scientific names, the name of the collector, and, usually, information about where they were collected, how they grew, and general observations. The specimens are commonly filed in cases according to families and genera and are available for ready reference.

Herbarium collections are often housed in botanical gardens, arboretums, natural history museums, and universities. The largest herbaria, many of which are in Europe, contain several million specimens, some of which date back hundreds of years. Herbaria are the “dictionaries” of the plant kingdom and provide comparative material that is indispensable for studies in plant taxonomy and systematics. Given that nearly every plant species has a dried “type specimen” on which its description and Latin name are based, taxonomic disputes are commonly resolved by referencing type specimens in herbaria. The collections are also essential to the proper naming of unknown plants and to the identification of new species.

In addition to their taxonomic import, herbaria are commonly used in the fields of ecology, plant anatomy and morphology, conservation biology, biogeography, ethnobotany, and paleobotany. The sheets provide biogeographic information that can be used to document the historic ranges of plants, to locate rare or endangered species, or to trace the expeditions of explorers and plant collectors. Physically, the specimens are important sources of genetic material for DNA analyses and of pollen for palynological studies. Herbarium sheets are often shared among researchers worldwide, and the specimens of many herbaria have been digitized to further facilitate their use.

Details

A herbarium (plural: herbaria) is a collection of preserved plant specimens and associated data used for scientific study.

The specimens may be whole plants or plant parts; these will usually be in dried form mounted on a sheet of paper (called exsiccatum, plur. exsiccata) but, depending upon the material, may also be stored in boxes or kept in alcohol or other preservative. The specimens in a herbarium are often used as reference material in describing plant taxa; some specimens may be types, some may be specimens distributed in series called exsiccatae.

The same term is often used in mycology to describe an equivalent collection of preserved fungi, otherwise known as a fungarium. A xylarium is a herbarium specialising in specimens of wood. The term hortorium (as in the Liberty Hyde Bailey Hortorium) has occasionally been applied to a herbarium specialising in preserving material of horticultural origin.

History

The techniques for making herbaria have changed little over at least six centuries. Herbaria were an important step in the transformation of the study of plants from a branch of medicine into an independent discipline, and in making plant material available from faraway places and across long periods of time.

The oldest traditions of making herbarium collections have been traced to Italy. The Bologna physician and botanist Luca Ghini (1490–1556) reintroduced the study of actual plants, as opposed to reliance on classical texts, such as Dioscorides, which lacked sufficient accuracy for identification. He needed plant material to be available even in winter, hence his Hortus hiemalis (winter garden) or Hortus siccus (dry garden). He and his students placed freshly gathered plants between two sheets of paper and applied pressure to flatten them and absorb moisture. The dried specimen was then glued onto a page in a book and annotated. This practice was supplemented by the parallel development of the Hortus simplicium or Orto botanico (botanical garden) to supply fresh material, which Ghini established at the University of Pisa in 1544.

Although Ghini's herbarium has not survived, the oldest extant herbarium is that of Gherardo Cibo, begun around 1532; an early example from the Low Countries is the hortus siccus (1566) of Petrus Cadé. While most of the early herbaria were prepared with sheets bound into books, Carl Linnaeus came up with the idea of maintaining them on free sheets that allowed their easy re-ordering within cabinets.

Specimen preservation

Commensurate with the need to identify the specimen, it is essential to include in a herbarium sheet as much of the plant as possible (e.g., roots, flowers, stems, leaves, seed, and fruit), or at least representative parts of them in the case of large specimens. To preserve their form and colour, plants collected in the field are carefully arranged and spread flat between thin sheets, known as flimsies (equivalent to sheets of newsprint), and dried, usually in a plant press, between blotters or absorbent paper.

During the drying process the specimens are retained within their flimsies at all times to minimize damage, and only the thicker, absorbent drying sheets are replaced. For some plants it may prove helpful to allow the fresh specimen to wilt slightly before being arranged for the press. An opportunity to check, rearrange and further lay out the specimen to best reveal the required features of the plant occurs when the damp absorbent sheets are changed during the drying/pressing process.

The specimens, which are then mounted on sheets of stiff white paper, are labelled with all essential data, such as date and place found, description of the plant, altitude, and special habitat conditions. The sheet is then placed in a protective case. As a precaution against insect attack, the pressed plant is frozen or poisoned, and the case disinfected.

Certain groups of plants and fungi are soft, bulky, or otherwise not amenable to drying and mounting on sheets. For these plants, other methods of preparation and storage may be used. For example, conifer cones and palm fronds may be stored in labelled boxes. Representative flowers or fruits may be pickled in formaldehyde to preserve their three-dimensional structure. Small specimens, such as saprophytic and plant parasitic microfungi, mosses and lichens, are often air-dried and packaged in small paper envelopes.

No matter the method of preservation, detailed information is usually included on where and when the plant or fungus was collected, its habitat, its colour (since colour may fade over time), and the name of the collector.

The value of a herbarium is much enhanced by the possession of types, that is, the original specimens on which the study of a species was founded. Thus the herbarium at the British Museum, which is especially rich in the earlier collections made in the eighteenth and early nineteenth centuries, contains the types of many species founded by the earlier workers in botany. It is also rich in types of Australian plants from the collections of Sir Joseph Banks and Robert Brown, and contains in addition many valuable modern collections. The large herbaria have many exsiccata series included in their collections.

Collections management

Most herbaria utilize a standard system of organizing their specimens into herbarium cases. Specimen sheets are stacked in groups by the species to which they belong and placed into a large lightweight folder that is labelled on the bottom edge. Groups of species folders are then placed together into larger, heavier folders by genus. The genus folders are then sorted by taxonomic family according to the standard system selected for use by the herbarium and placed into pigeonholes in herbarium cabinets.

Locating a specimen filed in the herbarium requires knowing the nomenclature and classification used by the herbarium. It also requires familiarity with possible name changes that have occurred since the specimen was collected, since the specimen may be filed under an older name.

Uses

Herbarium collections can have great significance and value to science, and have many uses. Herbaria have long been essential for the study of plant taxonomy, the study of geographic distributions, and the stabilizing of nomenclature. Most of Carl Linnaeus's collections are housed at the Linnaean Herbarium, which contains over 4,000 types and now belongs to the Linnean Society in England. Modern scientists continue to develop novel, non-traditional uses for herbarium specimens that extend beyond what the original collectors could have anticipated.

Specimens housed in herbaria may be used to catalogue or identify the flora of an area. A large collection from a single area is used in writing a field guide or manual to aid in the identification of plants that grow there. With more specimens available, the author of the guide will better understand the variability of form in the plants and the natural distribution over which the plants grow.

Herbaria also preserve a historical record of change in vegetation over time. In some cases, plants become extinct in one area or may become extinct altogether. In such cases, specimens preserved in a herbarium can represent the only record of the plant's original distribution. Environmental scientists make use of such data to track changes in climate and human impact.

Herbaria have also proven very useful as a source of plant DNA for use in taxonomy and molecular systematics. Even old fungaria can serve as a source of DNA for barcoding ancient samples.

Many kinds of scientists and naturalists use herbaria to preserve voucher specimens: representative samples of plants used in a particular study, kept to demonstrate precisely the source of the data or to enable confirmation of identification at a future date.

They may also be a repository of viable seeds for rare species.

Institutional herbaria

Many universities, museums, and botanical gardens maintain herbaria. Each is assigned an alphabetic code in the Index Herbariorum, between one and eight letters long.
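
As a small aside, the quoted rule is easy to check mechanically. A minimal Python sketch, assuming only the one-to-eight-letter rule stated above (the actual Index Herbariorum registry has further conventions this does not capture):

import re

def looks_like_herbarium_code(code):
    # assumed rule from the text above: one to eight alphabetic letters
    return re.fullmatch(r"[A-Z]{1,8}", code) is not None

print(looks_like_herbarium_code("K"))    # True (Kew)
print(looks_like_herbarium_code("MPU"))  # True (Montpellier)
print(looks_like_herbarium_code("K2"))   # False (digits not allowed)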

The largest herbaria in the world, in approximate order of decreasing size, are:

* Muséum National d'Histoire Naturelle (P) (Paris, France)
* New York Botanical Garden (NY) (Bronx, New York, US)
* Komarov Botanical Institute (LE) (St. Petersburg, Russia)
* Kew Herbarium (K) (Kew, England, UK)
* Missouri Botanical Garden (MO) (St. Louis, Missouri, US)
* Conservatoire et Jardin botaniques de la Ville de Genève (G) (Geneva, Switzerland)
* Naturalis Biodiversity Center (Nationaal Herbarium Nederland) (AMD, L, U, WAG) (Leiden, Netherlands)
* The Natural History Museum (BM) (London, England, UK)
* Harvard University (HUH) (Cambridge, Massachusetts, US)
* Museum of Natural History of Vienna (W) (Vienna, Austria)
* Swedish Museum of Natural History (S) (Stockholm, Sweden)
* United States National Herbarium (Smithsonian Institution) (US) (Washington, DC, US)
* Université Montpellier (MPU) (Montpellier, France)
* Université Claude Bernard (LY) (Villeurbanne, France)
* Herbarium Universitatis Florentinae (FI) (Florence, Italy)
* National Botanic Garden of Belgium (BR) (Meise, Belgium)
* University of Helsinki (H) (Helsinki, Finland)
* Botanischer Garten und Botanisches Museum Berlin-Dahlem, Zentraleinrichtung der Freien Universität Berlin (B) (Berlin, Germany)
* The Field Museum (F) (Chicago, Illinois, US)
* University of Copenhagen (C) (Copenhagen, Denmark)
* Chinese National Herbarium, (Chinese Academy of Sciences) (PE) (Beijing, People's Republic of China)
* University and Jepson Herbaria (UC/JEPS) (Berkeley, California, US)
* Royal Botanic Garden, Edinburgh (E) (Edinburgh, Scotland, UK)
* Herbarium Bogoriense (BO) (Bogor, West Java, Indonesia)
* Acharya Jagadish Chandra Bose Indian Botanic Garden (Central National Herbarium (CAL), Howrah, India)
* Herbarium Hamburgense (HBG) (Hamburg, Germany).

Additional Information:

How are Specimens Obtained?

Specimens accessioned into the herbarium are collected by faculty, students, amateur botanists, or professionals, including agency biologists and environmental consultants. Amateur botanists have collected many of our noteworthy specimens.

Other specimens are received as gifts from other herbaria. As a common practice, collectors generally prepare several duplicates of each voucher specimen, depending on how common the plants are. The best specimen is kept at the home institution, while the duplicates are exchanged with other institutions around the world.

The exchange of specimens between herbaria is an effective means for participating institutions to amass much more diverse collections. It also provides a degree of “insurance,” diminishing the scientific impact if a catastrophe destroys any one institution.

Who Uses Herbarium Specimens?

At Tennessee Tech, the collections are used in outreach to the general public and private sectors as comparative material to aid with plant identification.

However, the most significant use is by the scientific community, particularly plant systematists.

Data from herbarium specimens may help with conservation efforts, morphological or molecular studies, and the creation of floras of a particular region or a monograph of a particular genus.

What Do Herbarium Specimens Represent?

A herbarium specimen is a voucher documenting that a species grew at a given site at a certain time. As such, herbarium holdings worldwide collectively provide the raw data underpinning our scientific knowledge of what species exist, what their diagnostic features are, what range of variation exists within each species, and where each species occurs. In this regard the herbaria of the world play an important role in humanity's scientific heritage.

Why Keep More than One Voucher Specimen?

Multiple specimens are needed to document the variability (phenotypic plasticity) that exists within a species. The variation of a species is driven by many factors such as growing conditions, time of year, and age.

For example, a single species may produce leaves of multiple sizes and shapes (heterophylly).

Among terrestrial plants, shade leaves are larger than sun leaves, and in aquatic plants, aerial leaves are larger than submerged leaves.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2436 2025-01-23 00:08:13

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,153

Re: Miscellany

2335) Material

Gist

A material is any substance that an object is made from. Glass, chalk, paper, wax, water, air, clay and plastic are all different materials. All materials are made up of matter. Almost everything we know of is made up of materials.

Wood, metal, glass, plastics, ceramics, and paper are among the most common materials. Everything we make is made up of one or more materials. Different materials have different properties. Because of these different properties, they can be used to make many kinds of objects.

Summary

The substance used to make something is called a material. A school desk, for example, may be made from wood, plastic, or metal—or a mixture of all three materials.

When an object is designed and made, it is important to choose the best material for the job. Materials have certain qualities, or properties, such as strength, color, and hardness, that have to be considered carefully. Other factors, such as cost and availability, may also be important.

Types of Materials

Materials may be natural or artificial. They can come from living or nonliving things. A material that has not yet been changed in any way is called a raw material. Some natural materials will run out one day, so they must be used carefully and replaced wherever possible. This is described as the sustainable use of natural resources.

Materials from Living Things

Wood, paper, and cardboard are all made from trees. Leather comes from cow skin, wool comes from sheep, and cotton comes from plants. Mother-of-pearl is a hard, shiny, and colorful material used for jewelry or to decorate objects. It comes from the inside of certain shells.

Materials from Nonliving Things

Metals and precious gems, such as diamonds, are taken from rocks in the ground. Chalk, clay, coal, and sand are also examples of materials from nonliving things.

Some materials are a mixture of living and nonliving things. Soil is made up of tiny organisms, dead plants, stones, tiny particles of rock, air, and water.

Properties of Materials

Materials can be described by their properties. Understanding a material’s properties is important when deciding whether the material is suitable for the use planned for it. Materials may be soft, hard, flexible (bendable), rigid (stiff), transparent (see-through), opaque (meaning light does not shine through it), rough, smooth, shiny, or dull.

For example, glass is a transparent, hard, and smooth material. It can be molded into different shapes when it is being made, it is waterproof, and it breaks easily. It is used to make windows, containers, eyeglasses, and many more objects. Plastic is another type of material. It is strong, waterproof, and durable (long-lasting). It can be transparent or opaque. It can be used to make many everyday objects, including bottles, bags, toys, and computer equipment.

Physical and Chemical Properties

All materials have physical properties. A physical property is one that a person can measure without changing the material. Color, amount, hardness, and temperature are examples of physical properties.

All materials also have chemical properties. A chemical property tells how a material will change into a different substance under special conditions. For example, certain metals turn to rust if they sit out in the rain. How easily a material rusts is a chemical property. Paper and wood burn to ashes if they touch flame. How easily a material burns is another chemical property.

Insulators and Conductors

Some materials are insulators and others are conductors. These terms describe physical properties related to how well a material allows heat or electricity to flow through it.

Heat travels from hot places to cold places, and thermal insulators prevent, or slow down, this movement of heat. Fabrics are good examples of thermal insulators because they trap warm air and stop it from moving away. Warm clothes are made from such fabric materials as polyester and wool because they trap body heat.

Thermal conductors allow heat to travel through them more quickly. Metals are thermal conductors, which is why they are used to make cooking pots.

Some materials, especially metals, allow electricity to pass through them easily. They are called electrical conductors. Materials that do not conduct electricity, such as plastic and rubber, are called electrical insulators.

Changing Materials

Materials often undergo changes. These changes can happen in nature, or they can be caused by people. A material may undergo a physical or a chemical change.

A physical change takes place when a material changes form but is still the same substance. For example, snapping a pencil into two pieces changes the pencil’s form, but it does not change the substances that make it a pencil. Heating frozen water, or ice, will make the ice change physically into liquid water. However, both the frozen water and the liquid water are still the same substance: water.

A chemical change takes place when a material changes into an entirely new substance. The smallest units of the material, called molecules, break apart and form new molecules. For example, when wood burns, its molecules change to form new molecules of smoke and ash. When iron is exposed to oxygen and moisture for a long time, the iron molecules change to form new molecules of iron oxide, or rust. A chemical change cannot be undone. It is known as an irreversible change.

Physical or chemical changes may happen when a material is heated or cooled. They may also happen when materials are mixed together or separated from one another.

Heating and Cooling

All materials are made up of matter. Matter is anything that takes up space. The three most familiar forms, or states, of matter are solid, liquid, and gas. Heating and cooling a substance may change it from one state to another. For example, at room temperature water is a liquid. If it is cooled enough it will turn to a solid, ice. This is reversible because if the ice is heated it will once again become liquid. When a material changes from one state to another, it undergoes a reversible, physical change.

When food is cooked, it usually undergoes a chemical change. When an egg is fried, its texture, shape, smell, and appearance become different than they were. The heat has caused molecules within the egg to change. It is an irreversible change.

Mixing and Separating

Mixing two or more substances together can cause them to change, and sometimes a new material is produced. Some of these changes are reversible, others are irreversible.

Whenever two or more substances are mixed together and a new substance is formed, the result is called a compound. Compounds are formed from chemical changes. For example, mixing cement powder and water causes a tough, new substance to form. Cement is made up of a number of compounds. These compounds can only be separated into their elements by chemical changes.

Whenever two or more substances are mixed together and do not form a new substance, the result is called a mixture. Mixtures are the result of physical changes. Mixing sand into a glass of water results in the sand collecting at the bottom of the glass. It is a physical change that can be reversed by straining the water from the mixture and letting the sand dry.

Sometimes mixing two substances together can form a special kind of mixture called a solution. This happens if the two substances stay evenly mixed. When ordinary sugar is stirred into a beaker of clean water it will dissolve and form a solution. The tiny molecules that make up sugar spread apart evenly throughout the water. However, the individual sugar molecules do not break apart. Mixing sugar and water causes a physical change to happen because the sugar and water molecules remain the same. Like all mixtures, solutions can be separated. For example, the water in a sugar-water solution will eventually evaporate, or change from a liquid to a gas, leaving the sugar behind.

Materials and the Environment

In recent times people have come to understand that finding, using, and changing materials can have long-term consequences for the environment. Using materials in a way that will not damage the environment is called sustainable use.

Finding, or sourcing, materials has a big impact on the environment and on people’s lives. Coltan is a material found underground in the forests of the Democratic Republic of the Congo. It is used to make cell phones and computer parts. In the early 2000s many people became concerned about the mining of coltan. The people who dug it out of the ground worked in poor conditions, but they did it because they needed the money it brought. The animals that live in the forests were affected by the mining as well. The gorillas and elephants lost much of their habitat, and many were shot by the miners for food.

Some raw materials, such as oil, coal, and gas, are present in Earth’s surface in limited amounts. Once they have been used up they cannot be replaced, so alternative sources of energy will have to be found.

Raw materials such as wood can be replaced, if people are willing to plant new trees. Cutting down forests, however, means that other plants and animals that live there lose their habitats, probably forever.

Details

A material is a substance or mixture of substances that constitutes an object. Materials can be pure or impure, living or non-living matter. Materials can be classified on the basis of their physical and chemical properties, or on their geological origin or biological function. Materials science is the study of materials, their properties and their applications.

Raw materials can be processed in different ways to influence their properties, by purification, shaping or the introduction of other materials. New materials can be produced from raw materials by synthesis.

In industry, materials are inputs to manufacturing processes to produce products or more complex materials.

Historical elements

Materials chart the history of humanity. The system of three prehistoric ages (Stone Age, Bronze Age, Iron Age) was succeeded by historical ages: the steel age in the 19th century, the polymer age in the middle of the 20th century (the plastic age), and the silicon age in the second half of the 20th century.

Classification by use

Materials can be broadly categorized in terms of their use, for example:

* Building materials are used for construction
* Building insulation materials are used to retain heat within buildings
* Refractory materials are used for high-temperature applications
* Nuclear materials are used for nuclear power and weapons
* Aerospace materials are used in aircraft and other aerospace applications
* Biomaterials are used for applications interacting with living systems

Material selection is a process to determine which material should be used for a given application.

Classification by structure

The relevant structure of materials has a different length scale depending on the material. The structure and composition of a material can be determined by microscopy or spectroscopy.

Microstructure

In engineering, materials can be categorised according to their microscopic structure:

* Plastics: a wide range of synthetic or semi-synthetic materials that use polymers as a main ingredient.
* Ceramics: non-metal, inorganic solids
* Glasses: amorphous solids
* Crystals: a solid material whose constituents (such as atoms, molecules, or ions) are arranged in a highly ordered microscopic structure, forming a crystal lattice that extends in all directions.
* Metals: pure or combined chemical elements with specific chemical bonding behavior
* Alloys: a mixture of chemical elements of which at least one is a metal.
* Polymers: materials based on long carbon or silicon chains
* Hybrids: Combinations of multiple materials, for example composites.

Larger-scale structure

A metamaterial is any material engineered to have a property that is not found in naturally occurring materials, usually by combining several materials to form a composite and/or tuning the shape, geometry, size, orientation and arrangement to achieve the desired property.

In foams and textiles, the chemical structure is less relevant to immediately observable properties than larger-scale material features: the holes in foams, and the weave in textiles.

Classification by properties

Materials can be compared and classified by their large-scale physical properties.

Mechanical properties

Mechanical properties determine how a material responds to applied forces.

Examples include:

* Stiffness
* Strength
* Toughness
* Hardness

Thermal properties

Materials may degrade or undergo changes of properties at different temperatures. Thermal properties also include the material's thermal conductivity and heat capacity, relating to the transfer and storage of thermal energy by the material.

Other properties

Materials can be compared and categorized by any quantitative measure of their behavior under various conditions. Notable additional properties include the optical, electrical, and magnetic behavior of materials.

Additional Information

Everything we make is made up of one or more materials. Different materials have different properties. Because of these different properties, they can be used to make many kinds of objects. Materials can be soft or hard. They can be flexible or stiff. They can be delicate or very strong. Let’s take a look at some examples of different materials.

Wood

Wood can be classified as either hardwood or softwood.

Hardwood comes from deciduous trees. These are trees that lose their leaves in the fall. Hardwood is usually used to make furniture and in construction projects that need to last for a long time. Examples of hardwoods are oak, maple, and walnut.

Softwood comes from coniferous trees. Coniferous, or evergreen trees, keep their needles all year round. Most timber, or wood that is prepared for construction, is made from softwood trees. Softwood is usually used in parts of buildings, like windows and doors. It is also used in some kinds of furniture. Examples of softwoods are pine, fir, and spruce.

The terms “hardwood” and “softwood” do not refer to how hard the wood in a tree is. These terms refer to how the tree reproduces. Coniferous (softwood) trees reproduce through seeds in cones. Deciduous (hardwood) trees reproduce through seeds that come from a fruit or flower.

Different types of trees produce wood with different properties. But all types of wood have some physical characteristics in common. First, wood is strong. Its strength depends on its grain. Grain is the natural direction of growth of the fibres in the wood. Wood is very resistant to compression when force is applied in the direction of the grain. But it can break easily if force is applied against the grain.

Wood also has an interesting relationship with water. It is a very buoyant material. This means it can float. This is why wood is often used to make ships and boats. But wood is also hygroscopic. This means that it can absorb water. Some types of wood can absorb and hold a lot of water. It is important to consider this characteristic when choosing wood for a project. If a wood contains too much water it may eventually rot. When wood rots, it breaks down.

Balsa wood is one of the lightest and least dense woods, but it’s technically considered a hardwood because balsa trees are flowering trees whose seeds come from a fruit!

Metals

Metals are some of the most important materials used in manufacturing and building. Some examples of metals are iron, aluminum, copper, zinc, tin, and lead. Many metals we use today are alloys. Alloys are made by combining two or more metals, or a metal with a nonmetal material. Alloys are made to give the metal new characteristics, such as increased hardness or strength. For example, steel is an alloy of iron that contains a small amount of carbon.

All metals share three main characteristics:

* Lustre: they are shiny when cut or scratched
* Malleability: although they are strong, they can be bent or shaped with the right amount of heat and force
* Conductivity: they conduct heat and electricity

But individual metals have different properties. Metals and metal alloys are usually chosen for objects based on their properties. Many types of metals are used in household objects, from copper to steel, even gold!

Many metals are likely to corrode. Corrosion is a chemical reaction in which a metal reacts with oxygen. Sometimes this is helpful, because the resulting oxide layer can protect the metal underneath. But when iron or steel reacts with oxygen, rust is created. Corrosion can eventually make metal break down entirely into rust.
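
As a worked example in the spirit of this passage, a simplified, balanced equation for the rusting of iron (real rust is a hydrated iron(III) oxide, so water is omitted here) is:

4Fe + 3O2 → 2Fe2O3

Each side has the same number of iron and oxygen atoms, which is what makes the equation balanced.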

Ceramics

Ceramics are often defined by what they’re not. They are nonmetallic and inorganic solids. This means they aren’t made of metal, wood, plastics, or rubber. They are made by baking clay, sand, and other natural materials at very high temperatures.

A few examples of ceramics are bricks, tiles, and concrete. Ceramic materials are used to make everything from the homes we live in to the pots we cook food in to dental implants for our teeth. Ceramics were even used to make the insulating tiles on space shuttles! Glass (see below) is also a ceramic. So, you are surrounded by ceramics and you may not know it!

The main properties of ceramics are:

* Usually hard
* Heat resistant: they have a high melting point
* Resistant to chemical corrosion
* Poor conductors of heat and electricity: this means they make good insulators

Some types of ceramics, like glass and porcelain, can also be brittle (they can be broken easily). Nonetheless, they can last a very long time.

Glass

Glass is one of the most versatile materials created by humans. Glass is made mostly of sand, which is made up of silicon dioxide. When sand is heated to a very high temperature (about 1700°C) it becomes a liquid. When it cools again, it undergoes a complete transformation and becomes a clear solid.

The glass we are most familiar with today is called soda-lime-silica glass. It is made mostly of sand, with some other ingredients as well. Soda ash, which is made up of sodium carbonate, reduces the sand’s melting point. This means it doesn’t have to be heated to as high a temperature before it turns into a liquid. But soda ash also makes the glass water-soluble. This means it can dissolve in water! Limestone, or calcium carbonate, is added to stop this from happening.
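
For reference, these ingredients can be written as chemical formulae: sand is mostly silicon dioxide (SiO2), soda ash is sodium carbonate (Na2CO3), and limestone is calcium carbonate (CaCO3).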

When the liquid glass mixture is cooled a bit, it can be used in many different ways. It can be poured into a mould to create things like bottles or lightbulbs. It can also be “floated” to create perfectly flat sheets that will become windows or mirrors. The mixture is then allowed to cool and become solid.

The main properties of glass are:

* Transparency: you can see through it
* Heat resistance: it doesn’t melt easily
* Hardness: it resists scratching, although thin pieces can still shatter

You may not think glass is particularly strong. But the objects you’re familiar with, like lightbulbs and water glasses, are made of very thin pieces of glass. If you had a very thick piece of glass (think of a brick made of glass) it would be very strong!

When people make glass objects, they can add different ingredients to give the glass new properties. For example, oven-proof glass like Pyrex contains boron oxide. Glass used to make decorative crystal objects, like vases and figurines, contains lead oxide. This allows it to be cut more easily. Stained or coloured glass has different colours because metals are added when it’s in its liquid form!

Plastics

Plastics come in many different forms. They are used to make a wide variety of products. Plastic molecules are made up of long chains. These molecules are called polymers.

Did you know?

The word “plastic” comes from the Greek “plastikos” which means “able to be shaped”.

Most plastics are either thermoplastics or thermoset plastics. Thermoplastics are heated and then moulded into shape. They can be reheated later and reshaped. Most plastic bottles are thermoplastic. Thermoset plastics can only be heated and shaped once. Thermoset plastics are used to make things like electrical insulation, dinner plates and automobile parts.

Plastics have many useful properties. They are:

* Usually easy and low-cost to manufacture
* Strong and durable
* Electrical insulators and resistant to water
* Resistant to many types of chemical corrosion

But this durability and resistance to damage can be a problem as well. Plastics can take a very long time to break down. Plastic bottles take about 450 years to break down. Plastic shopping bags can take as long as 10,000 years! This is why it is important to recycle plastics. Thermoplastics are recyclable, but thermoset plastics are not. When possible, it’s better to choose thermoplastics over thermoset plastics so the plastic can be given a new life after use.


Textiles

The word textiles originally referred to woven fabrics. Now it usually refers to all fibres, yarns, and fabrics. Textiles can be made from natural materials like wool and cotton, or from synthetic materials like polyester. Textiles are used to make clothing, carpet, and many other products.

Did you know?

The earliest-produced textiles have been traced back to about 5000 BCE. Some of the oldest forms of textile production include net-making and basket-weaving.

Textiles are made up of many tiny parts called fibres. Textile fibres must have specific properties in order to be spun into yarn or made directly into fabrics. They must be strong, flexible, elastic, and durable. Fibres with these properties can be made into yarns and fabrics with similar properties.

But not all fibres have the same properties. Some are warmer, some are more durable, some are softer or more comfortable. Sometimes it takes a mix of fibres to achieve the desired properties of a finished textile product!

Leather

Traditional leather is made from animal skins. Synthetic, or faux, leather is manufactured. Leather is used to make everything from car seats to furniture to footballs to handbags. It is durable and has a natural finish. These properties are difficult to recreate with synthetic materials.

Faux leather is usually made of a mix of natural and synthetic fibres that are coated with a plastic polymer. This material mimics the properties of genuine leather. Like genuine leather, faux leather is soft to the touch and water-resistant. Although it is not as durable as traditional leather, faux leather is difficult to cut or tear. As a result, it’s often used to make furniture.

There are ethical concerns about traditional leather because it is an animal product. But because traditional leathers are made of a natural material, they can biodegrade, or break down naturally. Faux leather behaves more like plastic and takes a very long time to break down.

Paper and Boxboard

Paper is an important material that many people use every day. From reading newspapers to drawing pictures to wrapping presents, you probably don’t realize how often you use paper. Paper can also be used to make other materials, like cardboard.

Paper is made from a material called pulp. Pulp is made from wood fibres mixed with water. These fibres usually come from softwood trees like spruce and pine. To make paper, trees are cut up and the bark is removed. Then the wood is ground into tiny pieces and mixed with water to create pulp. The pulp is chemically treated, then pressed flat and dried.


Cardboard is made up of several layers of paper combined. Corrugated cardboard is made up of two sheets of flat paper that have a third sheet of paper corrugated or bent to form a wave shape between them. The final product is stiff, strong, and very lightweight. This cardboard can be folded up and glued to create boxes or other packing materials.

Rubber

There are two main types of rubber: natural rubber and synthetic rubber. Natural rubber is made from latex, which is produced by plants. Synthetic rubber is made using a mix of chemicals. Synthetic rubber has many of the same characteristics as natural rubber. It can be used in tires, hoses, belts, flooring and more.

Did you know?

If you’ve ever picked a dandelion, you may have seen the milky white fluid on the inside of the stem. This is latex!

Almost 99% of the world’s natural rubber is made from the latex of a plant called Hevea brasiliensis. This plant is commonly known as the rubber tree. Latex undergoes a number of different processes to be made into the versatile, springy material we think of as “rubber”. First, it is “chewed up”, then chemicals are added to it. Next, it is squeezed and stretched, and then cooked at about 140°C so that it holds its shape. The final product is strong, stretchy, elastic, durable, and waterproof. It can be used to make products ranging from pencil erasers to running shoes to wetsuits!




#2437 2025-01-24 00:10:36

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,153

Re: Miscellany

2336) Radiographer

Gist

Radiographers are allied health professionals who take x-rays and other medical images including MRI (magnetic resonance imaging) scans and CT (computed tomography), to assist medical doctors in diagnosing, monitoring and treating illnesses and injuries. They are also known as medical imaging technologists.

Summary

Radiographers, also called radiologic technologists, are health care professionals who operate special scanning machines that make images for medical purposes. They use equipment like X-ray machines, CT scanners, and advanced technologies such as digital fluoroscopy.

What Does a Radiographer Do?

Radiographers often pass their findings on to radiologists, who interpret the images to help make a diagnosis. This is called diagnostic radiology. Interventional radiology uses imaging during a medical procedure to guide and assist in treatment.

Besides a medical center’s radiology department, radiologic technologists can work in areas like surgery, the emergency room, cardiac care, intensive care, and patient rooms.

Radiographers may use tools and procedures such as:

* CT scanners
* Fluoroscopy
* MRI scanners
* PET scanners
* Radiotherapy
* Ultrasound
* X-rays

Radiographers' tasks include:

* Helping oncologists with radiation treatment for cancer patients
* Preparing patients for radiologic procedures
* Maintaining imaging equipment
* Ensuring that safety protocols are being followed
* Helping surgeons, such as with imaging during complicated procedures

Education and Training

An associate degree in radiography or a bachelor's degree in medical radiography is required to become a medical radiographer. Associate programs typically take 2 years, while bachelor's programs take about 4 years.

They teach things like:

* Basic radiographic imaging
* Anatomy and physiology
* Radiographic safety
* Medical ethics
* Lab skills
* Communication skills
* Patient positioning

Programs must be accredited by the Joint Review Committee on Education in Radiologic Technology (JRCERT) to prepare radiography students for the American Registry of Radiologic Technologists (ARRT) certification exam.

Diagnostic imaging often focuses on specific areas of treatment, such as mammography, computed tomography (CT), fluoroscopy, nuclear medicine, and bone densitometry. Radiographers can also specialize in:

* Angiography (imaging of blood vessels and the heart)
* Mobile radiography (using special machines for patients who are too sick to travel)
* Trauma radiography (usually in an ER)
* Working in operating rooms (to assist surgeons with special X-ray equipment)

Reasons to See a Radiographer

You may need to see a radiographer for medical imaging if you:

* Have a broken bone
* Have a blocked artery or other vessel
* Have a foreign object in your body
* May have a tumor or cancer
* Are pregnant
* Have a torn muscle

You’ll most likely see a radiologist — and a radiologic technologist — after a recommendation from another doctor, which may be your primary care doctor or a specialist like an orthopedist.

What to Expect with the Radiographer

You’ll often be sent to the lab right from your primary care doctor's office if they think you need imaging. There, you'll talk with a radiographer about the procedure and what to expect.

Your appointment may take just minutes, though more complex procedures can take 2 hours or more. You might have to avoid certain foods, medications, and drinks beforehand.

Always tell the radiology office if you’re pregnant, as X-rays and CT scans use low doses of radiation. But keep in mind that the risk of not having a test that you need could be higher than the risk from the radiation. Talk to your doctor about the risks and your concerns.

Details

Radiographers, also known as radiologic technologists, diagnostic radiographers and medical radiation technologists are healthcare professionals who specialise in the imaging of human anatomy for the diagnosis and treatment of pathology. Radiographers are infrequently, and almost always erroneously, known as x-ray technicians. In countries that use the title radiologic technologist they are often informally referred to as techs in the clinical environment; this phrase has emerged in popular culture such as television programmes. The term radiographer can also refer to a therapeutic radiographer, also known as a radiation therapist.

Radiographers are allied health professionals who work in both public healthcare and private healthcare and can be physically located in any setting where appropriate diagnostic equipment is located, most frequently in hospitals. The practice varies from country to country and can even vary between hospitals in the same country.

For the first three decades of medical imaging's existence (1897 to the 1930s), there was no standardized differentiation between the roles that we now differentiate as radiologic technologist (a technician in an allied health profession who obtains the images) versus radiologist (a physician who interprets them). By the 1930s and 1940s, as it became increasingly apparent that proper interpretation of the images required not only a physician but also one who was specifically trained and experienced in doing so, the differentiation between the roles was formalized. Simultaneously, it also became increasingly true that just as a radiologic technologist cannot do the radiologist's job, the radiologist also cannot do the radiologic technologist's job, as it requires some knowledge, skills, experience, and certifications that are specific to it.

Radiography's origins and fluoroscopy's origins can both be traced to 8 November 1895, when German physics professor Wilhelm Röntgen discovered the X-ray and noted that, while it could pass through human tissue, it could not pass through bone or metal. Röntgen referred to the radiation as "X", to indicate that it was an unknown type of radiation. He received the first Nobel Prize in Physics for his discovery.

There are conflicting accounts of his discovery because Röntgen had his lab notes burned after his death, but this is a likely reconstruction by his biographers: Röntgen was investigating cathode rays using a fluorescent screen painted with barium platinocyanide and a Crookes tube which he had wrapped in black cardboard to shield its fluorescent glow. He noticed a faint green glow from the screen, about 1 metre away. Röntgen realized some invisible rays coming from the tube were passing through the cardboard to make the screen glow: the rays were passing through an opaque object to affect the screen behind it.

The first radiograph

Röntgen discovered the medical use of X-rays when he made a picture of his wife's hand on a photographic plate exposed by X-rays. The photograph of his wife's hand was the first ever photograph of a human body part made using X-rays. When she saw the picture, she said, "I have seen my death."

The first use of X-rays under clinical conditions was by John Hall-Edwards in Birmingham, England on 11 January 1896, when he radiographed a needle stuck in the hand of an associate. On 14 February 1896, Hall-Edwards also became the first to use X-rays in a surgical operation.

The United States saw its first medical X-ray obtained using a discharge tube of Ivan Pulyui's design. In January 1896, on reading of Röntgen's discovery, Frank Austin of Dartmouth College tested all of the discharge tubes in the physics laboratory and found that only the Pulyui tube produced X-rays. This was a result of Pulyui's inclusion of an oblique "target" of mica, used for holding samples of fluorescent material, within the tube. On 3 February 1896 Gilman Frost, professor of medicine at the college, and his brother Edwin Frost, professor of physics, exposed the wrist of Eddie McCarthy, whom Gilman had treated some weeks earlier for a fracture, to the X-rays and collected the resulting image of the broken bone on gelatin photographic plates obtained from Howard Langill, a local photographer also interested in Röntgen's work.

X-rays were put to diagnostic use very early; for example, Alan Archibald Campbell-Swinton opened a radiographic laboratory in the United Kingdom in 1896, before the dangers of ionizing radiation were discovered. Indeed, Marie Curie pushed for radiography to be used to treat wounded soldiers in World War I. Initially, many kinds of staff conducted radiography in hospitals, including physicists, photographers, physicians, nurses, and engineers. The medical speciality of radiology grew up over many years around the new technology. When new diagnostic tests were developed, it was natural for the radiographers to be trained in and to adopt this new technology. Radiographers now perform fluoroscopy, computed tomography, mammography, ultrasound, nuclear medicine and magnetic resonance imaging as well. Although a nonspecialist dictionary might define radiography quite narrowly as "taking X-ray images", this has long been only part of the work of "X-ray Departments", Radiographers, and Radiologists. Initially, radiographs were known as roentgenograms, while Skiagrapher (from the Ancient Greek words for "shadow" and "writer") was used until about 1918 to mean Radiographer.

The history of magnetic resonance imaging includes many researchers who discovered NMR and described its underlying physics, but the technique is generally regarded as having been invented by Paul C. Lauterbur in September 1971; he published the theory behind it in March 1973. The factors leading to image contrast (differences in tissue relaxation time values) had been described nearly 20 years earlier by Erik Odeblad (doctor and scientist) and Gunnar Lindström.

In 1950, spin echoes and free induction decay were first detected by Erwin Hahn and in 1952, Herman Carr produced a one-dimensional NMR spectrum as reported in his Harvard PhD thesis. In the Soviet Union, Vladislav Ivanov filed (in 1960) a document with the USSR State Committee for Inventions and Discovery at Leningrad for a Magnetic Resonance Imaging device, although this was not approved until the 1970s.

By 1959, Jay Singer had studied blood flow by NMR relaxation time measurements of blood in living humans. Such measurements were not introduced into common medical practice until the mid-1980s, although a patent for a whole-body NMR machine to measure blood flow in the human body was already filed by Alexander Ganssen in early 1967.

In the 1960s and 1970s the results of a very large amount of work on relaxation, diffusion, and chemical exchange of water in cells and tissues of various types appeared in the scientific literature. In 1967, Ligon reported the measurement of NMR relaxation of water in the arms of living human subjects. In 1968, Jackson and Langham published the first NMR signals from a living animal.

Role in healthcare

A radiographer uses their expertise and knowledge of patient care, physics, human anatomy, physiology, pathology and radiology to assess patients, develop optimum radiological techniques and evaluate the resulting radiographic media.

This branch of healthcare is extremely varied, especially between different countries, and as a result radiographers in one country often have a completely different role to that of radiographers in another. However, the base responsibilities of the radiographer are summarised below:

* Autonomy as a professional
* Accountability as a professional
* Contribute to and participate in continuing professional development
* Enforcement of radiation protection (There is a duty of care to patients, colleagues and any lay persons who may be irradiated.)
* Justification of radiographic examinations
* Patient care
* Production of diagnostic media
* Safe, efficient and correct use of diagnostic equipment
* Supervise students and assistants

On a basic level, radiographers do not generally interpret diagnostic media; rather, they evaluate media and make a decision about its diagnostic effectiveness. In order to make this evaluation radiographers must have a comprehensive but not necessarily exhaustive knowledge of pathology and radiographic appearances; it is for this reason that radiographers often do not interpret or diagnose without further training. Nevertheless, it is becoming more common for radiographers to have an extended and expanded clinical role, which includes a role in initial radiological diagnosis, diagnostic consultation and decisions about which subsequent investigations to conduct. It is not uncommon for radiographers to now autonomously conduct procedures which would previously have been undertaken by a cardiologist, urologist, radiologist or oncologist.

Contrary to what could be inferred, radiographers conduct and contribute to investigations which are not necessarily radiological in nature, e.g. sonography and magnetic resonance imaging.

Radiographers often have opportunities to enter military service due to their role in healthcare. As with most other occupations in the medical field many radiographers have rotating shifts that include night duties.

Career pathways

Radiography is a deeply diverse profession with many different modalities and specialities. It is not uncommon for radiographers to be specialised in more than one modality and even to have expertise in interventional procedures themselves; however, this depends on the country in which they operate. As a result, the typical career pathway for a radiographer is hard to summarise. Upon qualifying, it is common for radiographers to focus solely on plain film radiography before specialising in any one chosen modality. After a number of years in the profession, non-imaging-based roles often become open and radiographers may then move into these positions.

Non-imaging modalities

Non-imaging modalities vary, and are often undertaken in addition to imaging modalities. They commonly include:

* Academia – Education role.
* Clinical Management – Clinical managerial role which can be varied; may include managing audits, rotas, department budgets, etc.
* Clinical Research – Research role.
* Medical Physics – Multidisciplinary role ensuring the correct calibration of and most efficient use of diagnostic equipment.
* PACS Management – Managerial role concerned with maintaining and supervising appropriate and correct use of the RIS and PACS systems.
* Radiation Protection – A managerial role concerned with monitoring the level of ionising radiation absorbed by anyone who comes into contact with ionising radiation at their site.
* Reporting Radiography – A clinical role involved with interpretation of radiographs and various other radiological media for diagnosis.

Additional Information

* Radiographers are allied health professionals who are trained to take medical images. Radiologists are specialist doctors.
* Radiologists are medical doctors who assist other doctors by making a diagnosis and by providing treatment using medical imaging.
* Radiologists and radiographers often work together, but they are not the same.
* It is important to get a referral from your doctor or specialist so your doctor will be informed about the results of your visit.

Radiographers and radiologists: what's the difference?

Many people are confused by the differences between a radiographer and a radiologist. Radiologists and radiographers often work together, but they are not the same.

Radiographers are allied health professionals who take x-rays and other medical images including MRI (magnetic resonance imaging) scans and CT (computed tomography), to assist medical doctors in diagnosing, monitoring and treating illnesses and injuries. They are also known as medical imaging technologists.

In order to practise in Australia, a radiographer must complete a university degree and undergo supervised training in an approved hospital radiology department or private clinic.

Radiologists are specialist medical doctors trained to interpret x-rays and other medical imaging tests. They diagnose and carry out treatments using:

* ultrasound
* x-rays
* CT scans
* MRIs (Magnetic Resonance Imaging)
* PET scans (Positron Emission Tomography)

and other medical imaging technology.

A radiologist interprets the findings of your imaging to assist in making a diagnosis. After qualifying as doctors and working in hospital, radiologists complete a specialist medical training program run by The Royal Australian and New Zealand College of Radiologists (RANZCR). A radiologist may do extra training to become an interventional radiologist. This involves performing image-guided procedures inside a person's body, such as treating cancerous tumours or inserting stents to open arteries.

When do I see a radiographer or radiologist?

Your doctor can refer you to radiology for many reasons. Some common reasons include having scans:

* to diagnose and identify metabolic changes within the body
* to provide cross-sectional images of the body
* to identify any trauma (injury) to bones
* during pregnancy
* for radiation therapy for cancers

There are questions you can ask your doctor to prepare for your appointment with a radiographer.

Do I need a referral for a radiographer or radiologist?

It is important to get a referral from your doctor or specialist. That way, your doctor will be informed about the results of your visit and any tests performed. Also, if you don't have a referral, neither Medicare nor private health insurance will contribute to the cost of your care.

How much do radiographers and radiologists cost?

Diagnostic imaging providers set their own fees. This means that the amount you pay varies. This depends on:

* where you go
* the type of imaging
* who is referring you
* the condition being scanned or treated

Some scans may be covered by Medicare; however, you may have to pay for some tests. Ask what your out-of-pocket costs will be before visiting a radiologist or before you have the scan.

Where do radiographers and radiologists work?

Radiographers and radiologists work closely together and work as part of a team with other healthcare professionals including medical and nursing staff. They work in major public and private hospitals, medical centres and specialist clinics, such as cancer clinics.




#2438 2025-01-25 00:03:10

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,153

Re: Miscellany

2337) Avalanche

Gist

An avalanche is a mass of snow and ice falling suddenly down a mountain slope and often taking with it earth, rocks and rubble of every description.

An avalanche is a rapid flow of snow down a hill or mountainside. Although avalanches can occur on any slope given the right conditions, certain times of the year and certain locations are more dangerous than others. Winter, particularly from December to April in the Northern Hemisphere, is when most avalanches tend to happen.

Summary

An avalanche is a mass of snow sliding, flowing, or tumbling down a slope.

Avalanches can quickly reach speeds up to 100 mph. They vary in destructive power from harmless to large enough to destroy mature forests or flatten villages. On average, avalanches kill about 40 people per year in North America. Because avalanches come in wide varieties, we use "Avalanche Types" and "Avalanche Problems" to classify and describe them.

Avalanche Types are used to classify avalanches based on physical, objective, and easily observable characteristics. Avalanche Problems describe the avalanche hazard situation and are composed of the type, location, likelihood, and destructive size of expected avalanches.

Confused about the difference between avalanche type and avalanche problem? To use a weather analogy, “rain” is a precipitation type, whereas “isolated rain showers” or “prolonged heavy downpours” are precipitation problems. Even though both situations would result in the same precipitation type, you would plan your day outside differently depending on the expected precipitation problem.

Details

An avalanche is a rapid flow of snow down a slope, such as a hill or mountain. Avalanches can be triggered spontaneously, by factors such as increased precipitation or snowpack weakening, or by external means such as humans, other animals, and earthquakes. Primarily composed of flowing snow and air, large avalanches have the capability to capture and move ice, rocks, and trees.

Avalanches occur in two general forms, or combinations thereof: slab avalanches made of tightly packed snow, triggered by a collapse of an underlying weak snow layer, and loose snow avalanches made of looser snow. After being set off, avalanches usually accelerate rapidly and grow in mass and volume as they capture more snow. If an avalanche moves fast enough, some of the snow may mix with the air, forming a powder snow avalanche.

Though they appear to share similarities, avalanches are distinct from slush flows, mudslides, rock slides, and serac collapses. They are also different from large-scale movements of ice. Avalanches can happen in any mountain range that has an enduring snowpack. They are most frequent in winter or spring, but may occur at any time of the year. In mountainous areas, avalanches are among the most serious natural hazards to life and property, so great efforts are made in avalanche control. There are many classification systems for the different forms of avalanches. Avalanches can be described by their size, destructive potential, initiation mechanism, composition, and dynamics.

Formation

Most avalanches occur spontaneously during storms under increased load due to snowfall and/or erosion. Metamorphic changes in the snowpack, such as melting due to solar radiation, are the second-largest cause of natural avalanches. Other natural causes include rain, earthquakes, rockfall, and icefall. Artificial triggers of avalanches include skiers, snowmobiles, and controlled explosive work. Contrary to popular belief, avalanches are not triggered by loud sound; the pressure from sound is orders of magnitude too small to trigger an avalanche.

Avalanche initiation can start at a point with only a small amount of snow moving initially; this is typical of wet snow avalanches or avalanches in dry unconsolidated snow. However, if the snow has sintered into a stiff slab overlying a weak layer, then fractures can propagate very rapidly, so that a large volume of snow, possibly thousands of cubic metres, can start moving almost simultaneously.

A snowpack will fail when the load exceeds the strength. The load is straightforward; it is the weight of the snow. However, the strength of the snowpack is much more difficult to determine and is extremely heterogeneous. It varies in detail with the properties of the snow grains (size, density, morphology, temperature, water content) and with the properties of the bonds between the grains. These properties may all metamorphose in time according to the local humidity, water vapour flux, temperature and heat flux. The top of the snowpack is also extensively influenced by incoming radiation and the local air flow. One of the aims of avalanche research is to develop and validate computer models that can describe the evolution of the seasonal snowpack over time. A complicating factor is the complex interaction of terrain and weather, which causes significant spatial and temporal variability of the depths, crystal forms, and layering of the seasonal snowpack.
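
As a minimal sketch of that failure criterion (standard slope mechanics; the numbers are assumed for illustration, not measured values):

# The slope-parallel shear stress a slab exerts on a buried weak layer:
#   tau = density * g * slab thickness * sin(slope angle)
# The snowpack fails where tau exceeds the weak layer's shear strength.
import math

rho_kg_m3 = 200.0            # slab density (assumed)
g = 9.81                     # gravity, m/s^2
slab_m = 0.8                 # slab thickness (assumed)
slope = math.radians(38.0)   # slope angle

tau_pa = rho_kg_m3 * g * slab_m * math.sin(slope)
strength_pa = 1200.0         # assumed weak-layer shear strength
print(f"load {tau_pa:.0f} Pa -> {'fails' if tau_pa > strength_pa else 'holds'}")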

Slab avalanches

Slab avalanches are formed frequently in snow that has been deposited, or redeposited by wind. They have the characteristic appearance of a block (slab) of snow cut out from its surroundings by fractures. Elements of slab avalanches include a crown fracture at the top of the start zone, flank fractures on the sides of the start zones, and a fracture at the bottom called the stauchwall. The crown and flank fractures are vertical walls in the snow delineating the snow that was entrained in the avalanche from the snow that remained on the slope. Slabs can vary in thickness from a few centimetres to three metres. Slab avalanches account for around 90% of avalanche-related fatalities.

Powder snow avalanches

The largest avalanches form turbulent suspension currents known as powder snow avalanches or mixed avalanches, a kind of gravity current. These consist of a powder cloud, which overlies a dense avalanche. They can form from any type of snow or initiation mechanism, but usually occur with fresh dry powder. They can exceed speeds of 300 km/h (190 mph), and masses of 1,000,000 tons; their flows can travel long distances along flat valley bottoms and even uphill for short distances.

Wet snow avalanches

In contrast to powder snow avalanches, wet snow avalanches are a low-velocity suspension of snow and water, with the flow confined to the track surface. The low speed of travel is due to the friction between the sliding surface of the track and the water-saturated flow. Despite the low speed of travel (≈10–40 km/h), wet snow avalanches are capable of generating powerful destructive forces, due to the large mass and density. The body of the flow of a wet snow avalanche can plough through soft snow, and can scour boulders, earth, trees, and other vegetation, leaving exposed and often scored ground in the avalanche track. Wet snow avalanches can be initiated from either loose snow releases or slab releases, and only occur in snowpacks that are water saturated and isothermally equilibrated to the melting point of water. The isothermal characteristic of wet snow avalanches has led to the secondary term of isothermal slides in the literature. At temperate latitudes wet snow avalanches are frequently associated with climatic avalanche cycles at the end of the winter season, when there is significant daytime warming.

Ice avalanche

An ice avalanche occurs when a large piece of ice, such as from a serac or calving glacier, falls onto ice (such as the Khumbu Icefall), triggering a movement of broken ice chunks. The resulting movement is more analogous to a rockfall or a landslide than a snow avalanche. They are typically very difficult to predict and almost impossible to mitigate.

Avalanche pathway

As an avalanche moves down a slope, it follows a pathway that depends on the slope's steepness and the volume of snow and ice involved in the mass movement. The origin of an avalanche is called the Starting Point and typically occurs on a 30–45 degree slope. The body of the pathway is called the Track and usually occurs on a 20–30 degree slope. When the avalanche loses its momentum and eventually stops, it has reached the Runout Zone, usually where the slope is less than 20 degrees. These angles are not fixed, because each avalanche is unique: its behaviour depends on the stability of the snowpack it came from as well as the environmental or human influences that triggered it.
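
A minimal sketch (illustrative only, not an operational tool) that encodes the typical angles above:

# Classify where a point on an avalanche path lies, by slope angle.
def avalanche_zone(slope_deg: float) -> str:
    if slope_deg < 20:
        return "runout zone (avalanche slows and stops)"
    if slope_deg < 30:
        return "track"
    if slope_deg <= 45:
        return "starting point (typical release angles)"
    return "very steep terrain (snow tends to sluff off before building up)"

for angle in (12, 25, 38, 55):
    print(angle, "->", avalanche_zone(angle))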

Injuries and deaths

People caught in avalanches can die from suffocation, trauma, or hypothermia. From 1950–1951 to 2020–2021, 1,169 people died in avalanches in the United States. For the 11-year period ending April 2006, 445 people died in avalanches throughout North America. On average, 28 people die in avalanches every winter in the United States. In 2001 it was reported that globally an average of 150 people die each year from avalanches. From 2014 to 2024, the majority of those killed in avalanches in the United States were skiing, followed by snowmobiling, snowshoeing/climbing/hiking, and snowboarding. Three of the deadliest recorded avalanches have killed over a thousand people each.

Additional Information

An avalanche is a mass of material moving rapidly down a slope. An avalanche is typically triggered when material on a slope breaks loose from its surroundings; this material then quickly collects and carries additional material down the slope. There are various kinds of avalanches, including rock avalanches (which consist of large segments of shattered rock), ice avalanches (which typically occur in the vicinity of a glacier), and debris avalanches (which contain a variety of unconsolidated materials, such as loose stones and soil). Snow avalanches, the subject of the remainder of this article, constitute a relatively common phenomenon in many mountainous areas.

The size of a snow avalanche can range from a small shifting of loose snow (called sluffing) to the displacement of enormous slabs of snow. In a slab avalanche, the mass of descending snow may reach a speed of 130 km (80 miles) per hour and is capable of destroying forests and small villages in its path. Avalanches kill about 150 people a year in North America and Europe. Most of those killed are backcountry skiers, climbers, snowshoers, and snowmobilers who accidentally trigger an avalanche and become buried in the snow. The number of North American fatalities has risen with the increasing popularity of winter sports. Avalanches also have been triggered intentionally in warfare to kill enemy troops. In World War I, during fighting in the Alps on the Austrian-Italian front in December 1916, more than 10,000 troops were killed in a single day by avalanches triggered by artillery fired onto slopes of unstable snow.

Avalanche conditions

The occurrence of an avalanche depends on the interaction of mountainous terrain, weather conditions, snowpack conditions, and a trigger. Slab avalanches typically occur on slopes of 30 to 50 degrees. On slopes that are less steep, there is generally insufficient gravitational force to overcome frictional resistance and cause the displacement of a snow slab; on steeper slopes snow tends to sluff off. However, slab avalanches do occur on steeper slopes in climates with dense, wet snowfall. An important feature of terrain that can lead to an avalanche is the lack of objects that serve to anchor the snow, such as trees. Slab avalanches will not occur on slopes with sufficiently dense tree cover, which is about 1,000 conifer trees per hectare (400 per acre) on steep slopes and about half that density on gentler slopes. Other objects that can anchor the snow are large exposed rock outcroppings and rocks that are large enough to stick up through the snow cover. The probability of avalanches may be increased or decreased by several other terrain features, such as slope shape, a slope’s exposure to sun and wind, and elevation.

Certain types of weather lead directly to dangerous avalanche conditions—that is, to a high risk that an avalanche will occur. Slab avalanches are commonly associated with heavy snowfall and strong wind. With heavy snowfall, weaknesses in the existing snowpack may become overloaded, and the snow may fall so quickly that the new snow is unable to bond to the snow beneath it. Strong wind tends to break down the snow into ice crystals that readily bond together into a slab, and it also transports snow onto the lee sides of ridges and gullies, where wind-loaded snow leads to more frequent avalanching. Other meteorological conditions that can quickly lead to dangerous avalanche conditions are rapidly rising air temperature and rainfall on existing snow cover.

A snowpack consists of layers of snow, each formed at different times. Once the snow is on the ground, the ice crystals undergo physical changes that differentiate the layers deeper in the snowpack from those on top. These changes can weaken a layer underlying a cohesive slab of snow and thereby help set up a slab avalanche.

Once the conditions for an avalanche exist, a trigger simply applies sufficient force to release it. Natural triggers include new snowfall, wind-deposited snow, and a falling cornice (an overhanging mass of windblown ice or snow extending from a ridge). Other triggers include skiers, snowmobilers, snowboarders, and explosive blasts. Contrary to popular belief, noises such as yelling, yodeling, or the sound of a snowmobile will not trigger avalanches. Research has shown that only the loudest sonic booms under the most sensitive avalanche conditions might be able to trigger a slide.

Prediction and protective measures

In order to reduce fatalities and to protect villages and roads, people attempt to predict and prevent avalanches. Accurate avalanche prediction requires an experienced avalanche forecaster who often works both in the field to gather snowpack information and in the office with sophisticated tools such as remotely accessed weather data, detailed historical weather and avalanche databases, weather models, and avalanche-forecasting models. Avalanche forecasters combine their historical knowledge of past conditions with their knowledge of the affected terrain, current weather, and current snowpack conditions to predict when and where avalanches are most likely to occur. Such forecasting work typically takes place along mountain highways, adjacent to potentially affected villages, at ski areas, and in terrain heavily used for backcountry skiing and snowmobiling.




#2439 2025-01-26 00:00:55

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,153

Re: Miscellany

2338) Iceberg

Gist

An iceberg is a large floating mass of ice detached from a glacier.

Summary

An iceberg is a piece of fresh water ice more than 15 metres (16 yards) long that has broken off a glacier or an ice shelf and is floating freely in open water. Smaller chunks of floating glacially derived ice are called "growlers" or "bergy bits". Much of an iceberg is below the water's surface, which led to the expression "tip of the iceberg" to illustrate a small part of a larger unseen issue. Icebergs are considered a serious maritime hazard.
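
A small worked example of why so much of a berg is hidden, using typical densities (actual values vary with the ice's air content and the water's salinity):

# Archimedes' principle: a floating berg displaces its own weight of water,
# so the submerged fraction is the ratio of ice to seawater density.
rho_ice = 900.0        # kg/m^3, glacial ice with trapped air bubbles
rho_seawater = 1025.0  # kg/m^3
print(f"{rho_ice / rho_seawater:.0%} of the volume sits below the waterline")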

Icebergs vary considerably in size and shape. Icebergs that calve from glaciers in Greenland are often irregularly shaped, while Antarctic ice shelves often produce large tabular (table top) icebergs. The largest iceberg in recent history, named B-15, was measured at nearly 300 by 40 kilometres (186 by 25 mi) in 2000. The largest iceberg on record was an Antarctic tabular iceberg measuring 335 by 97 kilometres (208 by 60 mi) sighted 240 kilometres (150 mi) west of Scott Island, in the South Pacific Ocean, by the USS Glacier on November 12, 1956. This iceberg was larger than Belgium.

Details

An iceberg is a floating mass of freshwater ice that has broken from the seaward end of either a glacier or an ice shelf. Icebergs are found in the oceans surrounding Antarctica, in the seas of the Arctic and subarctic, in Arctic fjords, and in lakes fed by glaciers.

Origin of icebergs

Antarctic icebergs

Icebergs of the Antarctic calve from floating ice shelves and are a magnificent sight, forming huge, flat “tabular” structures. A typical newly calved iceberg of this type has a diameter that ranges from several kilometres to tens of kilometres, a thickness of 200–400 metres (660–1,320 feet), and a freeboard, or the height of the “berg” above the waterline, of 30–50 metres (100–160 feet). The mass of a tabular iceberg is typically several billion tons. Floating ice shelves are a continuation of the flowing mass of ice that makes up the continental ice sheet. Floating ice shelves fringe about 30 percent of Antarctica’s coastline, and the transition area where floating ice meets ice that sits directly on bedrock is known as the grounding line. Under the pressure of the ice flowing outward from the centre of the continent, the ice in these shelves moves seaward at 0.3–2.6 km (0.2–1.6 miles) per year. The exposed seaward front of the ice shelf experiences stresses from subshelf currents, tides, and ocean swell in the summer and moving pack ice during the winter. Since the shelf normally possesses cracks and crevasses, it will eventually fracture to yield freely floating icebergs. Some minor ice shelves generate large iceberg volumes because of their rapid velocity; the small Amery Ice Shelf, for instance, produces 31 cubic km (about 7 cubic miles) of icebergs per year as it drains about 12 percent of the east Antarctic Ice Sheet.
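
As a rough order-of-magnitude check of the mass figure, model a berg as a flat disc with illustrative dimensions drawn from the ranges above:

# Model the berg as a disc: volume = pi * r^2 * thickness.
import math

diameter_m = 5000.0   # 5 km across (assumed, within "several kilometres")
thickness_m = 300.0   # within the 200-400 m range quoted above
rho_ice = 900.0       # kg/m^3, typical glacial ice

volume_m3 = math.pi * (diameter_m / 2) ** 2 * thickness_m
mass_tonnes = volume_m3 * rho_ice / 1000.0
print(f"{mass_tonnes:.1e} tonnes")  # about 5e+09, i.e. several billion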

Iceberg calving may be caused by ocean wave action, contact with other icebergs, or the behaviour of melting water on the upper surface of the berg. With the use of tiltmeters (tools that can detect a change in the angle of the slope of an object), scientists monitoring iceberg-calving events have been able to link the breaking stress occurring near the ice front to long storm-generated swells originating tens of thousands of kilometres away. This bending stress is enhanced in the case of glacier tongues (long narrow floating ice shelves produced by fast-flowing glaciers that protrude far into the ocean). The swell causes the tongue to oscillate until it fractures. In addition, on a number of occasions, iceberg calving has been observed immediately after the collision of another iceberg with the ice front. Furthermore, the mass breakout of icebergs from Larsen Ice Shelf between 1995 and 2002, though generally ascribed to global warming, is thought to have occurred because summer meltwater on the surface of the shelf filled nearby crevasses. As the liquid water refroze, it expanded and produced fractures at the bases of the crevasses. This phenomenon, known as frost wedging, caused the shelf to splinter in several places and brought about the disintegration of the shelf.

Arctic icebergs

Most Arctic icebergs originate from the fast-flowing glaciers that descend from the Greenland Ice Sheet. Many glaciers are funneled through gaps in the chain of coastal mountains. The irregularity of the bedrock and valley wall topography both slows and accelerates the progress of glaciers. These stresses cause crevasses to form, which are then incorporated into the structure of the icebergs. Arctic bergs tend to be smaller and more randomly shaped than Antarctic bergs and also contain inherent planes of weakness, which can easily lead to further fracturing. If their draft exceeds the water depth of the submerged sill at the mouth of the fjord, newly calved bergs may stay trapped for long periods in their fjords of origin. Such an iceberg will change shape, especially in summer as the water in the fjord warms, through the action of differential melt rates occurring at different depths. Such variations in melting can affect iceberg stability and cause the berg to capsize. Examining the profiles of capsized bergs can help researchers detect the variation of summer temperature occurring at different depths within the fjord. In addition, the upper surfaces of capsized bergs may be covered by small scalloped indentations that are by-products of small convection cells that form when ice melts at the ice-water interface.

The Arctic Ocean’s equivalent of the classic tabular iceberg of Antarctic waters is the ice island. Ice islands can be up to 30 km (19 miles) long but are only some 60 metres (200 feet) thick. The main source of ice islands used to be the Ward Hunt Ice Shelf on Canada’s Ellesmere Island near northwestern Greenland, but the ice shelf has been retreating as ice islands and bergs continue to calve from it. (The ice shelf is breaking into pieces faster than new ice can be formed.) Since the beginning of observations in the 1950s, the Ward Hunt Ice Shelf has virtually disappeared. The most famous of its ice islands was T-3, which was so named because it was the third in a series of three radar targets detected north of Alaska. This ice island carried a manned scientific station from 1952 to 1974. Ice islands produced by Ellesmere Island calve into the Beaufort Gyre (the clockwise-rotating current system in the Arctic Ocean) and may make several circuits of the Canada Basin before exiting the Arctic Ocean via Fram Strait (an ocean passage between Svalbard and Greenland).

A third source of ice islands, one that has become more active, is northeastern Greenland. The Flade Isblink, a small ice cap on Nordostrundingen in the northeastern corner of Greenland, calves thin tabular ice islands with clearly defined layering into Fram Strait. Observations in 1984 showed 60 grounded bergs with freeboards of 12–15 metres (40–50 feet) off Nordostrundingen in 37–53 metres (120–175 feet) of water. Similar bergs acted as pinning points for pressure ridges, which produced a blockage of the western part of Fram Strait for several years during the 1970s. In 2003 the multiyear cover of fast ice (see sea ice) along the northeastern Greenland coast broke out. This allowed a huge number of tabular icebergs to emerge from the fast-flowing Nioghalvfjerdsfjorden Glacier and Zachariae Isstrøm in northeastern Greenland. Some of these reached the Labrador Sea two to three years later, while others remained grounded in 80–110 metres (260–360 feet) of water on the Greenland shelf.

Iceberg structure

An Antarctic tabular iceberg retains the physical properties of the outer part of the parent ice shelf. The shelf has the same layered structure as the continental ice sheet from which it flowed. All three features are topped with recently fallen snow that is underlain by older annual layers of increasing density. Annual layers are often clearly visible on the vertical side of a new tabular berg, which implies that the freeboard of the iceberg is mainly composed of compressed snow rather than ice. Density profiles through newly calved bergs show that at the surface of the berg the density might be only 400 kg per cubic metre (25 pounds per cubic foot)—pure ice has a density of 920 kg per cubic metre (57 pounds per cubic foot)—and both air and water may pass through the spaces between the crystal grains. Only when the density reaches 800 kg per cubic metre (50 pounds per cubic foot) deep within the berg do the air channels collapse to form air bubbles. At this point, the material can be properly classified as “ice,” whereas the lower-density material above the ice is more properly called “firn.” Corresponding to a layer some 150–200 years old and coinciding approximately with the waterline, the firn-ice transition occurs about 40–60 metres (130–200 feet) below the surface of the iceberg. Deeper still, as density and pressure increase, the air bubbles become compressed. Within the Greenland Ice Sheet, pressures of 10–15 atmospheres (10,100–15,200 millibars) have been measured; the resulting air bubbles tend to be elongated, possessing lengths up to 4 mm (0.2 inch) and diameters of 0.02–0.18 mm (0.0008–0.007 inch). In Antarctic ice shelves and icebergs, the air bubbles are more often spherical or ellipsoidal and possess a diameter of 0.33–0.49 mm (0.01–0.02 inch). The size of the air bubbles decreases with increasing depth within the ice.

As soon as an iceberg calves, it starts to warm relative to its parent ice shelf. This warming accelerates as the berg drifts into more temperate regions, especially when it drifts free of the surrounding pack ice. Once the upper surface of the berg begins to melt, the section above the waterline warms relatively quickly to temperatures that approach the melting point of ice. Meltwater at the surface can percolate through the permeable uppermost 40–60 metres (130–200 feet) and refreeze at depth. This freezing releases the berg’s latent heat, and the visible part of the berg becomes a warm mass that has little mechanical strength; it is composed of firn and thus can be easily eroded. The remaining mechanical strength of the iceberg is contained in the “cold core” below sea level, where temperatures remain at −15 to −20 °C (5 to −4 °F). In the cold core, heat transfer is inhibited owing to the lack of percolation and refreezing.

Iceberg size and shape

For many years, the largest reliably measured Antarctic iceberg was the one first observed off Clarence Island (one of the South Shetland Islands) by the whale catcher Odd I in 1927; it was 180 km (110 miles) long, was approximately square, and possessed a freeboard of 30–40 metres (100–130 feet). In 1956 an iceberg was sighted by USS Glacier off Scott Island (a small island about 500 km [300 miles] northeast of Victoria Land in the Ross Sea) with an unconfirmed length of 335 km (210 miles) and a width of 100 km (60 miles). However, recently there have been many calvings of giant icebergs in the Ross and Weddell seas with dimensions that have been measured accurately by satellite. In 2000 iceberg B-15 broke off the Ross Ice Shelf with an initial length of 295 km (about 185 miles). Although B-15 broke into two fragments after a few days, B-15A—the larger portion, measuring 120 km (75 miles) long by 20 km (12 miles) wide—obstructed the entrance to McMurdo Sound and prevented the pack ice in the sound from clearing out in the summer. In October 2005 B-15A broke up into several large pieces off Cape Adare in Victoria Land because of the impact of distant swell. Iceberg C-19 was an even larger but narrow iceberg that broke off the Ross Ice Shelf in May 2002. It fragmented before it could drift far.

The Antarctic Peninsula has been warming significantly in recent decades (by 2.5 °C [4.5 °F] since the 1950s). Three ice shelves on the peninsula, the Wordie and Wilkins ice shelves on the west side of the peninsula and the Larsen Ice Shelf on the east side, have been disintegrating. This has caused the release of tremendous numbers of icebergs. The Larsen Ice Shelf has retreated twice since 2000; each event involved the fracture and release of a vast area of shelf ice in the form of multiple gigantic icebergs and innumerable smaller ones. The breakout of 3,250 square km (1,250 square miles) of shelf over 35 days in early 2002 effectively ended the existence of the Larsen B portion of the shelf. Although these events received much attention and were thought to be symptomatic of global warming, the Ross Sea sector does not seem to be warming at present. It is likely that the emission of giant icebergs in this zone was an isolated event. Intense iceberg outbreaks, such as the one described above, may not necessarily be occurring with a greater frequency than in the past. Rather, they are more easily detected with the aid of satellites.

In the typically ice-free Southern Ocean, surveys of iceberg diameters show that most bergs have a typical diameter of 300–500 metres (1,000–1,600 feet), although a few exceed 1 km (0.6 mile). It is possible to calculate the flexural (bending) response of a tabular iceberg to long Southern Ocean swells, and it has been found that a serious storm is capable of breaking down most bergs larger than 1 km into fragments.

Arctic bergs are generally smaller than Antarctic bergs, especially when newly calved. The largest recorded Arctic iceberg (excluding ice islands) was observed off Baffin Island in 1882; it was 13 km (8 miles) long by 6 km (4 miles) wide and possessed a freeboard, or the height of the berg above the waterline, of 20 metres (65 feet). Most Arctic bergs are much smaller and have a typical diameter of 100–300 metres (330–1,000 feet). Owing to their origin in narrow, fast-flowing glaciers, many Arctic bergs calve into random shapes that often develop further as they fracture and capsize. Antarctic bergs also evolve by the erosion of the weak freeboard or via further calving into tilted shapes. Depending on the local shape of the ice shelf at calving, the surfaces of icebergs, even while still predominantly tabular, may be domed or concave.

Erosion and melting

Most of the erosion taking place on Antarctic icebergs occurs after the bergs have emerged into the open Southern Ocean. Melt and percolation through the weak firn layer bring most of the freeboard volume to the melting point. This allows ocean wave action around the edges to penetrate the freeboard portion of the berg. Erosion occurs both mechanically and through the enhanced transport of heat from ocean turbulence. The result is a wave cut that can penetrate for several metres into the berg. The snow and firn above it may collapse to create a growler (a floating block about the size of a grand piano) or a bergy bit (a larger block about the size of a small house). At the same time, the turbulence level is enhanced around existing irregularities such as cracks and crevasses. Waves eat their way into these features, causing cracks to grow into caves whose unsupported roofs may also collapse. Through these processes, the iceberg can evolve into a drydock or a pinnacled berg. (Both types are composed of apparently independent freeboard elements that are linked below the waterline.) Such a berg may look like a megalithic stone circle with shallow water in the centre.

In the case of Arctic icebergs, which often suffer from repeated capsizes, there is no special layer of weak material. Instead, the whole berg gradually melts at a rate dependent on the salinity (the salt concentration present in a volume of water) and temperature at various depths in the water column and on the velocity of the berg relative to the water near the surface.

In Arctic icebergs, erosion often leads to a loss of stability and capsizing. For an Antarctic tabular berg, complete capsize is uncommon, though tiltmeter measurements have shown that some long, narrow bergs may roll completely over the course of a very long period. More common is a shift to a new position of stability, which creates a new waterline for wave erosion. When tabular icebergs finally fragment into smaller pieces, these smaller individual bergs melt faster, because a larger proportion of their surface relative to their volume is exposed to the water.

Iceberg distribution and drift trajectories

In the Antarctic, a freshly calved iceberg usually begins by moving westward in the Antarctic Coastal Current, with the coastline on its left. Since its trajectory is also turned to the left by the Coriolis force owing to Earth’s rotation, it may run aground and remain stationary for years before moving on. For instance, a large iceberg called Trolltunga calved from the Fimbul Ice Shelf near the Greenwich meridian in 1967, and it became grounded in the southern Weddell Sea for five years before continuing its drift. If a berg can break away from the coastal current (as Trolltunga had done by late 1977), it enters the Antarctic Circumpolar Current, or West Wind Drift. This eastward-flowing system circles the globe at latitudes of 40°–60° S. Icebergs tend to enter this current system at four well-defined longitudes or “retroflection zones”: the Weddell Sea, east of the Kerguelen Plateau at longitude 90° E, west of the Balleny Islands at longitude 150° E, and in the northeastern Ross Sea. These zones reflect the partial separation of the surface water south of the Antarctic Circumpolar Current into independently circulating gyres, and they imply that icebergs found at low latitudes may originate from specific sectors of the Antarctic coast.

Once in the Antarctic Circumpolar Current, the iceberg’s track is generally eastward, driven by both the current and the wind. Also, the Coriolis force pushes the berg slightly northward. The berg will then move crabwise in a northeasterly direction so that it can end up at relatively low latitudes and in relatively warm waters before disintegrating. In November 2006, for instance, a chain of four icebergs was observed just off Dunedin (at latitude 46° S) on New Zealand’s South Island. Under extreme conditions, such as its capture by a cold eddy, an iceberg may succeed in reaching extremely low latitudes. For example, clusters of bergs with about 30 metres (100 feet) of freeboard were sighted in the South Atlantic at 35°50′ S, 18°05′ E in 1828. In addition, icebergs have been responsible for the disappearance of innumerable ships off Cape Horn.

In the Arctic Ocean, the highest-latitude sources of icebergs are the Svalbard archipelago north of Norway and the islands of the Russian Arctic. The iceberg production from these sources is not large—an estimated 6.28 cubic km (1.5 cubic miles) per year out of a total of 250–470 cubic km (60–110 cubic miles) for the entire Arctic region. An estimated 26 percent originates in Svalbard, 36 percent stems from Franz Josef Land, 32 percent is added by Novaya Zemlya, about 6 percent begins in Severnaya Zemlya, and 0.3 percent comes from Ushakov Island. Many icebergs from these sources move directly into the shallow Barents or Kara seas, where they run aground. Looping trails of broken pack ice are left as the bergs move past the obstacles. Other bergs pass through Fram Strait and into the East Greenland Current. As these icebergs pass down the eastern coast of Greenland, their numbers are augmented by others produced by tidewater glaciers, especially those from Scoresby Sund. Scoresby Sund is an inlet that is large enough to have an internal gyral circulation. Water driven by the East Greenland Current enters on the north side of the inlet and flows outward on the south side. This pattern encourages the flushing of icebergs from the fjord. In contrast, narrower fjords offer more opportunities for icebergs to run aground; they also experience an estuarine circulation pattern in which outward flow at the surface is nearly balanced by an inward flow at depth. An iceberg feels both currents because of its draft and thus does not move seaward as readily as sea ice generated in the fjord.

As the increased flux of icebergs reaches Cape Farewell, most bergs turn into Baffin Bay, although a few “rogue” icebergs continue directly into the Labrador Sea, especially if influenced by prolonged storm activity. Icebergs entering Baffin Bay first move northward in the West Greenland Current and are strongly reinforced by icebergs from the prolific West Greenland glaciers. About 10,000 icebergs are produced in this region every year. Bergs then cross to the west side of the bay, where they move south in the Baffin Island Current toward Labrador. At the northern end of Baffin Bay, in Melville Bay, lies an especially productive iceberg source: the front of the Humboldt Glacier, the largest glacier in the Northern Hemisphere.

Some icebergs take only 8–15 months to move from Lancaster Sound to Davis Strait, but the total passage around Baffin Bay can take three years or more, owing to groundings and inhibited motion when icebergs are embedded in winter sea ice. The flux of bergs that emerges from Davis Strait into the Labrador Current, where the final part of the bergs’ life cycle occurs, is extremely variable. The number of bergs decreases linearly with latitude. This reduction is primarily due to melting and breakup, or to grounding followed by breakup. On average, 473 icebergs per year manage to cross the 48° N parallel and enter the zone where they are a danger to shipping—though numbers vary greatly from year to year. Surviving bergs will have lost at least 85 percent of their original mass. They are fated to melt on the Grand Banks or when they reach the “cold wall,” or surface front, that separates the Labrador Current from the warm Gulf Stream between latitudes 40° and 44° N.

Much work has gone into modeling the patterns of iceberg drift, especially because of the need to divert icebergs away from oil rigs. It is often difficult to predict an iceberg’s drift speed and direction, given the wind and current velocities. An iceberg is affected by the frictional drag of the wind on its smooth surfaces (skin friction drag) and upon its protuberances (form drag). Likewise, the drag of the current acts upon its immersed surfaces; however, the current changes direction with increasing depth, owing to an effect known as the Ekman spiral. Another important factor governing an iceberg’s speed and direction is the Coriolis force, which diverts icebergs toward the right of their track in the Northern Hemisphere and toward the left in the Southern Hemisphere. This force is typically stronger on icebergs than on sea ice, because icebergs have a larger mass per unit of sea-surface area. As a result, it is unusual for icebergs to move in the same direction as sea ice. Typically, their direction of motion relative to the surface wind is some 40°–50° to the right (Northern Hemisphere) or left (Southern Hemisphere), and icebergs progress at about 3 percent of the wind speed.
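
The rule of thumb quoted above (drift at about 3 percent of the wind speed, deflected some 40°–50° from the wind direction) can be written down as a minimal sketch. This is a toy illustration of that empirical rule only, not an operational drift model; the fixed 45° deflection and the bearing convention are assumptions.

def iceberg_drift(wind_speed_ms, wind_toward_deg, hemisphere="N"):
    # Empirical rule from the text: speed ~3% of wind speed; direction
    # deflected ~40-50 deg (45 deg assumed here) to the right of the wind
    # in the Northern Hemisphere and to the left in the Southern.
    # Bearings are "toward" directions in degrees clockwise from north.
    deflection = 45.0 if hemisphere.upper() == "N" else -45.0
    drift_speed = 0.03 * wind_speed_ms
    drift_toward = (wind_toward_deg + deflection) % 360.0
    return drift_speed, drift_toward

speed, bearing = iceberg_drift(15.0, 90.0, "N")  # 15 m/s wind blowing toward the east
print(f"drift ~ {speed:.2f} m/s toward {bearing:.0f} deg")  # ~0.45 m/s toward 135 deg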

Iceberg scour and sediment transport

When an iceberg runs aground, it can plow a furrow several metres deep in the seabed that may extend for tens of kilometres. Iceberg scour marks have been known from the Labrador Sea and Grand Banks since the early 1970s. In the Arctic, many marks are found at depths of more than 400 metres (1,300 feet), whereas the deepest known sill, or submerged ridge, within Greenland fjords is 220 metres (about 725 feet) deep. This unsolved anomaly suggests that icebergs were much deeper in the past or that sedimentation rates within the fjords are so slow that marks dating from periods of reduced sea level have not yet been filled in. It is also possible that an irregular berg can increase its draft by capsizing, though model studies suggest that the maximum gain is only a few percent. Since not all iceberg-producing fjords have been adequately surveyed, another possibility is that Greenland fjords exist with entrances of greater depth. In the Antarctic, the first scours were found in 1976 at longitude 16° W off the coast of Queen Maud Land in the eastern Weddell Sea, and further discoveries were made off Wilkes Land and Cape Hallett at the eastern entrance to the Ross Sea.

In addition, iceberg scour marks have been found on land. On King William Island in the Canadian Arctic, scour marks have been identified in locations where the island rose out of the sea—the result of a postglacial rebound after the weight of the Laurentide Ice Sheet was removed. Furthermore, Canadian geologist Christopher Woodworth-Lynas has found evidence of iceberg scour marks in the satellite imagery of Mars. Scour marks are strong indicators of past water flow.

Observations indicate that long furrows like plow marks are made when an iceberg is driven by sea ice, whereas a freely floating berg makes only a short scour mark or a single depression. Apart from simple furrows, “washboard patterns” have been seen. It is thought that these patterns are created when a tabular berg runs aground on a wide front and is then carried forward by tilting and plowing on successive tides. Circular depressions, thought to be made when an irregular iceberg touches bottom with a small “foot” and then swings to and fro in the current, have also been observed. Grounded bergs have a deleterious effect on the ecosystem of the seabed, often scraping it clear of all life.

Both icebergs and pack ice transport sediment in the form of pebbles, cobbles, boulders, finer material, and even plant and animal life thousands of kilometres from their source area. Arctic icebergs often carry a top burden of dirt from the eroded sides of the valley down which the parent glacier ran, whereas both Arctic and Antarctic bergs carry stones and dirt on their underside. Stones are lifted from the glacier bed and later deposited out at sea as the berg melts. The presence of ice-rafted debris (IRD) in seabed-sediment cores is an indicator that icebergs, sea ice, or both have occurred at that location during a known time interval. (The age of the deposit is indicated by the depth in the sediment at which the debris is found.) Noting the locations of ice-rafted debris is a very useful method of mapping the distribution of icebergs and thus the cold surface water occurring during glacial periods and at other times in the geologic past. IRD mapping surveys have been completed for the North Atlantic, North Pacific, and Southern oceans. The type of rock in the debris can also be used to identify the source region of the transporting iceberg. Caution must be used in such interpretation because, even in the modern era, icebergs can spread far beyond their normal limits under exceptional conditions. For instance, reports of icebergs off the coast of Norway in spring 1881 coincided with the most extreme advance ever recorded of East Greenland sea ice. It is likely that the bergs were carried eastward along with the massive production and outflow of Arctic sea ice.

It is ice-rafted plant life that gives the occasional exotic colour to an iceberg. Bergs are usually white (the colour of snow or bubbly ice) or blue (the colour of glacial ice that is relatively bubble-free). A few deep green icebergs are seen in the Antarctic; it is believed that these are formed when seawater rich in organic matter freezes onto the bottoms of the ice shelves.

Additional Information

An iceberg is ice that broke off from glaciers or shelf ice and is floating in open water.

To be classified as an iceberg, the ice must extend more than 16 feet above sea level, be 98–164 feet thick, and cover an area of at least 5,382 square feet.

There are smaller pieces of ice known as “bergy bits” and “growlers.” Bergy bits and growlers can originate from glaciers or shelf ice, and may also be the result of a large iceberg that has broken up. A bergy bit is a medium to large fragment of ice. Its height is generally greater than three feet but less than 16 feet above sea level and its area is normally about 1,076–3,229 square feet. Growlers are smaller fragments of ice and are roughly the size of a truck or grand piano. They extend less than three feet above the sea surface and occupy an area of about 215 square feet.
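
For illustration only, the height and area thresholds above can be expressed as a small classifier in Python. This sketch uses nothing but the figures quoted in this post; operational classification involves further criteria.

def classify_floating_ice(height_ft, area_sqft):
    # Thresholds taken from the figures quoted above.
    if height_ft > 16 and area_sqft >= 5382:
        return "iceberg"
    if 3 < height_ft <= 16:
        return "bergy bit"
    if height_ft <= 3:
        return "growler"
    return "unclassified"  # e.g., tall but too small in area to count as an iceberg

print(classify_floating_ice(20, 6000))  # iceberg
print(classify_floating_ice(10, 2000))  # bergy bit
print(classify_floating_ice(2, 200))    # growler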

Icebergs are also classified by shape, most commonly being either tabular or non-tabular. Tabular icebergs have steep sides and a flat top. Non-tabular icebergs have different shapes, with domes and spires.

Icebergs are monitored worldwide by the U.S. National Ice Center (NIC). NIC produces analyses and forecasts of Arctic, Antarctic, Great Lakes, and Chesapeake Bay ice conditions. NIC is the only organization that names and tracks all Antarctic icebergs.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2440 2025-01-26 18:18:50

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,153

Re: Miscellany

2339) Mariana Trench

Gist

The Mariana Trench reaches more than 7 miles below the surface of the Pacific Ocean.

The Mariana Trench is the deepest oceanic trench on Earth and home to the two lowest points on the planet.

The crescent-shaped trench is in the Western Pacific, just east of the Mariana Islands near Guam. The region surrounding the trench is noteworthy for many unique environments, including vents bubbling up liquid sulfur and carbon dioxide, active mud volcanoes and marine life adapted to pressures 1,000 times that at sea level.

The Challenger Deep, in the southern end of the Mariana Trench (sometimes called the Marianas Trench), is the deepest spot in the ocean. Its depth is difficult to measure from the surface, but in 2010, the National Oceanic and Atmospheric Administration used sound pulses sent through the ocean and pegged the Challenger Deep at 36,070 feet (10,994 meters). A 2021 estimate using pressure sensors found the deepest spot in Challenger Deep was 35,876 feet (10,935 m). Other modern estimates vary by less than 1,000 feet (305 m).
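
The principle behind such acoustic soundings is that depth equals half the two-way travel time of a sound pulse multiplied by the speed of sound in seawater. The sketch below is only an illustration of that principle, not NOAA's actual processing (which corrects for the variation of sound speed with temperature, salinity, and pressure); the 1,500 m/s figure is an assumed rough average.

def depth_from_echo(two_way_time_s, sound_speed_ms=1500.0):
    # Depth = (speed of sound x round-trip travel time) / 2.
    # 1,500 m/s is an assumed rough average for seawater.
    return sound_speed_ms * two_way_time_s / 2.0

# A pulse returning after about 14.66 s implies a depth close to the
# 10,994-metre figure quoted above:
print(f"{depth_from_echo(14.66):,.0f} m")  # ~10,995 m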

Summary

Mariana Trench, deep-sea trench in the floor of the western North Pacific Ocean, the deepest such trench known on Earth, located mostly east as well as south of the Mariana Islands. It is part of the western Pacific system of oceanic trenches coinciding with subduction zones—points where two adjacent tectonic plates collide, one being forced below the other. An arcing depression, the Mariana Trench stretches for more than 1,580 miles (2,540 km) with a mean width of 43 miles (69 km). The greatest depths are reached in Challenger Deep, a smaller steep-walled valley on the floor of the main trench southwest of Guam. The Mariana Trench, which is situated within the territories of the U.S. dependencies of the Northern Mariana Islands and Guam, was designated a U.S. national monument in 2009.

Measuring the greatest depths in the Mariana Trench is an exceedingly difficult task, given the technical challenges of delivering instrumentation to such a remote location and then obtaining accurate readings. The first attempt was made in 1875 during the Challenger Expedition (1872–76), when a sounding of 26,850 feet (8,184 metres) was obtained near the southern end of the trench. In 1899 Nero Deep (31,693 feet [9,660 metres]) was discovered southeast of Guam. That sounding was not exceeded until a 32,197-foot (9,813-metre) hole was found in the vicinity 30 years later. In 1957, during the International Geophysical Year, the Soviet research ship Vityaz sounded a new world record depth of 36,056 feet (10,990 metres) in Challenger Deep. That value was later increased to 36,201 feet (11,034 metres). Since then several measurements of the Challenger Deep have been made, using increasingly sophisticated electronic equipment. Notable among these is the depth of 35,840 feet (10,924 metres) reported by a Japanese expedition in 1984 and one of 36,070 feet (10,994 metres) obtained by a U.S. research team in 2011. In addition, another deep hole—originally called HMRG Deep (for Hawaii Mapping Research Group, the discoverers of the location) and later renamed Sirena Deep—is situated south of Guam and east of Challenger Deep. First encountered in 1997, its depth has been reported variously as 34,911 and 35,463 feet (10,641 and 10,809 metres).

The first descent to the bottom of the Mariana Trench took place on January 23, 1960. The Italian-built, U.S. Navy-operated bathyscaphe Trieste—with Swiss ocean engineer Jacques Piccard (who helped his father, Auguste Piccard, design the bathyscaphe) and U.S. naval officer Don Walsh aboard—made a record dive to 35,814 feet (10,916 metres) in Challenger Deep. The next person to descend into that location did so more than 50 years after Piccard and Walsh. On March 26, 2012, Canadian filmmaker James Cameron piloted the submersible Deepsea Challenger (which he had helped design) to 35,756 feet (10,898 metres), in the process establishing a new world record depth for a solo descent.

Details

The Mariana Trench is an oceanic trench located in the western Pacific Ocean, about 200 kilometres (124 mi) east of the Mariana Islands; it is the deepest oceanic trench on Earth. It is crescent-shaped and measures about 2,550 km (1,580 mi) in length and 69 km (43 mi) in width. The maximum known depth is 10,984 ± 25 metres (36,037 ± 82 ft; 6,006 ± 14 fathoms; 6.825 ± 0.016 mi) at the southern end of a small slot-shaped valley in its floor known as the Challenger Deep. The deepest point of the trench is more than 2 km (1.2 mi) farther from sea level than the peak of Mount Everest.

At the bottom of the trench, the water column above exerts a pressure of 1,086 bar (15,750 psi), more than 1,071 times the standard atmospheric pressure at sea level. At this pressure, the density of water is increased by 4.96%. The temperature at the bottom is 1 to 4 °C (34 to 39 °F).
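
As a rough consistency check (an illustration added here, with an assumed constant density), the simple hydrostatic relation P = ρgh already lands within about 2 percent of the quoted pressure; the exact figure requires integrating the density, which, as noted above, increases by roughly 5 percent at depth.

rho = 1027.0  # kg/m^3, assumed typical near-surface seawater density
g = 9.81      # m/s^2
h = 10984.0   # m, maximum known depth quoted above

pressure_pa = rho * g * h
print(f"P ~ {pressure_pa / 1e5:,.0f} bar")             # ~1,107 bar vs 1,086 quoted
print(f"~ {pressure_pa / 101325:,.0f} x atmospheric")  # ~1,092x vs 1,071 quoted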

In 2009, the Mariana Trench was established as a US National Monument, Mariana Trench Marine National Monument.

One-celled organisms called monothalamea have been found in the trench at a record depth of 10.6 km (35,000 ft; 6.6 mi) below the sea surface by researchers from the Scripps Institution of Oceanography. Data has also suggested that microbial life forms thrive within the trench.

Etymology

The Mariana Trench is named after the nearby Mariana Islands, which are named Las Marianas in honor of Spanish Queen Mariana of Austria. The islands are part of the island arc that is formed on an over-riding plate, called the Mariana plate (also named for the islands), on the western side of the trench.

Geology

The Mariana Trench is part of the Izu–Bonin–Mariana subduction system that forms the boundary between two tectonic plates. In this system, the western edge of one plate, the Pacific plate, is subducted (i.e., thrust) beneath the smaller Mariana plate that lies to the west. Crustal material at the western edge of the Pacific plate is some of the oldest oceanic crust on Earth (up to 170 million years old), and is, therefore, cooler and denser; hence its great height difference relative to the higher-riding (and younger) Mariana plate. The deepest area at the plate boundary is the Mariana Trench proper.

The movement of the Pacific and Mariana plates is also indirectly responsible for the formation of the Mariana Islands. These volcanic islands are caused by flux melting of the upper mantle due to the release of water that is trapped in minerals of the subducted portion of the Pacific plate.

Research history

The trench was first sounded during the Challenger expedition in 1875 using a weighted rope, which recorded a depth of 4,475 fathoms (8,184 metres; 26,850 feet). In 1877, a map was published called Tiefenkarte des Grossen Ozeans ("Depth map of the Great Ocean") by Petermann, which showed a Challenger Tief ("Challenger deep") at the location of that sounding. In 1899, USS Nero, a converted collier, recorded a depth of 5,269 fathoms (9,636 metres; 31,614 feet).

In 1951, under Chief Scientist Thomas Gaskell, Challenger II surveyed the trench using echo sounding, a much more precise and vastly easier way to measure depth than the sounding equipment and drag lines used in the original expedition. During this survey, the deepest part of the trench was recorded when the Challenger II measured a depth of 5,960 fathoms (10,900 metres; 35,760 feet) at 11°19′N 142°15′E, known as the Challenger Deep.

In 1957, the Soviet vessel Vityaz reported a depth of 11,034 m (36,201 ft; 6,033 fathoms) at a location dubbed the Mariana Hollow.

In 1962, the surface ship M.V. Spencer F. Baird recorded a maximum depth of 10,915 m (35,810 ft; 5,968 fathoms) using precision depth gauges.

In 1984, the Japanese survey vessel Takuyō collected data from the Mariana Trench using a narrow, multi-beam echo sounder; it reported a maximum depth of 10,924 metres (35,840 ft), also reported as 10,920 ± 10 m (35,827 ± 33 ft; 5,971.1 ± 5.5 fathoms). The remotely operated vehicle Kaikō reached the deepest area of the Mariana Trench and set the deepest diving record of 10,911 m (35,797 ft; 5,966 fathoms) on 24 March 1995.

During surveys carried out between 1997 and 2001, a spot was found along the Mariana Trench that had a depth similar to the Challenger Deep, possibly even deeper. It was discovered while scientists from the Hawaii Institute of Geophysics and Planetology were completing a survey around Guam; they used a sonar mapping system towed behind the research ship to conduct the survey. This new spot was named the HMRG (Hawaii Mapping Research Group) Deep, after the group of scientists who discovered it.

On 1 June 2009, mapping aboard the RV Kilo Moana (mothership of the Nereus vehicle) indicated a spot with a depth of 10,971 m (35,994 ft; 5,999 fathoms). The sonar mapping of the Challenger Deep was made possible by the ship's Simrad EM120 multibeam bathymetry sonar for deep water. The sonar system uses phase and amplitude bottom detection, with an accuracy of better than 0.2% of water depth across the entire swath (implying that the depth figure is accurate to ±22 metres (72 ft; 12 fathoms)).

In 2011, it was announced at the American Geophysical Union Fall Meeting that a US Navy hydrographic ship equipped with a multibeam echosounder conducted a survey which mapped the entire trench to 100 m (330 ft; 55 fathoms) resolution. The mapping revealed the existence of four rocky outcrops thought to be former seamounts.

The Mariana Trench was a site chosen by researchers at Washington University in St. Louis and the Woods Hole Oceanographic Institution in 2012 for a seismic survey to investigate the subsurface water cycle. Using both ocean-bottom seismometers and hydrophones, the scientists were able to map structures as deep as 97 kilometres (318,000 ft; 53,000 fathoms; 60 miles) beneath the surface.

Descents

As of 2022, 22 crewed descents and seven uncrewed descents have been achieved. The first was the crewed descent by Swiss-designed, Italian-built, United States Navy-owned bathyscaphe Trieste, which reached the bottom at 1:06 pm on 23 January 1960, with Don Walsh and Jacques Piccard on board. Iron shot was used for ballast, with gasoline for buoyancy. The onboard systems indicated a depth of 37,800 feet (11,521 m; 6,300 fathoms), but this was later revised to 35,814 feet (10,916 m; 5,969 fathoms). The depth was estimated from a conversion of pressure measured and calculations based on the water density from sea surface to seabed.

This was followed by the uncrewed ROVs Kaikō in 1996 and Nereus in 2009. The first three expeditions directly measured very similar depths of 10,902 to 10,916 m (35,768 to 35,814 ft; 5,961 to 5,969 fathoms). The fourth was made by Canadian film director James Cameron on 26 March 2012. He reached the bottom of the Mariana Trench in the submersible vessel Deepsea Challenger, diving to a depth of 10,908 m (35,787 ft; 5,965 fathoms).

In July 2015, members of the National Oceanic and Atmospheric Administration, Oregon State University, and the Coast Guard submerged a hydrophone into the Challenger Deep, the deepest part of the Mariana Trench; no hydrophone had previously been deployed deeper than a mile. The titanium-shelled hydrophone was designed to withstand the immense pressure at a depth of 7 miles (37,000 ft; 6,200 fathoms; 11,000 m). Although researchers were unable to retrieve the hydrophone until November, its data capacity was full within the first 23 days. After months of analyzing the sounds, the experts were surprised to pick up natural sounds like earthquakes, typhoons, and baleen whales, as well as machine-made sounds such as boats. Due to the mission's success, the researchers announced plans to deploy a second hydrophone in 2017 for an extended period of time.

Victor Vescovo achieved a new record descent to 10,928 m (35,853 ft; 5,976 fathoms) on 28 April 2019 using the DSV Limiting Factor, a Triton 36000/2 model manufactured by Florida-based Triton Submarines. He dived four times between 28 April and 5 May 2019, becoming the first person to dive into Challenger Deep more than once.

On 8 May 2020, a joint project between Russian shipbuilders and scientific teams of the Russian Academy of Sciences, with the support of the Russian Foundation for Advanced Research Projects and the Pacific Fleet, submerged the autonomous underwater vehicle Vityaz-D to the bottom of the Mariana Trench at a depth of 10,028 m (32,900 ft; 5,483 fathoms). Vityaz-D is the first underwater vehicle to operate autonomously at the extreme depths of the Mariana Trench. The duration of the mission, excluding diving and surfacing, was more than three hours.

On 10 November 2020, the Chinese submersible Fendouzhe reached the bottom of the Mariana Trench at a depth of 10,909 m (35,791 ft; 5,965 fathoms).

Life

The expedition conducted in 1960 claimed to have observed, with great surprise because of the high pressure, large creatures living at the bottom, such as a flatfish about 30 cm (12 in) long, and shrimp. According to Piccard, "The bottom appeared light and clear, a waste of firm diatomaceous ooze". Many marine biologists are now skeptical of the supposed sighting of the flatfish, and it is suggested that the creature may instead have been a sea cucumber. During the second expedition, the uncrewed vehicle Kaikō collected mud samples from the seabed. Tiny organisms were found to be living in those samples.

In July 2011, a research expedition deployed untethered landers, called drop cams, equipped with digital video cameras and lights to explore this deep-sea region. Among many other living organisms, some gigantic single-celled foraminiferans with a size of more than 10 cm (4 in), belonging to the class of monothalamea, were observed. Monothalamea are noteworthy for their size, their extreme abundance on the seafloor, and their role as hosts for a variety of organisms.

In December 2014, a new species of snailfish was discovered at a depth of 8,145 m (26,722 ft; 4,454 fathoms), breaking the previous record for the deepest living fish seen on video.

During the 2014 expedition, several new species were filmed, including huge amphipods known as supergiants. Deep-sea gigantism is the process where species grow larger than their shallow-water relatives.

In May 2017, an unidentified type of snailfish was filmed at a depth of 8,178 metres (26,800 ft).

Pollution

In 2016, a research expedition looked at the chemical makeup of crustacean scavengers collected from the range of 7,841–10,250 m (25,725–33,629 ft; 4,288–5,605 fathoms) within the trench. Within these organisms, the researchers found extremely elevated concentrations of PCBs, chemical toxins banned in the 1970s for their environmental harm, at all depths within the sediment of the trench. Further research has found that amphipods also ingest microplastics, with 100% of amphipods having at least one piece of synthetic material in their stomachs.

In 2019, Victor Vescovo reported finding a plastic bag and candy wrappers at the bottom of the trench. That year, Scientific American also reported that carbon-14 from nuclear bomb testing has been found in the bodies of aquatic animals found in the trench.

Possible nuclear waste disposal site

Like other oceanic trenches, the Mariana Trench has been proposed as a site for nuclear waste disposal in the hope that tectonic plate subduction occurring at the site might eventually push the nuclear waste deep into the Earth's mantle, the second layer of the Earth. In 1979 Japan planned to dump low-level nuclear wastes near Maug, in the Northern Marianas. However, ocean dumping of nuclear waste is prohibited by international law. Furthermore, plate subduction zones are associated with very large megathrust earthquakes, the effects of which are unpredictable for the safety of long-term disposal of nuclear wastes within the hadopelagic ecosystem.

Additional Information

The Mariana Trench is a crescent-shaped, deep sea oceanic trench in the western Pacific Ocean. It is located about 200 mi (322 km) southwest of Guam and southeast of the Mariana Islands. It stretches to around 1,580 mi (2,550 km) and is about 43 mi (69 km) wide.

Somewhere between Hawaii and the Philippines near the small island of Guam, far below the surface of the water, sits the Mariana Trench, the deepest spot in the ocean. What’s down there?

How deep is the Mariana Trench?

The Trench sits like a crescent-shaped dent in the floor of the Pacific Ocean, extending more than 1,500 miles in length with an average width of around 43 miles and a depth of almost 7 miles (just under 36,201 feet). At that depth, the weight of all the water above makes the pressure in the Trench around 1,000 times higher than at sea level in, say, Miami or New York. Floor vents release bubbles of liquid sulfur and carbon dioxide. Temperatures are just above freezing, and everything is drowning in darkness.

For comparison, most ocean life lives above a depth of 660 feet. Nuclear submarines hover around 850 feet below the surface as they travel through the ocean waters. Whales aren’t usually seen below about 8,200 feet. The site of Jack and Rose’s true (albeit fictional) love, the sunken Titanic, can be found at 12,467 feet.

According to National Geographic, if you were to put Mount Everest at the bottom of the Mariana Trench, its peak would still sit around 7,000 feet below sea level.

Toward the southern end of the Mariana Trench lies the Challenger Deep. It sits 36,070 feet below sea level, making it the point most distant from the water’s surface and the deepest part of the Trench.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2441 2025-01-27 00:05:02

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,153

Re: Miscellany

2340) Seashore

Gist

The seashore is the land that borders an ocean or sea.

You can call the seashore the coast, the beach, or even just the shore. It's the area right next to the sea, and it can be rocky and dramatic or soft and sandy. Sometimes scientists use this word to mean the specific area that's covered with water at high tide but uncovered at low tide. This area is also known as the intertidal zone.

A coast – also called the coastline, shoreline, or seashore – is the land next to the sea or the line that forms the boundary between the land and the ocean or a lake.

Summary

A beach is an accumulation of sediments along a sea or lake shore, the configuration and contours of which depend on the action of coastal processes, the kinds of sediment involved, and the rate of delivery of this sediment. There are three different kinds of beaches. The first occurs as a sediment strip bordering a rocky or cliffy coast; the second is the outer margin of a plain of marine or fluvial accumulation (free beaches); and the third, of fairly peculiar character, consists of the narrow sediment barriers stretching for dozens or even hundreds of kilometres parallel to the general direction of the coast. These barriers separate lagoons from the open sea and generally are dissected by tidal inlets. Certain sediment forelands, such as spits, points, and tombolos (which connect an island with a mainland), also occasionally are called beaches.

The upper limit of the active beach is the swash line reached by highest sea level during big storms. The lower beach margin is beneath the water surface and can be determined only if there is a definite border present between the sediment layer and the naked surface of the rocky bench. If the sediment cover extends into deep water, the lowest beach margin may be defined as the line where the strongest waves no longer sort and move the sand. It occurs approximately at a depth equal to one-third the wavelength or 10 times the wave height.
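
A quick worked example of this rule of thumb (the wave figures are assumed for illustration): for storm waves 60 metres long and 2 metres high, the two criteria happen to agree on a lower margin near 20 metres depth.

def lower_margin_depth(wavelength_m, wave_height_m):
    # Rule of thumb from the text: the active beach ends roughly where
    # depth = wavelength / 3, or about 10 x wave height.
    return wavelength_m / 3.0, 10.0 * wave_height_m

by_wavelength, by_height = lower_margin_depth(60.0, 2.0)
print(by_wavelength, by_height)  # 20.0 20.0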

The profile of an active beach varies greatly. Its form and dimensions depend on a number of factors, such as wave parameters, tide height, and sediment composition and distribution. The following, however, constitute some of the profile elements that commonly occur. At the upper part, above high sea level, a beach terrace is located, and there may be a series of beach ridges or berms created by the waves of a previous major storm. This terrace surface is inclined seaward. The next element is a steeper, frontal beach slope or face, and beneath it a low-tide terrace may be developed. If the tides are high enough (more than 2 m [6.6 feet]), the frontal slope may be more than 1 km (0.6 mile) in width in regions with abundant sand and a shallow bottom. In some areas the low-tide terrace terminates with another inclined shoreface, if the nearshore sea zone is rather deep. Finally, one or several parallel, submarine, long-shore bars with intervening troughs may exist along sandy shores; if present, these bars constitute the last profile element.

Some minor relief forms are usually present on the surface of sand beaches. These include oscillation ripples, swash or rill furrows, and the well-known beach cusps (concave seaward) at the beach margin.

Under an established system of strong waves normal to the shoreline, submarine bars are sometimes dismembered and converted into large crescent-shaped elements convex seaward. These relief forms reflect the existence of large water eddies with vertical axes, which form as a result of the ebb and flow of the water. Often the water outflow proceeds in the form of linear rip currents. These may be so strong that they erode deep channels in the submarine slopes.

In many countries the wind strongly affects the dynamics of the beach. The beach is exposed to the sea wind, and sand is usually blown off to the rear parts of the beach, where it forms small hummocks. As these join together, foredunes are being built, and, if the beach is well-supplied with sand in the right area, several rows of dunes will be formed. When the sand is abundant, dunes will shift to adjacent low-lying plains and may bury fertile soils, woods, and buildings.

If sand is no longer delivered to the region of developed dunes, gaps will form in the ridges parallel to the shore. In such zones, parabolic dunes with their summits coastward are created. After long stabilization, the summits of the parabolas may be broken through by the wind, thus gradually forming a series of ridges parallel to the prevailing winds.

Beach sands in temperate latitudes consist mainly of quartz, some feldspars, and a small percentage of heavy minerals. In the tropics, however, calcareous beaches composed of skeletal remnants of marine organisms and precipitated particles, such as oolites, are widespread.

Sometimes the basement layers of the beach are cemented by calcium carbonate, precipitated from the groundwater. This will commonly result if fresh water penetrates a beach from swamps behind it. If the beach undergoes erosion and thus retreats, the cemented strata become exposed; termed beach rock, they are widespread in the tropics and along the shores of the Mediterranean, Black, and Caspian seas.

The practical significance of beaches is not limited to their function as protectors of the coast or as recreation sites. The sorting mechanism of the offshore waves and currents determines the accumulation of heavy-mineral (specific gravity greater than 2.7) concentrates. On any sand beach, thin layers of dark sand can be seen. Some heavy minerals contain valuable metals, such as titanium, zirconium, germanium, tin, uranium, and gold. In many places the concentrations are so great that they are of industrial significance; placer deposits are worked in India, Brazil, Japan, Australia, Russia, and Alaska. Heavy-mineral concentrates also are extracted from the submarine slopes by means of dredging ships.

Details

A coast – also called the coastline, shoreline, or seashore – is the land next to the sea or the line that forms the boundary between the land and the ocean or a lake. Coasts are influenced by the topography of the surrounding landscape, as well as by water induced erosion, such as waves. The geological composition of rock and soil dictates the type of shore that is created. Earth contains roughly 620,000 km (390,000 mi) of coastline.

Coasts are important zones in natural ecosystems, often home to a wide range of biodiversity. On land, they harbor important ecosystems such as freshwater or estuarine wetlands, which are important for bird populations and other terrestrial animals. In wave-protected areas, they harbor salt marshes, mangroves or seagrasses, all of which can provide nursery habitat for finfish, shellfish, and other aquatic animals. Rocky shores are usually found along exposed coasts and provide habitat for a wide range of sessile animals (e.g. mussels, starfish, barnacles) and various kinds of seaweeds.

In physical oceanography, a shore is the wider fringe that is geologically modified by the action of the body of water past and present, while the beach is at the edge of the shore, representing the intertidal zone where there is one. Along tropical coasts with clear, nutrient-poor water, coral reefs can often be found between depths of 1–50 m (3.3–164.0 ft).

According to an atlas prepared by the United Nations, about 44% of the human population lives within 150 km (93 mi) of the sea as of 2013. Due to its importance in society and its high population concentrations, the coast is important for major parts of the global food and economic system, and they provide many ecosystem services to humankind. For example, important human activities happen in port cities. Coastal fisheries (commercial, recreational, and subsistence) and aquaculture are major economic activities and create jobs, livelihoods, and protein for the majority of coastal human populations. Other coastal spaces like beaches and seaside resorts generate large revenues through tourism.

Marine coastal ecosystems can also provide protection against sea level rise and tsunamis. In many countries, mangroves are the primary source of wood for fuel (e.g. charcoal) and building material. Coastal ecosystems like mangroves and seagrasses have a much higher capacity for carbon sequestration than many terrestrial ecosystems, and as such can play a critical role in the near-future to help mitigate climate change effects by uptake of atmospheric anthropogenic carbon dioxide.

However, the economic importance of coasts makes many of these communities vulnerable to climate change, which causes increases in extreme weather and sea level rise, as well as related issues like coastal erosion, saltwater intrusion, and coastal flooding. Other coastal issues, such as marine pollution, marine debris, coastal development, and marine ecosystem destruction, further complicate the human uses of the coast and threaten coastal ecosystems.

The interactive effects of climate change, habitat destruction, overfishing, and water pollution (especially eutrophication) have led to the demise of coastal ecosystems around the globe. This has resulted in population collapse of fisheries stocks, loss of biodiversity, increased invasion of alien species, and loss of healthy habitats. International attention to these issues has been captured in Sustainable Development Goal 14 "Life Below Water", which sets goals for international policy focused on preserving marine coastal ecosystems and supporting more sustainable economic practices for coastal communities. Likewise, the United Nations has declared 2021–2030 the UN Decade on Ecosystem Restoration, but restoration of coastal ecosystems has received insufficient attention.

Since coasts are constantly changing, a coastline's exact perimeter cannot be determined; this measurement challenge is called the coastline paradox. The term coastal zone is used to refer to a region where interactions of sea and land processes occur. Both the terms coast and coastal are often used to describe a geographic location or region located on a coastline (e.g., New Zealand's West Coast, or the East, West, and Gulf Coast of the United States.) Coasts with a narrow continental shelf that are close to the open ocean are called pelagic coasts, while other coasts are more sheltered, lying in a gulf or bay. A shore, on the other hand, may refer to parts of land adjoining any large body of water, including oceans (sea shore) and lakes (lake shore).

Size

Somalia has the longest coastline in Africa.

The Earth has approximately 620,000 kilometres (390,000 mi) of coastline. Coastal habitats, which extend to the margins of the continental shelves, make up about 7 percent of the Earth's oceans, but at least 85% of commercially harvested fish depend on coastal environments during at least part of their life cycle. As of October 2010, about 2.86% of exclusive economic zones were part of marine protected areas.

The definition of coasts varies. Marine scientists think of the "wet" (aquatic or intertidal) vegetated habitats as being coastal ecosystems (including seagrass, salt marsh etc.) whilst some terrestrial scientists might only think of coastal ecosystems as purely terrestrial plants that live close to the seashore (see also estuaries and coastal ecosystems).

While there is general agreement in the scientific community regarding the definition of coast, in the political sphere, the delineation of the extents of a coast differ according to jurisdiction. Government authorities in various countries may define coast differently for economic and social policy reasons.

Challenges of precisely measuring the coastline

The coastline paradox is the counterintuitive observation that the coastline of a landmass does not have a well-defined length. This results from the fractal curve-like properties of coastlines; i.e., the fact that a coastline typically has a fractal dimension. Although the "paradox of length" was previously noted by Hugo Steinhaus, the first systematic study of this phenomenon was by Lewis Fry Richardson, and it was expanded upon by Benoit Mandelbrot.

The measured length of the coastline depends on the method used to measure it and the degree of cartographic generalization. Since a landmass has features at all scales, from hundreds of kilometers in size to tiny fractions of a millimeter and below, there is no obvious size of the smallest feature that should be taken into consideration when measuring, and hence no single well-defined perimeter to the landmass. Various approximations exist when specific assumptions are made about minimum feature size.
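
The effect is easy to reproduce numerically. In the hedged sketch below, a Koch curve stands in for a coastline (an assumption; real coastlines measured from maps behave similarly, with fractal dimensions typically between about 1.0 and 1.3). Each refinement step shrinks the ruler to one-third of its span and multiplies the measured length by 4/3, and the dimension D is recovered from the log-log slope of Richardson's relation L(s) ∝ s^(1−D).

import math

rulers, lengths = [], []
for k in range(8):
    rulers.append(3.0 ** -k)          # ruler size shrinks by 1/3 per refinement
    lengths.append((4.0 / 3.0) ** k)  # Koch curve: measured length grows by 4/3

# Richardson's relation: log L = (1 - D) * log s + const, so the fractal
# dimension D follows from the log-log slope.
slope = (math.log(lengths[-1]) - math.log(lengths[0])) / (
    math.log(rulers[-1]) - math.log(rulers[0]))
print(f"estimated fractal dimension D ~ {1 - slope:.3f}")  # ~1.262 = log 4 / log 3

for s, L in zip(rulers[:4], lengths[:4]):
    print(f"ruler {s:.4f} -> measured length {L:.2f}")  # length keeps growing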

Formation

Tides often determine the range over which sediment is deposited or eroded. Areas with high tidal ranges allow waves to reach farther up the shore, and areas with lower tidal ranges produce deposition at a smaller elevation interval. The tidal range is influenced by the size and shape of the coastline. Tides do not typically cause erosion by themselves; however, tidal bores can erode as the waves surge up the river estuaries from the ocean.

Geologists classify coasts on the basis of tidal range into macrotidal coasts with a tidal range greater than 4 m (13 ft); mesotidal coasts with a tidal range of 2 to 4 m (6.6 to 13 ft); and microtidal coasts with a tidal range of less than 2 m (6.6 ft). The distinction between macrotidal and mesotidal coasts is the more important one. Macrotidal coasts lack barrier islands and lagoons, and are characterized by funnel-shaped estuaries containing sand ridges aligned with tidal currents. Wave action is much more important for determining bedforms of sediments deposited along mesotidal and microtidal coasts than along macrotidal coasts.
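
Expressed as a trivial classifier (nothing more than the thresholds above, for illustration):

def classify_tidal_coast(tidal_range_m):
    # Thresholds as given in the text.
    if tidal_range_m > 4.0:
        return "macrotidal"
    if tidal_range_m >= 2.0:
        return "mesotidal"
    return "microtidal"

for r in (6.5, 3.0, 1.2):  # assumed example tidal ranges in metres
    print(f"{r} m -> {classify_tidal_coast(r)}")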

Waves erode coastline as they break on shore releasing their energy; the larger the wave the more energy it releases and the more sediment it moves. Coastlines with longer shores have more room for the waves to disperse their energy, while coasts with cliffs and short shore faces give little room for the wave energy to be dispersed. In these areas, the wave energy breaking against the cliffs is higher, and air and water are compressed into cracks in the rock, forcing the rock apart, breaking it down. Sediment deposited by waves comes from eroded cliff faces and is moved along the coastline by the waves. This forms an abrasion or cliffed coast.

Sediment deposited by rivers is the dominant influence on the amount of sediment along coastlines that have estuaries. Today, riverine deposition at the coast is often blocked by dams and other human regulatory devices, which remove the sediment from the stream by causing it to be deposited inland. Coral reefs are a provider of sediment for the coastlines of tropical islands.

Like the ocean which shapes them, coasts are a dynamic environment with constant change. The Earth's natural processes, particularly sea level rises, waves and various weather phenomena, have resulted in the erosion, accretion and reshaping of coasts as well as flooding and creation of continental shelves and drowned river valleys (rias).

Beach

A beach is a landform alongside a body of water which consists of loose particles. The particles composing a beach are typically made from rock, such as sand, gravel, shingle, pebbles, etc., or biological sources, such as mollusc shells or coralline algae. Sediments settle in different densities and structures, depending on the local wave action and weather, creating different textures, colors and gradients or layers of material.

Though some beaches form on inland freshwater locations such as lakes and rivers, most beaches are in coastal areas where wave or current action deposits and reworks sediments. Erosion and changing of beach geologies happens through natural processes, like wave action and extreme weather events. Where wind conditions are correct, beaches can be backed by coastal dunes which offer protection and regeneration for the beach. However, these natural forces have become more extreme due to climate change, permanently altering beaches at very rapid rates. Some estimates suggest that as much as 50 percent of the earth's sandy beaches will disappear by 2100 due to climate-change-driven sea level rise.

Sandy beaches occupy about one third of global coastlines. These beaches are popular for recreation, playing important economic and cultural roles—often driving local tourism industries. To support these uses, some beaches have human-made infrastructure, such as lifeguard posts, changing rooms, showers, shacks and bars. They may also have hospitality venues (such as resorts, camps, hotels, and restaurants) nearby or housing, both for permanent and seasonal residents.

Human forces have significantly changed beaches globally: direct impacts include bad construction practices on dunes and coastlines, while indirect human impacts include water pollution, plastic pollution and coastal erosion from sea level rise and climate change. Some coastal management practices are designed to preserve or restore natural beach processes, while some beaches are actively restored through practices like beach nourishment.

Wild beaches, also known as undeveloped or undiscovered beaches, are not developed for tourism or recreation. Preserved beaches are important biomes that play key roles in aquatic or marine biodiversity, such as breeding grounds for sea turtles or nesting areas for seabirds or penguins. Preserved beaches and their associated dunes are important for protection from extreme weather for inland ecosystems and human infrastructure.

Location and profile

Although the seashore is most commonly associated with the word beach, beaches are also found by lakes and alongside large rivers.

Beach may refer to:

* small systems where rock material moves onshore, offshore, or alongshore by the forces of waves and currents; or
* geological units of considerable size.

The former are described in detail below; the larger geological units are discussed elsewhere under bars.

There are several conspicuous parts to a beach that relate to the processes that form and shape it. The part mostly above water (depending upon tide), and more or less actively influenced by the waves at some point in the tide, is termed the beach berm. The berm is the deposit of material comprising the active shoreline. The berm has a crest (top) and a face—the latter being the slope leading down towards the water from the crest. At the very bottom of the face, there may be a trough, and further seaward one or more long shore bars: slightly raised, underwater embankments formed where the waves first start to break.

The sand deposit may extend well inland from the berm crest, where there may be evidence of one or more older crests (the storm beach) resulting from very large storm waves and beyond the influence of the normal waves. At some point the influence of the waves (even storm waves) on the material comprising the beach stops, and if the particles are small enough (sand size or smaller), winds shape the feature. Where wind is the force distributing the grains inland, the deposit behind the beach becomes a dune.

These geomorphic features compose what is called the beach profile. The beach profile changes seasonally due to the change in wave energy experienced during summer and winter months: in areas where winter conditions are rougher and waves have a shorter wavelength but higher energy, sand from the beach is stored offshore in winter. In temperate areas where summer is characterised by calmer seas and longer periods between breaking wave crests, the beach profile is higher in summer. The gentle wave action during this season tends to transport sediment up the beach towards the berm where it is deposited and remains while the water recedes. Onshore winds carry it further inland forming and enhancing dunes.

Conversely, the beach profile is lower in the storm season (winter in temperate areas) due to the increased wave energy, and the shorter periods between breaking wave crests. Higher energy waves breaking in quick succession tend to mobilise sediment from the shallows, keeping it in suspension where it is prone to be carried along the beach by longshore currents, or carried out to sea to form longshore bars, especially if the longshore current meets an outflow from a river or flooding stream. The removal of sediment from the beach berm and dune thus decreases the beach profile.

If storms coincide with unusually high tides, or with a freak wave event such as a tidal surge or tsunami which causes significant coastal flooding, substantial quantities of material may be eroded from the coastal plain or dunes behind the berm by receding water. This flow may alter the shape of the coastline, enlarge the mouths of rivers and create new deltas at the mouths of streams that had not been powerful enough to overcome longshore movement of sediment.

The line between beach and dune is difficult to define in the field. Over any significant period of time, sediment is always being exchanged between them. The drift line (the high point of material deposited by waves) is one potential demarcation. This would be the point at which significant wind movement of sand could occur, since the normal waves do not wet the sand beyond this area. However, the drift line is likely to move inland under assault by storm waves.

Formation

Beaches are the result of wave action by which waves or currents move sand or other loose sediments, of which the beach is made, while these particles are held in suspension. Alternatively, sand may be moved by saltation (a bouncing movement of large particles). Beach materials come from erosion of rocks offshore, as well as from headland erosion and slumping producing deposits of scree. A coral reef offshore is a significant source of sand particles. Some species of fish that feed on algae attached to coral outcrops and rocks can create substantial quantities of sand particles over their lifetime as they nibble during feeding, digesting the organic matter and discarding the rock and coral particles that pass through their digestive tracts.

The composition of the beach depends upon the nature and quantity of sediments upstream of the beach, and the speed of flow and turbidity of water and wind. Sediments are moved by moving water and wind according to their particle size and state of compaction. Particles tend to settle and compact in still water. Once compacted, they are more resistant to erosion. Established vegetation (especially species with complex network root systems) will resist erosion by slowing the fluid flow at the surface layer. When affected by moving water or wind, particles that are eroded and held in suspension will increase the erosive power of the fluid that holds them by increasing the average density, viscosity, and volume of the moving fluid.

Coastlines facing very energetic wind and wave systems will tend to hold only large rocks as smaller particles will be held in suspension in the turbid water column and carried to calmer areas by longshore currents and tides. Coastlines that are protected from waves and winds will tend to allow finer sediments such as clay and mud to precipitate creating mud flats and mangrove forests. The shape of a beach depends on whether the waves are constructive or destructive, and whether the material is sand or shingle. Waves are constructive if the period between their wave crests is long enough for the breaking water to recede and the sediment to settle before the succeeding wave arrives and breaks.

Fine sediment transported from lower down the beach profile will compact if the receding water percolates or soaks into the beach. Compacted sediment is more resistant to movement by turbulent water from succeeding waves. Conversely, waves are destructive if the period between the wave crests is short. Sediment that remains in suspension when the following wave crest arrives will not be able to settle and compact and will be more susceptible to erosion by longshore currents and receding tides. The nature of sediments found on a beach tends to indicate the energy of the waves and wind in the locality.

Constructive waves move material up the beach while destructive waves move the material down the beach. During seasons when destructive waves are prevalent, the shallows will carry an increased load of sediment and organic matter in suspension. On sandy beaches, the turbulent backwash of destructive waves removes material forming a gently sloping beach. On pebble and shingle beaches the swash is dissipated more quickly because the large particle size allows greater percolation, thereby reducing the power of the backwash, and the beach remains steep. Compacted fine sediments will form a smooth beach surface that resists wind and water erosion.

During hot calm seasons, a crust may form on the surface of ocean beaches as the heat of the sun evaporates the water leaving the salt which crystallises around the sand particles. This crust forms an additional protective layer that resists wind erosion unless disturbed by animals or dissolved by the advancing tide. Cusps and horns form where incoming waves divide, depositing sand as horns and scouring out sand to form cusps. This forms the uneven face on some sand shorelines. White sand beaches look white because the quartz or eroded limestone in the sand reflects or scatters sunlight without significantly absorbing any colors.

Sand colors

The composition of the sand varies depending on the local minerals and geology.[4] Some of the types of sand found in beaches around the world are:

* White sand: Mostly made of quartz and limestone, it can also contain other minerals like feldspar and gypsum.
* Light-colored sand: This sand gets its color from quartz and iron, and is the most common sand color in Southern Europe and other regions of the Mediterranean Basin, such as Tunisia.
* Tropical white sand: On tropical islands, the sand is composed of calcium carbonate from the shells and skeletons of marine organisms, like corals and mollusks, as found in Aruba.
* Pink coral sand: Like tropical white sand, it is composed of calcium carbonate and gets its pink hue from fragments of coral, such as in Bermuda and the Bahama Islands.
* Black sand: Black sand is composed of volcanic rock, like basalt and obsidian, which give it its gray-black color. Hawaii's Punaluu Beach, Madeira's Praia Formosa and Fuerteventura's Ajuy beach are examples of this type of sand.
* Red sand: This kind of sand is created by the oxidation of iron from volcanic rocks. Santorini's Kokkini Beach or the beaches on Prince Edward Island in Canada are examples of this kind of sand.
* Orange sand: Orange sand is high in iron. It can also be a combination of orange limestone, crushed shells, and volcanic deposits. Ramla Bay in Gozo, Malta or Porto Ferro in Sardinia are examples of each, respectively.
* Green sand: In this kind of sand, the mineral olivine has been separated from other volcanic fragments by erosive forces. A famous example is Hawaii's Papakolea Beach, which has sand containing basalt and coral fragments. Olivine beaches have a high potential for carbon sequestration, and artificial greensand beaches are being explored for this process by Project Vesta.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2442 Yesterday 00:03:07

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,153

Re: Miscellany

2341) Waterfall

Gist

A waterfall is a river or other body of water's steep fall over a rocky ledge into a plunge pool below. Waterfalls are also called cascades. The process of erosion, the wearing away of earth, plays an important part in the formation of waterfalls. Waterfalls themselves also contribute to erosion.

Summary

A waterfall is an area where flowing river water drops abruptly and nearly vertically. Waterfalls represent major interruptions in river flow. Under most circumstances, rivers tend to smooth out irregularities in their flow by processes of erosion and deposition. In time, the long profile of a river (the graph of its gradient) takes the form of a smooth curve, steepest toward the source, gentlest toward the mouth. Waterfalls interrupt this curve, and their presence is a measure of the progress of erosion. A waterfall may also be termed a falls or sometimes a cataract, the latter designation being most common when large volumes of water are involved. Waterfalls of small height and lesser steepness are called cascades; this term is often applied to a series of small falls along a river. Still gentler reaches of rivers that nonetheless exhibit turbulent flow and white water in response to a local increase in channel gradient are called rapids.

A brief treatment of waterfalls follows.

The highest waterfall in the world is Angel Falls in Venezuela, with a total drop of 979 m (3,212 feet), of which the longest single plunge is 807 m (2,650 feet). Arguably the largest waterfall is the Chutes de Khone (Khone Falls) on the Mekong River in Laos: the volume of water passing over it has been estimated at 11,600 cubic m (410,000 cubic feet) per second, although its height is only 70 m (230 feet).

There are several conditions that give rise to waterfalls. One of the most common reasons for a waterfall’s existence is difference in rock type. Rivers cross many lithological boundaries, and, if a river passes from a resistant rock bed to a softer one, it is likely to erode the soft rock more quickly and steepen its gradient at the junction between the rock types. This situation can occur as a river cuts and exhumes a junction between different rock beds. The riverbed of Niagara Falls, which forms part of the boundary between the United States and Canada, has a blocky dolomite cap overlying a series of weaker shales and sandstones.

A related cause of waterfalls is the presence of bars of hard rock in the riverbed. A series of cataracts has been created on the Nile where the river has worn its bed sufficiently to uncover the hard crystalline basement rock.

Other waterfalls are caused less by the character of rock formations and more by the structure or shape of the land. Uplifted plateau basalts, for example, may provide a resistant platform at the edge of which rivers produce waterfalls, as occurs on the Antrim basalts in Northern Ireland. On a much larger scale, the morphology of the southern half of Africa, a high plateau surrounded by a steep scarp slope, creates waterfalls and rapids on most of the area’s major rivers. These include the Livingstone Falls on the Congo River and the Augrabies Falls on the Orange River. In general, the occurrence of waterfalls increases in mountainous terrain as slopes get steeper.

Erosion and geology are not the only factors that create waterfalls. Tectonic movement along a fault may bring hard and soft rocks together and encourage the establishment of a waterfall. A drop in sea level promotes increased downcutting and the retreat upstream of a knickpoint (sharp change of gradient indicating the change of a river’s base-level). Depending on the change of sea level, river flow, and geology (among other factors), falls or rapids may develop at the knickpoint. Many waterfalls have been created by glaciation where valleys have been over-deepened by ice and tributary valleys have been left high up on steep valley sides. In the glacially gouged Yosemite Valley in California, the Yosemite Upper Falls tumble 436 m (1,430 feet) from such a hanging valley.

Within a river’s time scale, a waterfall is a temporary feature that is eventually worn away. The rapidity of erosion depends on the height of a given waterfall, its volume of flow, the type and structure of rocks involved, and other factors. In some cases the site of the waterfall migrates upstream by headward erosion of the cliff or scarp, while in others erosion may tend to act downward, to bevel the entire reach of the river containing the falls. With the passage of time, by either or both of these means, the inescapable tendency of rivers is to eliminate any waterfall that may have formed. The energy of rivers is directed toward the achievement of a relatively smooth, concave upward, longitudinal profile.

Even in the absence of entrained rock debris, which serve as an erosive tool of rivers, the energy available for erosion at the base of a waterfall is great. One of the characteristic features associated with waterfalls of any great magnitude, with respect to volume of flow as well as to height, is the presence of a plunge pool, a basin that is scoured out of the river channel beneath the falling water. In some instances the depth of a plunge pool may nearly equal the height of the cliff causing the falls. Plunge pools eventually cause the collapse of the cliff face and the retreat of the waterfall. Retreat of waterfalls is a pronounced feature in some places. At Niagara, for example, the falls have retreated 11 km (7 miles) from the face of the escarpment where they began. Today much of Niagara’s water is diverted for hydroelectric power generation, but it has been estimated that with normal flow the rate of retreat would be about 1 m (3 feet) per year.
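
A back-of-the-envelope check of these figures (a sketch only; both numbers are the rough estimates quoted above):

    # Rough estimate of how long Niagara Falls took to retreat 11 km
    # at the quoted natural retreat rate of about 1 m per year.
    retreat_distance_m = 11_000    # 11 km from the escarpment to today's position
    retreat_rate_m_per_year = 1.0  # estimated rate under normal (undiverted) flow

    years = retreat_distance_m / retreat_rate_m_per_year
    print(f"Approximate age of the gorge: {years:,.0f} years")  # ~11,000 years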

Details

A waterfall is a river or other body of water's steep fall over a rocky ledge into a plunge pool below. Waterfalls are also called cascades.

The process of erosion, the wearing away of earth, plays an important part in the formation of waterfalls. Waterfalls themselves also contribute to erosion.

Often, waterfalls form as streams flow from soft rock to hard rock. This happens both laterally (as a stream flows across the earth) and vertically (as the stream drops in a waterfall). In both cases, the soft rock erodes, leaving a hard ledge over which the stream falls.

A fall line is the imaginary line along which parallel rivers plunge as they flow from uplands to lowlands. Many waterfalls in an area help geologists and hydrologists determine a region's fall line and underlying rock structure.

As a stream flows, it carries sediment. The sediment can be microscopic silt, pebbles, or even boulders. Sediment can erode stream beds made of soft rock, such as sandstone or limestone. Eventually, the stream's channel cuts so deep into the stream bed that only a harder rock, such as granite, remains. Waterfalls develop as these granite formations form cliffs and ledges.

A stream's velocity increases as it nears a waterfall, increasing the amount of erosion taking place. The movement of water at the top of a waterfall can erode rocks to be very flat and smooth. Rushing water and sediment topple over the waterfall, eroding the plunge pool at the base. The crashing flow of the water may also create powerful whirlpools that erode the rock of the plunge pool beneath them.

The resulting erosion at the base of a waterfall can be very dramatic, and cause the waterfall to "recede." The area behind the waterfall is worn away, creating a hollow, cave-like structure called a "rock shelter." Eventually, the rocky ledge (called the outcropping) may tumble down, sending boulders into the stream bed and plunge pool below. This causes the waterfall to "recede" many meters upstream. The waterfall erosion process starts again, breaking down the boulders of the former outcropping.

Erosion is just one process that can form waterfalls. A waterfall may form across a fault, or crack in the Earth’s surface. An earthquake, landslide, glacier, or volcano may also disrupt stream beds and help create waterfalls.

Classifying Waterfalls

There is not a standard way to classify waterfalls. Some scientists classify waterfalls based on the average volume of water in the waterfall. A Class 10 waterfall using this scale is Inga Falls, Democratic Republic of Congo, where the Congo River twists in a series of rapids. The estimated volume of water discharged from Inga Falls is 25,768 cubic meters per second (910,000 cubic feet per second).

Another popular way of classifying waterfalls is by width. One of the widest waterfalls is Khone Phapheng Falls, Laos. At the Khone Phapheng Falls, the Mekong River flows through a succession of relatively shallow rapids. The width of the Khone Phapheng Falls is about 10,783 meters (35,376 feet).

Waterfalls are also classified by height. Angel Falls, the world’s tallest waterfall, plummets 979 meters (3,212 feet) into a remote canyon in a rain forest in Venezuela. The water, from the Gauja River, often does not reach the bottom. The fall is so long, and so steep, that the air pressure is often stronger than the water pressure of the falls. The water is turned to mist before it reaches the small tributary below.
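
As a quick cross-check of the metric and imperial figures quoted in this classification discussion, the standard conversion factor 1 metre ≈ 3.28084 feet (hence 1 cubic metre ≈ 35.3147 cubic feet) reproduces them:

    # Cross-check the quoted waterfall statistics against standard conversions.
    M_TO_FT = 3.28084
    M3_TO_FT3 = M_TO_FT ** 3  # about 35.3147

    print(f"Inga Falls discharge: {25_768 * M3_TO_FT3:,.0f} cu ft/s")  # ~910,000, as quoted
    print(f"Khone Phapheng width: {10_783 * M_TO_FT:,.0f} ft")  # ~35,377; quoted 35,376 differs only by rounding
    print(f"Angel Falls height:   {979 * M_TO_FT:,.0f} ft")     # 3,212, as quoted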

Types of Waterfalls

One of the most popular, if least scientific, ways to classify waterfalls is by type. A waterfall's type is simply the way the water descends. Most waterfalls fit more than one category.

* A block waterfall descends from a wide stream. Niagara Falls, in the United States and Canada, is a block waterfall on the Niagara River.

* A cascade is a waterfall that descends over a series of rock steps. Monkey Falls, in the Indira Gandhi Wildlife Sanctuary and National Park in Tamil Nadu, India, is a gently sloping cascade. The waterfall is safe enough for children to play in the water.

* A cataract is a powerful, even dangerous, waterfall. Among the widest and wildest of cataracts are the thundering waters of the Iguazu River on the border between Brazil and Argentina.

* A chute is a waterfall in which the stream passage is very narrow, forcing water through at unusually high pressure. Three Chute Falls is named for the three "chutes" through which the Tenaya Creek falls in Yosemite National Park, California, U.S.

* Fan waterfalls are named for their shape. Water spreads out horizontally as it descends. Virgin Falls is a striking fan waterfall on Tofino Creek, on Vancouver Island, British Columbia, Canada.

* Frozen waterfalls are just what they sound like. For at least part of the year, the waterfall freezes. Mountaineers often climb frozen waterfalls as a challenging test of their skill. The Fang is a single pillar of ice in Vail, Colorado, U.S., that vertically plunges more than 30 meters (100 feet).

* Horsetail waterfalls maintain contact with the hard rock that underlies them. Reichenbach Falls, a fall on the Reichenbach Stream in Switzerland, is a horsetail waterfall where legendary detective Sherlock Holmes allegedly fell to his doom.

* Multi-step waterfalls are a series of connected waterfalls, each with its own plunge pool. The breathtaking "falling lakes" of Plitvice Lakes National Park, Croatia, are a series of multi-step waterfalls.

* Plunge waterfalls, unlike horsetail falls, lose contact with the hard rock. The tallest waterfall in Japan, Hannoki Falls, is a plunge waterfall that stands 497 meters (1,640 feet). Hannoki Falls is seasonally fed by snowmelt from the Tateyama Mountains.

* Punchbowl waterfalls are characterized by wide pools at their base. Wailua Falls is a punchbowl waterfall on the island of Kauai, Hawai'i, U.S. Although the plunge pool is tranquil and popular for swimming, the area around Wailua Falls itself is dangerous.

* The water flowing over a segmented waterfall separates into distinct streams. Huge outcroppings of hard rock separate the streams of Nigretta Falls, a segmented waterfall in Victoria, Australia, before they meet in a large plunge pool.

Case Study: Niagara Falls

The Niagara River has two falls, one in the U.S. state of New York and one in the province of Ontario, Canada. Each waterfall is less than 60 meters (200 feet) tall, but together they are more than a kilometer (0.62 miles) wide.

Niagara and many other falls with large volumes of water are used to generate hydroelectric power. A tremendous volume of water flows over Niagara Falls, as much as 5,525 cubic meters (195,000 cubic feet) per second. Power stations upstream from the falls convert hydroelectric energy into electricity for residential and commercial use.

The U.S. and Canadian governments manage the Niagara River so carefully that it is possible for either country to "turn off" the falls. This is done at night, so as not to disturb the tourism industry, and the falls are never actually turned off, just slowed down. Water is diverted to canals and reservoirs, and the decreased flow allows engineers to check for erosion and other damage on the falls. U.S. and Canadian authorities also work together to ensure Niagara Falls doesn’t freeze in the winter, which would threaten power production.

Because waterfalls are barriers to navigation, canals are sometimes built to get around them. Niagara Falls prevents passage between Lake Erie and Lake Ontario on the Niagara River. In the 19th century, the Welland Canal was built to make passage between the two Great Lakes possible.

Additional Information

A waterfall is any point in a river or stream where water flows over a vertical drop or a series of steep drops. Waterfalls also occur where meltwater drops over the edge of a tabular iceberg or ice shelf.

Waterfalls can be formed in several ways, but the most common method of formation is that a river courses over a top layer of resistant bedrock before falling onto softer rock, which erodes faster, leading to an increasingly high fall. Waterfalls have been studied for their impact on species living in and around them.

Humans have had a distinct relationship with waterfalls since prehistory, travelling to see them, exploring and naming them. They can present formidable barriers to navigation along rivers. Waterfalls are religious sites in many cultures. Since the 18th century, they have received increased attention as tourist destinations, sources of hydropower, and—particularly since the mid-20th century—as subjects of research.

Definition and terminology

A waterfall is generally defined as a point in a river where water flows over a steep drop that is close to or directly vertical. In 2000 Mabin specified that "The horizontal distance between the positions of the lip and plunge pool should be no more than c. 25% of the waterfall height." There are various types and methods to classify waterfalls. Some scholars have included rapids as a subsection. What actually constitutes a waterfall continues to be debated.

Waterfalls are sometimes interchangeably referred to as "cascades" and "cataracts", though some sources specify a cataract as being a larger and more powerful waterfall and a cascade as being smaller. A plunge pool is a type of stream pool formed at the bottom of a waterfall. A waterfall may also be referred to as a "foss" or "force".

Formation

Waterfalls commonly form in the upper course of a river, where steep mountain terrain causes lakes and streams to spill abruptly into the valleys below.

A river sometimes flows over a large step in the rocks that may have been formed by a fault line. Waterfalls can occur along the edge of a glacial trough, where a stream or river flowing into a glacier continues to flow into a valley after the glacier has receded or melted. The large waterfalls in Yosemite Valley are examples of this phenomenon, which is referred to as a hanging valley. Another reason hanging valleys may form is where two rivers join and one is flowing faster than the other.

Where warm and cold water meet at a gorge in the ocean, large underwater waterfalls can form as the denser cold water rushes to the bottom.

Caprock model

The caprock model of waterfall formation states that where the river courses over resistant bedrock, erosion happens slowly and is dominated by the impacts of water-borne sediment on the rock, while downstream the erosion occurs more rapidly. As the watercourse increases its velocity at the edge of the waterfall, it may pluck material from the riverbed, if the bed is fractured or otherwise more erodible. Hydraulic jets and hydraulic jumps at the toe of a falls can generate large forces to erode the bed, especially when forces are amplified by water-borne sediment. Horseshoe-shaped falls focus the erosion to a central point, also enhancing riverbed change below a waterfall.

A process known as "potholing" involves local erosion of a potentially deep hole in bedrock due to turbulent whirlpools spinning stones around on the bed, drilling it out. Sand and stones carried by the watercourse therefore increase its erosion capacity. This causes the waterfall to carve deeper into the bed and to recede upstream. Over time, as the waterfall recedes upstream, it forms a canyon or gorge downstream and carves deeper into the ridge above it. The rate of retreat for a waterfall can be as high as one and a half metres per year.

Often, the rock stratum just below the more resistant shelf will be of a softer type, meaning that undercutting due to splashback will occur here to form a shallow cave-like formation known as a rock shelter under and behind the waterfall. Eventually, the outcropping, more resistant cap rock will collapse under pressure to add blocks of rock to the base of the waterfall. These blocks of rock are then broken down into smaller boulders by attrition as they collide with each other, and they also erode the base of the waterfall by abrasion, creating a deep plunge pool in the gorge downstream.

Streams can become wider and shallower just above waterfalls due to flowing over the rock shelf, and there is usually a deep area just below the waterfall because of the kinetic energy of the water hitting the bottom. However, a study of waterfall systematics reported that waterfalls can be wider or narrower above or below a falls, so almost anything is possible given the right geological and hydrological setting. Waterfalls normally form in a rocky area due to erosion. After a long period of being fully formed, the water falling off the ledge will retreat, causing a horizontal pit parallel to the waterfall wall. Eventually, as the pit grows deeper, the waterfall collapses to be replaced by a steeply sloping stretch of river bed. In addition to gradual processes such as erosion, earth movement caused by earthquakes, landslides, or volcanoes can lead to the formation of waterfalls.

Ecology

Waterfalls are an important factor in determining the distribution of lotic organisms such as fish and aquatic invertebrates, as they may restrict dispersal along streams. The presence or absence of certain species can have cascading ecological effects, and thus cause differences in trophic regimes above and below waterfalls. Certain aquatic plants and insects also specialize in the environment of the waterfall itself. A 2012 study of the Agbokim Waterfalls has suggested that they hold biodiversity to a much higher extent than previously thought.

Waterfalls also affect terrestrial species. They create a small microclimate in their immediate vicinity characterized by cooler temperatures and higher humidity than the surrounding region, which may support diverse communities of mosses and liverworts. Species of these plants may have disjunct populations at waterfall zones far from their core range.

Waterfalls provide nesting cover for several species of bird, such as the black swift and white-throated dipper. These species preferentially nest in the space behind the falling water, which is thought to be a strategy to avoid predation.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2443 Yesterday 17:44:59

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,153

Re: Miscellany

2342) Computational Biology

Gist

Computational biology is the science that answers the question “How can we learn and use models of biological systems constructed from experimental measurements?”  These models may describe what biological tasks are carried out by particular nucleic acid or peptide sequences, which gene (or genes) when expressed produce a particular phenotype or behavior, what sequence of changes in gene or protein expression or localization lead to a particular disease, and how changes in cell organization influence cell behavior.   This field is sometimes referred to as bioinformatics, but many scientists use the latter term to describe the field that answers the question “How can I efficiently store, annotate, search and compare information from biological measurements and observations?"

Summary

Computational biology is a branch of biology involving the application of computers and computer science to the understanding and modeling of the structures and processes of life. It entails the use of computational methods (e.g., algorithms) for the representation and simulation of biological systems, as well as for the interpretation of experimental data, often on a very large scale.

Underpinnings of computational biology

The beginnings of computational biology essentially date to the origins of computer science. British mathematician and logician Alan Turing, often called the father of computing, used early computers to implement a model of biological morphogenesis (the development of pattern and form in living organisms) in the early 1950s, shortly before his death. At about the same time, a computer called MANIAC, built at the Los Alamos National Laboratory in New Mexico for weapons research, was applied to such purposes as modeling hypothesized genetic codes. (Pioneering computers had been used even earlier in the 1950s for numeric calculations in population genetics, but the first instances of authentic computational modeling in biology were the work by Turing and by the group at Los Alamos.)

By the 1960s, computers had been applied to deal with much more-varied sets of analyses, namely those examining protein structure. These developments marked the rise of computational biology as a field, and they originated from studies centred on protein crystallography, in which scientists found computers indispensable for carrying out laborious Fourier analyses to determine the three-dimensional structure of proteins.

Starting in the 1950s, taxonomists began to incorporate computers into their work, using the machines to assist in the classification of organisms by clustering them based on similarities of sets of traits. Such taxonomies have been useful particularly for phylogenetics (the study of evolutionary relationships). In the 1960s, when existing techniques were extended to the level of DNA sequences and amino acid sequences of proteins and combined with a burgeoning knowledge of cellular processes and protein structures, a whole new set of computational methods was developed in support of molecular phylogenetics. These computational methods entailed the creation of increasingly sophisticated techniques for the comparison of strings of symbols that benefited from the formal study of algorithms and the study of dynamic programming in particular. Indeed, efficient algorithms always have been of primary concern in computational biology, given the scale of data available, and biology has in turn provided examples that have driven much advanced research in computer science. Examples include graph algorithms for genome mapping (the process of locating fragments of DNA on chromosomes) and for certain types of DNA and peptide sequencing methods, clustering algorithms for gene expression analysis and phylogenetic reconstruction, and pattern matching for various sequence search problems.
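
To make the dynamic-programming idea concrete, here is a minimal sketch (not any particular historical program) of the Levenshtein edit distance, the textbook dynamic-programming comparison of two symbol strings:

    def edit_distance(a: str, b: str) -> int:
        """Minimum number of single-symbol insertions, deletions, and
        substitutions needed to turn a into b (dynamic programming)."""
        # prev[j] holds the distance between the prefixes a[:i-1] and b[:j].
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            curr = [i]
            for j, cb in enumerate(b, start=1):
                cost = 0 if ca == cb else 1
                curr.append(min(prev[j] + 1,          # delete from a
                                curr[j - 1] + 1,      # insert into a
                                prev[j - 1] + cost))  # substitute (or match)
            prev = curr
        return prev[-1]

    print(edit_distance("GATTACA", "GCATGCU"))  # 4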

Beginning in the 1980s, computational biology drew on further developments in computer science, including a number of aspects of artificial intelligence (AI). Among these were knowledge representation, which contributed to the development of ontologies (the representation of concepts and their relationships) that codify biological knowledge in “computer-readable” form, and natural-language processing, which provided a technological means for mining information from text in the scientific literature. Perhaps most significantly, the subfield of machine learning found wide use in biology, from modeling sequences for purposes of pattern recognition to the analysis of high-dimensional (complex) data from large-scale gene-expression studies.
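
As a small illustration of how sequences become the kind of high-dimensional numeric data that machine-learning methods consume, here is a k-mer counting sketch; the example sequence and the choice k = 3 are arbitrary:

    from collections import Counter

    def kmer_counts(seq: str, k: int = 3) -> Counter:
        """Count every overlapping substring of length k (a 'k-mer').
        The resulting count vector is a common numeric feature
        representation of a sequence for clustering or classification."""
        return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

    print(kmer_counts("ATGGCATGGC").most_common(3))
    # [('ATG', 2), ('TGG', 2), ('GGC', 2)]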

Applications of computational biology

Initially, computational biology focused on the study of the sequence and structure of biological molecules, often in an evolutionary context. Beginning in the 1990s, however, it extended increasingly to the analysis of function. Functional prediction involves assessing the sequence and structural similarity between an unknown and a known protein and analyzing the proteins’ interactions with other molecules. Such analyses may be extensive, and thus computational biology has become closely aligned with systems biology, which attempts to analyze the workings of large interacting networks of biological components, especially biological pathways.

Biochemical, regulatory, and genetic pathways are highly branched and interleaved, as well as dynamic, calling for sophisticated computational tools for their modeling and analysis. Moreover, modern technology platforms for the rapid, automated (high-throughput) generation of biological data have allowed for an extension from traditional hypothesis-driven experimentation to data-driven analysis, by which computational experiments can be performed on genome-wide databases of unprecedented scale. As a result, many aspects of the study of biology have become unthinkable without the power of computers and the methodologies of computer science.

Distinctions among related fields

How best to distinguish computational biology from the related field of bioinformatics, and to a lesser extent from the fields of mathematical and theoretical biology, has long been a matter of debate. The terms bioinformatics and computational biology are often used interchangeably, even by experts, and many feel that the distinctions are not useful. Both fields fundamentally are computational approaches to biology. However, whereas bioinformatics tends to refer to data management and analysis using tools that are aids to biological experimentation and to the interpretation of laboratory results, computational biology typically is thought of as a branch of biology, in the same sense that computational physics is a branch of physics. In particular, computational biology is a branch of biology that is uniquely enabled by computation. In other words, its formation was not defined by a need to deal with scale; rather, it was defined by virtue of the techniques that computer science brought to the formulation and solving of challenging problems, to the representation and examination of domain knowledge, and ultimately to the generation and testing of scientific hypotheses.

Computational biology is more easily distinguished from mathematical biology, though there are overlaps. The older discipline of mathematical biology was concerned primarily with applications of numerical analysis, especially differential equations, to topics such as population dynamics and enzyme kinetics. It later expanded to include the application of advanced mathematical approaches in genetics, evolution, and spatial modeling. Such mathematical analyses inevitably benefited from computers, especially in instances involving systems of differential equations that required simulation for their solution. The use of automated calculation does not in itself qualify such activities as computational biology. However, mathematical modeling of biological systems does overlap with computational biology, particularly where simulation for purposes of prediction or hypothesis generation is a key element of the model. A useful distinction in this regard is that between numerical analysis and discrete mathematics; the latter, which is concerned with symbolic rather than numeric manipulations, is considered foundational to computer science, and in general its applications to biology may be considered aspects of computational biology.
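
For a concrete sense of the population-dynamics models mentioned here, the following sketch integrates the classic Lotka-Volterra predator-prey equations with a crude forward-Euler step; all parameter values are arbitrary illustrations, not fitted to any real populations:

    # Lotka-Volterra predator-prey model, integrated with a simple Euler step
    # (forward Euler is crude but adequate for a sketch).
    a, b, c, d = 1.0, 0.1, 1.5, 0.075  # prey growth, predation, predator death, conversion
    prey, pred = 10.0, 5.0             # initial populations (arbitrary units)
    dt = 0.001

    for step in range(int(50 / dt)):
        dprey = (a * prey - b * prey * pred) * dt
        dpred = (d * prey * pred - c * pred) * dt
        prey += dprey
        pred += dpred
        if step % 10_000 == 0:
            print(f"t={step * dt:5.1f}  prey={prey:7.2f}  predators={pred:6.2f}")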

Computational biology can also be distinguished from theoretical biology (which itself is sometimes grouped with mathematical biology), though again there are significant relationships. Theoretical biology often focuses on mathematical abstractions and speculative interpretations of biological systems that may or may not be of practical use in analysis or amenable to computational implementation. Computational biology generally is associated with practical application, and indeed journals and annual meetings in the field often actively encourage the presentation of biological analyses using real data along with theory. On the other hand, important contributions to computational biology have arisen through aspects of theoretical biology derived from information theory, network theory, and nonlinear dynamical systems (among other areas). As an example, advances in the mathematical study of complex networks have increased scientists’ understanding of naturally occurring interactions among genes and gene products, providing insight into how characteristic network architectures may have arisen in the course of evolution and why they tend to be robust in the face of perturbations such as mutations.

Details

Computational biology refers to the use of techniques in computer science, data analysis, mathematical modeling and computational simulations to understand biological systems and relationships. An intersection of computer science, biology, and data science, the field also has foundations in applied mathematics, molecular biology, cell biology, chemistry, and genetics.

History

Bioinformatics, the analysis of informatics processes in biological systems, began in the early 1970s. At this time, research in artificial intelligence was using network models of the human brain in order to generate new algorithms. This use of biological data pushed biological researchers to use computers to evaluate and compare large data sets in their own field.

By 1982, researchers shared information via punch cards. The amount of data grew exponentially by the end of the 1980s, requiring new computational methods for quickly interpreting relevant information.

Perhaps the best-known example of computational biology, the Human Genome Project, officially began in 1990. By 2003, the project had mapped around 85% of the human genome, satisfying its initial goals. Work continued, however, and by 2021 the "complete genome" level was reached, with only 0.3% of bases covered by potential issues. The missing Y chromosome was added in January 2022.

Since the late 1990s, computational biology has become an important part of biology, leading to numerous subfields. Today, the International Society for Computational Biology recognizes 21 different 'Communities of Special Interest', each representing a slice of the larger field. In addition to helping sequence the human genome, computational biology has helped create accurate models of the human brain, map the 3D structure of genomes, and model biological systems.

Global contributions:

Colombia

In 2000, despite a lack of initial expertise in programming and data management, Colombia began applying computational biology from an industrial perspective, focusing on plant diseases. This research has contributed to understanding how to counteract diseases in crops like potatoes and studying the genetic diversity of coffee plants. By 2007, concerns about alternative energy sources and global climate change prompted biologists to collaborate with systems and computer engineers. Together, they developed a robust computational network and database to address these challenges. In 2009, in partnership with the University of Los Angeles, Colombia also created a Virtual Learning Environment (VLE) to improve the integration of computational biology and bioinformatics.

Poland

In Poland, computational biology is closely linked to mathematics and computational science, serving as a foundation for bioinformatics and biological physics. The field is divided into two main areas: one focusing on physics and simulation and the other on biological sequences. The application of statistical models in Poland has advanced techniques for studying proteins and RNA, contributing to global scientific progress. Polish scientists have also been instrumental in evaluating protein prediction methods, significantly enhancing the field of computational biology. Over time, they have expanded their research to cover topics such as protein-coding analysis and hybrid structures, further solidifying Poland's influence on the development of bioinformatics worldwide.

Applications:

Anatomy

Computational anatomy is the study of anatomical shape and form at the visible or gross anatomical scale of morphology. It involves the development of computational mathematical and data-analytical methods for modeling and simulating biological structures. It focuses on the anatomical structures being imaged, rather than the medical imaging devices. Due to the availability of dense 3D measurements via technologies such as magnetic resonance imaging, computational anatomy has emerged as a subfield of medical imaging and bioengineering for extracting anatomical coordinate systems at the morpheme scale in 3D.

The original formulation of computational anatomy is as a generative model of shape and form from exemplars acted upon via transformations. The diffeomorphism group is used to study different coordinate systems via coordinate transformations as generated via the Lagrangian and Eulerian velocities of flow from one anatomical configuration to another. It relates with shape statistics and morphometrics, with the distinction that diffeomorphisms are used to map coordinate systems, whose study is known as diffeomorphometry.

Data and modeling

Mathematical biology is the use of mathematical models of living organisms to examine the systems that govern structure, development, and behavior in biological systems. This entails a more theoretical approach to problems, rather than its more empirically-minded counterpart of experimental biology. Mathematical biology draws on discrete mathematics, topology (also useful for computational modeling), Bayesian statistics, linear algebra and Boolean algebra.

These mathematical approaches have enabled the creation of databases and other methods for storing, retrieving, and analyzing biological data, a field known as bioinformatics. Usually, this process involves genetics and analyzing genes.

Gathering and analyzing large datasets have made room for growing research fields such as data mining, and computational biomodeling, which refers to building computer models and visual simulations of biological systems. This allows researchers to predict how such systems will react to different environments, which is useful for determining if a system can "maintain their state and functions against external and internal perturbations".[15] While current techniques focus on small biological systems, researchers are working on approaches that will allow for larger networks to be analyzed and modeled. A majority of researchers believe this will be essential in developing modern medical approaches to creating new drugs and gene therapy. A useful modeling approach is to use Petri nets via tools such as esyN.
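
To make the Petri-net idea concrete, here is a minimal token-firing simulation in plain Python; it is only a sketch (dedicated tools such as esyN provide far richer semantics), and the two-step pathway it models is invented for illustration:

    # Minimal Petri net: places hold integer token counts, and a transition
    # fires when every input place has enough tokens. The network below is
    # a toy two-step pathway (substrate -> intermediate -> product).
    marking = {"substrate": 3, "intermediate": 0, "product": 0}
    transitions = [
        ({"substrate": 1}, {"intermediate": 1}),   # (inputs, outputs)
        ({"intermediate": 1}, {"product": 1}),
    ]

    def fire(inputs, outputs):
        if all(marking[p] >= n for p, n in inputs.items()):
            for p, n in inputs.items():
                marking[p] -= n
            for p, n in outputs.items():
                marking[p] += n
            return True
        return False

    # Fire transitions repeatedly until the net is dead (nothing can fire).
    while any(fire(i, o) for i, o in transitions):
        pass
    print(marking)  # {'substrate': 0, 'intermediate': 0, 'product': 3}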

Along similar lines, until recent decades theoretical ecology has largely dealt with analytic models that were detached from the statistical models used by empirical ecologists. However, computational methods have aided in developing ecological theory via simulation of ecological systems, in addition to increasing application of methods from computational statistics in ecological analyses.

Systems biology

Systems biology consists of computing the interactions between various biological systems ranging from the cellular level to entire populations with the goal of discovering emergent properties. This process usually involves networking cell signaling and metabolic pathways. Systems biology often uses computational techniques from biological modeling and graph theory to study these complex interactions at cellular levels.
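
A toy illustration of this graph-theoretic view: a signaling pathway represented as a directed graph, with breadth-first search answering which components lie downstream of a signal. All node names here are invented for the example:

    from collections import deque

    # Toy directed graph of a signaling cascade; node names are invented.
    pathway = {
        "receptor": ["kinase_A"],
        "kinase_A": ["kinase_B", "phosphatase"],
        "kinase_B": ["transcription_factor"],
        "phosphatase": [],
        "transcription_factor": ["target_gene"],
        "target_gene": [],
    }

    def downstream(graph, start):
        """Breadth-first search: every node reachable from start."""
        seen, queue = {start}, deque([start])
        while queue:
            for nxt in graph[queue.popleft()]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen - {start}

    print(sorted(downstream(pathway, "receptor")))
    # ['kinase_A', 'kinase_B', 'phosphatase', 'target_gene', 'transcription_factor']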

Evolutionary biology

Computational biology has assisted evolutionary biology by:

* Using DNA data to reconstruct the tree of life with computational phylogenetics
* Fitting population genetics models (either forward time or backward time) to DNA data to make inferences about demographic or selective history
* Building population genetics models of evolutionary systems from first principles in order to predict what is likely to evolve

Genomics

Computational genomics is the study of the genomes of cells and organisms. The Human Genome Project, which set out to sequence the entire human genome into a set of data, is one example of computational genomics. Fully realized, such data could allow doctors to analyze the genome of an individual patient, opening the possibility of personalized medicine: prescribing treatments based on an individual's pre-existing genetic patterns. Researchers are also looking to sequence the genomes of animals, plants, bacteria, and all other forms of life.

One of the main ways that genomes are compared is by sequence homology. Homology is the study of biological structures and nucleotide sequences in different organisms that come from a common ancestor. Research suggests that between 80 and 90% of genes in newly sequenced prokaryotic genomes can be identified this way.

Sequence alignment is another process for comparing and detecting similarities between biological sequences or genes. Sequence alignment is useful in a number of bioinformatics applications, such as computing the longest common subsequence of two genes or comparing variants of certain diseases.
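
A minimal sketch of the longest-common-subsequence computation mentioned above, done with textbook dynamic programming:

    def lcs(a: str, b: str) -> str:
        """Longest common subsequence of a and b by dynamic programming.
        dp[i][j] holds an LCS of the prefixes a[:i] and b[:j]."""
        dp = [[""] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i, ca in enumerate(a, 1):
            for j, cb in enumerate(b, 1):
                if ca == cb:
                    dp[i][j] = dp[i - 1][j - 1] + ca
                else:
                    dp[i][j] = max(dp[i - 1][j], dp[i][j - 1], key=len)
        return dp[-1][-1]

    print(lcs("ACCGGTT", "ACTGT"))  # ACGT (one longest common subsequence)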

A largely unexplored area in computational genomics is the analysis of intergenic regions, which comprise roughly 97% of the human genome. Researchers are working to understand the functions of non-coding regions of the human genome through the development of computational and statistical methods and via large consortia projects such as ENCODE and the Roadmap Epigenomics Project.

Understanding how individual genes contribute to the biology of an organism at the molecular, cellular, and organism levels is known as gene ontology. The Gene Ontology Consortium's mission is to develop an up-to-date, comprehensive, computational model of biological systems, from the molecular level to larger pathways, cellular, and organism-level systems. The Gene Ontology resource provides a computational representation of current scientific knowledge about the functions of genes (or, more properly, the protein and non-coding RNA molecules produced by genes) from many different organisms, from humans to bacteria.

3D genomics is a subsection of computational biology that focuses on the organization and interaction of genes within a eukaryotic cell. One method used to gather 3D genomic data is Genome Architecture Mapping (GAM). GAM measures 3D distances of chromatin and DNA in the genome by combining cryosectioning, the process of cutting a strip from the nucleus to examine the DNA, with laser microdissection. A nuclear profile is simply this strip or slice taken from the nucleus. Each nuclear profile contains genomic windows, which are certain sequences of nucleotides (the base unit of DNA). GAM captures a genome-wide network of complex, multi-enhancer chromatin contacts throughout a cell.

Neuroscience

Computational neuroscience is the study of brain function in terms of the information processing properties of the nervous system. A subset of neuroscience, it looks to model the brain to examine specific aspects of the neurological system. Models of the brain include:

* Realistic Brain Models: These models look to represent every aspect of the brain, including as much detail at the cellular level as possible. Realistic models provide the most information about the brain, but also have the largest margin for error. More variables in a brain model create the possibility for more error to occur. These models do not account for parts of the cellular structure that scientists do not know about. Realistic brain models are the most computationally heavy and the most expensive to implement.
* Simplifying Brain Models: These models look to limit the scope of a model in order to assess a specific physical property of the neurological system. This allows for the intensive computational problems to be solved, and reduces the amount of potential error from a realistic brain model. A minimal sketch of this simplifying approach appears after this list.
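
As an illustration of the simplifying approach, here is a leaky integrate-and-fire neuron, one of the simplest standard models in computational neuroscience, in a minimal pure-Python sketch; all parameter values are illustrative, not fitted to any real neuron:

    # Leaky integrate-and-fire neuron: membrane voltage v decays toward rest
    # and integrates input current; when v crosses a threshold the neuron
    # "spikes" and v is reset. Parameter values are illustrative only.
    tau, v_rest, v_thresh, v_reset = 20.0, -65.0, -50.0, -65.0  # ms, mV
    v, dt, current = v_rest, 0.1, 1.8  # mV, ms, arbitrary input units

    spikes = []
    for step in range(10_000):            # simulate 1 second in 0.1 ms steps
        v += dt / tau * (v_rest - v) + dt * current
        if v >= v_thresh:
            spikes.append(step * dt)      # record spike time in ms
            v = v_reset
    print(f"{len(spikes)} spikes in 1 s, first at {spikes[0]:.1f} ms")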

It is the work of computational neuroscientists to improve the algorithms and data structures currently used to increase the speed of such calculations.

Computational neuropsychiatry is an emerging field that uses mathematical and computer-assisted modeling of brain mechanisms involved in mental disorders. Several initiatives have demonstrated that computational modeling is an important contribution to understand neuronal circuits that could generate mental functions and dysfunctions.

Pharmacology

Computational pharmacology is "the study of the effects of genomic data to find links between specific genotypes and diseases and then screening drug data". The pharmaceutical industry requires a shift in methods to analyze drug data. Pharmacologists were able to use Microsoft Excel to compare chemical and genomic data related to the effectiveness of drugs. However, the industry has reached what is referred to as the Excel barricade. This arises from the limited number of cells accessible on a spreadsheet. This development led to the need for computational pharmacology. Scientists and researchers develop computational methods to analyze these massive data sets. This allows for an efficient comparison between the notable data points and allows for more accurate drugs to be developed.

Analysts project that if major medications fail as patents expire, computational biology will be necessary to replace the current drugs on the market. Doctoral students in computational biology are being encouraged to pursue careers in industry rather than take postdoctoral positions. This is a direct result of major pharmaceutical companies needing more qualified analysts of the large data sets required for producing new drugs.

Oncology

Computational biology plays a crucial role in discovering signs of new, previously unknown living creatures and in cancer research. This field involves large-scale measurements of cellular processes, including RNA, DNA, and proteins, which pose significant computational challenges. To overcome these, biologists rely on computational tools to accurately measure and analyze biological data. In cancer research, computational biology aids in the complex analysis of tumor samples, helping researchers develop new ways to characterize tumors and understand various cellular properties. The use of high-throughput measurements, involving millions of data points from DNA, RNA, and other biological structures, helps in diagnosing cancer at early stages and in understanding the key factors that contribute to cancer development. Areas of focus include analyzing molecules that are deterministic in causing cancer and understanding how the human genome relates to tumor causation.

Toxicology

Computational toxicology is a multidisciplinary area of study, which is employed in the early stages of drug discovery and development to predict the safety and potential toxicity of drug candidates.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2444 Today 00:17:42

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 49,153

Re: Miscellany

2343) Cloning

Gist

The term cloning describes a number of different processes that can be used to produce genetically identical copies of a biological entity. The copied material, which has the same genetic makeup as the original, is referred to as a clone.

Cloning is the process of generating a genetically identical copy of a cell or an organism. Cloning happens often in nature—for example, when a cell replicates itself asexually without any genetic alteration or recombination.

Summary

Cloning is the process of generating a genetically identical copy of a cell or an organism. Cloning happens often in nature—for example, when a cell replicates itself asexually without any genetic alteration or recombination. Prokaryotic organisms (organisms lacking a cell nucleus) such as bacteria create genetically identical duplicates of themselves using binary fission or budding. In eukaryotic organisms (organisms possessing a cell nucleus) such as humans, all the cells that undergo mitosis, such as skin cells and cells lining the gastrointestinal tract, are clones; the only exceptions are gametes (eggs and sperm), which undergo meiosis and genetic recombination.

In biomedical research, cloning is broadly defined to mean the duplication of any kind of biological material for scientific study, such as a piece of DNA or an individual cell. For example, segments of DNA are replicated exponentially by a process known as polymerase chain reaction, or PCR, a technique that is used widely in basic biological research. The type of cloning that is the focus of much ethical controversy involves the generation of cloned embryos, particularly those of humans, which are genetically identical to the organisms from which they are derived, and the subsequent use of these embryos for research, therapeutic, or reproductive purposes.
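
The "exponential" replication of PCR is easy to quantify: in the idealized case each thermal cycle doubles the number of copies of the target segment, so n cycles yield up to 2^n copies. A quick arithmetic sketch:

    # Idealized PCR amplification: each cycle at most doubles the copies.
    copies = 1
    for cycle in range(30):
        copies *= 2
    print(f"After 30 cycles: up to {copies:,} copies (2**30)")
    # After 30 cycles: up to 1,073,741,824 copies (2**30)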

Early cloning experiments

Reproductive cloning was originally carried out by artificial “twinning,” or embryo splitting, which was first performed on a salamander embryo in the early 1900s by German embryologist Hans Spemann. Later, Spemann, who was awarded the Nobel Prize for Physiology or Medicine (1935) for his research on embryonic development, theorized about another cloning procedure known as nuclear transfer. This procedure was performed in 1952 by American scientists Robert W. Briggs and Thomas J. King, who used DNA from embryonic cells of the frog Rana pipiens to generate cloned tadpoles. In 1958 British biologist John Bertrand Gurdon successfully carried out nuclear transfer using DNA from adult intestinal cells of African clawed frogs (Xenopus laevis). Gurdon was awarded a share of the 2012 Nobel Prize in Physiology or Medicine for this breakthrough.

Advancements in the field of molecular biology led to the development of techniques that allowed scientists to manipulate cells and to detect chemical markers that signal changes within cells. With the advent of recombinant DNA technology in the 1970s, it became possible for scientists to create transgenic clones—clones with genomes containing pieces of DNA from other organisms. Beginning in the 1980s mammals such as sheep were cloned from early and partially differentiated embryonic cells. In 1996 British developmental biologist Ian Wilmut generated a cloned sheep, named Dolly, by means of nuclear transfer involving an enucleated embryo and a differentiated cell nucleus. This technique, which was later refined and became known as somatic cell nuclear transfer (SCNT), represented an extraordinary advance in the science of cloning, because it resulted in the creation of a genetically identical clone of an already grown sheep. It also indicated that it was possible for the DNA in differentiated somatic (body) cells to revert to an undifferentiated embryonic stage, thereby reestablishing pluripotency—the potential of an embryonic cell to grow into any one of the numerous different types of mature body cells that make up a complete organism. The realization that the DNA of somatic cells could be reprogrammed to a pluripotent state significantly impacted research into therapeutic cloning and the development of stem cell therapies.

Soon after the generation of Dolly, a number of other animals were cloned by SCNT, including pigs, goats, rats, mice, dogs, horses, and mules. Despite those successes, the birth of a viable SCNT primate clone would not come to fruition until 2018, and scientists used other cloning processes in the meantime. In 2001 a team of scientists cloned a rhesus monkey through a process called embryonic cell nuclear transfer, which is similar to SCNT except that it uses DNA from an undifferentiated embryo. In 2007 macaque monkey embryos were cloned by SCNT, but those clones lived only to the blastocyst stage of embryonic development. It was more than 10 years later, after improvements to SCNT had been made, that scientists announced the live birth of two clones of the crab-eating macaque (Macaca fascicularis), the first primate clones using the SCNT process. (SCNT has been carried out with very limited success in humans, in part because of problems with human egg cells resulting from the mother’s age and environmental factors.)

Reproductive cloning

Reproductive cloning involves the implantation of a cloned embryo into a real or an artificial uterus. The embryo develops into a fetus that is then carried to term. Reproductive cloning experiments were performed for more than 40 years through the process of embryo splitting, in which a single early-stage two-cell embryo is manually divided into two individual cells and then grows as two identical embryos. Reproductive cloning techniques underwent significant change in the 1990s, following the birth of Dolly, who was generated through the process of SCNT. This process entails the removal of the entire nucleus from a somatic (body) cell of an organism, followed by insertion of the nucleus into an egg cell that has had its own nucleus removed (enucleation). Once the somatic nucleus is inside the egg, the egg is stimulated with a mild electrical current and begins dividing. Thus, a cloned embryo, essentially an embryo of an identical twin of the original organism, is created. The SCNT process has undergone significant refinement since the 1990s, and procedures have been developed to prevent damage to eggs during nuclear extraction and somatic cell nuclear insertion. For example, the use of polarized light to visualize an egg cell’s nucleus facilitates the extraction of the nucleus from the egg, resulting in a healthy, viable egg and thereby increasing the success rate of SCNT.

Reproductive cloning using SCNT is considered very harmful since the fetuses of embryos cloned through SCNT rarely survive gestation and usually are born with birth defects. Wilmut’s team of scientists needed 277 tries to create Dolly. Likewise, attempts to produce a macaque monkey clone in 2007 involved 100 cloned embryos, implanted into 50 female macaque monkeys, none of which gave rise to a viable pregnancy. In January 2008, scientists at Stemagen, a stem cell research and development company in California, announced that they had cloned five human embryos by means of SCNT and that the embryos had matured to the stage at which they could have been implanted in a womb. However, the scientists destroyed the embryos after five days, in the interest of performing molecular analyses on them.

Therapeutic cloning

Therapeutic cloning is intended to use cloned embryos for the purpose of extracting stem cells from them, without ever implanting the embryos in a womb. Therapeutic cloning enables the cultivation of stem cells that are genetically identical to a patient. The stem cells could be stimulated to differentiate into any of the more than 200 cell types in the human body. The differentiated cells then could be transplanted into the patient to replace diseased or damaged cells without the risk of rejection by the immune system. These cells could be used to treat a variety of conditions, including Alzheimer disease, Parkinson disease, diabetes mellitus, stroke, and spinal cord injury. In addition, stem cells could be used for in vitro (laboratory) studies of normal and abnormal embryo development or for testing drugs to see if they are toxic or cause birth defects.

Although stem cells have been derived from the cloned embryos of animals such as mice, the generation of stem cells from cloned primate embryos has proved exceptionally difficult. For example, in 2007 stem cells successfully derived from cloned macaque embryos were able to differentiate into mature heart cells and brain neurons. However, the experiment started with 304 egg cells and resulted in the development of only two lines of stem cells, one of which had an abnormal Y chromosome. Likewise, the production of stem cells from human embryos has been fraught with the challenge of maintaining embryo viability. In 2001 scientists at Advanced Cell Technology, a research company in Massachusetts, successfully transferred DNA from human cumulus cells, which are cells that cling to and nourish human eggs, into eight enucleated eggs. Of these eight eggs, three developed into early-stage embryos (containing four to six cells); however, the embryos survived only long enough to divide once or twice. In 2004 South Korean researcher Hwang Woo Suk claimed to have cloned human embryos using SCNT and to have extracted stem cells from the embryos. However, this later proved to be a fraud; Hwang had fabricated evidence and had actually carried out the process of parthenogenesis, in which an unfertilized egg begins to divide with only half a genome. The following year a team of researchers from the University of Newcastle upon Tyne was able to grow a cloned human embryo to the 100-cell blastocyst stage using DNA from embryonic stem cells, though they did not generate a line of stem cells from the blastocyst. Scientists have since successfully derived embryonic stem cells from SCNT human embryos.

Progress in research on therapeutic cloning in humans has been slow relative to the advances made in reproductive cloning in animals. This is primarily because of the technical challenges and ethical controversy arising from the procuring of human eggs solely for research purposes. In addition, the development of induced pluripotent stem cells, which are derived from somatic cells that have been reprogrammed to an embryonic state through the introduction of specific genetic factors into the cell nuclei, has challenged the use of cloning methods and of human eggs.

Ethical controversy

Human reproductive cloning remains universally condemned, primarily for the psychological, social, and physiological risks associated with cloning. A cloned embryo intended for implantation into a womb requires thorough molecular testing to fully determine whether an embryo is healthy and whether the cloning process is complete. In addition, as demonstrated by the 100 failed attempts to generate a cloned macaque in 2007, a viable pregnancy is not guaranteed. Because the risks associated with reproductive cloning in humans introduce a very high likelihood of loss of life, the process is considered unethical. Other philosophical issues have also been raised concerning the nature of reproduction and human identity, which reproductive cloning might violate. Concerns about eugenics, the once popular notion that the human species could be improved through the selection of individuals possessing desired traits, have also surfaced, since cloning could be used to breed “better” humans, thus violating principles of human dignity, freedom, and equality.

There also exists controversy over the ethics of therapeutic and research cloning. Some individuals and groups object to therapeutic cloning because they consider it the manufacture and destruction of a human life, even though that life has not developed past the embryonic stage. Those who are opposed to therapeutic cloning believe that the technique supports and encourages acceptance of the idea that human life can be created and expended for any purpose. However, those who support therapeutic cloning believe that there is a moral imperative to heal the sick and to seek greater scientific knowledge. Many of these supporters believe that therapeutic and research cloning should be not only allowed but also publicly funded, similar to other types of disease and therapeutics research. Most supporters also argue that the embryo demands special moral consideration, requiring regulation and oversight by funding agencies. In addition, it is important to many philosophers and policy makers that women and couples not be exploited for the purpose of obtaining their embryos or eggs.

There are laws and international conventions that attempt to uphold certain ethical principles and regulations concerning cloning. In 2005 the United Nations passed a nonbinding Declaration on Human Cloning that calls upon member states “to adopt all measures necessary to prohibit all forms of human cloning inasmuch as they are incompatible with human dignity and the protection of human life.” This does provide leeway for member countries to pursue therapeutic cloning. The United Kingdom, through its Human Fertilisation and Embryology Authority, issues licenses for creating human embryonic stem cells through nuclear transfer. These licenses ensure that human embryos are cloned for legitimate therapeutic and research purposes aimed at obtaining scientific knowledge about disease and human development. The licenses require the destruction of embryos by the 14th day of development, since this is when embryos begin to develop the primitive streak, the first indicator of an organism’s nervous system. The United States federal government has not passed any laws regarding human cloning due to disagreement within the legislative branch about whether to ban all cloning or to ban only reproductive cloning. The Dickey-Wicker amendment, attached to U.S. appropriations bills since 1995, has prevented the use of federal dollars to fund the harm or destruction of human embryos for research. It is presumed that nuclear transfer and any other form of cloning is subject to this restriction.

Details

Cloning is the process of producing individual organisms with identical genomes, either by natural or artificial means. In nature, some organisms produce clones through asexual reproduction; one such process, in which an egg develops into an embryo without fertilization, is known as parthenogenesis. In the field of biotechnology, cloning is the process of creating cloned organisms, cells, or DNA fragments.

The artificial cloning of organisms, sometimes known as reproductive cloning, is often accomplished via somatic-cell nuclear transfer (SCNT), a cloning method in which a viable embryo is created from a somatic cell and an egg cell. In 1996, Dolly the sheep became famous for being the first mammal cloned from an adult somatic cell. Another example of artificial cloning is molecular cloning, a technique in molecular biology in which a DNA fragment is replicated inside host cells, yielding a large population of cells that carry identical DNA molecules.

In bioethics, there are a variety of ethical positions regarding the practice and possibilities of cloning. The use of embryonic stem cells, which can be produced through SCNT, in some stem cell research has attracted controversy. Cloning has been proposed as a means of reviving extinct species. In popular culture, the concept of cloning—particularly human cloning—is often depicted in science fiction; depictions commonly involve themes related to identity, the recreation of historical figures or extinct species, or cloning for exploitation (e.g. cloning soldiers for warfare).

Etymology

Coined by Herbert J. Webber, the term clone derives from the Ancient Greek word κλών (klōn), “twig”, referring to the process whereby a new plant is created from a twig. In botany, the term lusus was traditionally used. In horticulture, the spelling clon was used until the early twentieth century; the final e came into use to indicate that the vowel is a "long o" instead of a "short o". Since the term entered the popular lexicon in a more general context, the spelling clone has been used exclusively.

Natural cloning

Natural cloning is the production of clones without the involvement of genetic engineering techniques or human intervention (i.e. artificial cloning). Natural cloning occurs through a variety of mechanisms, in organisms ranging from single-celled life to complex multicellular organisms, and has allowed life forms to spread for hundreds of millions of years. Versions of this reproduction method are used by plants, fungi, and bacteria, and it is also the way that clonal colonies reproduce themselves. Among the mechanisms observed in plants and animals are binary fission, budding, fragmentation, and parthenogenesis. Natural cloning also occurs during some forms of asexual reproduction, when a single parent organism produces genetically identical offspring by itself.

Many plants are well known for their natural cloning ability, including blueberry plants, hazel trees, the Pando aspen colony, the Kentucky coffeetree, Myrica, and the American sweetgum.

Cloning also occurs accidentally in the case of identical twins, which are formed when a fertilized egg splits, creating two or more embryos that carry identical DNA.

Molecular cloning

Molecular cloning refers to the process of making multiple copies of a defined DNA molecule. Cloning is commonly used to amplify DNA fragments containing whole genes, but it can also be used to amplify any DNA sequence, such as promoters, non-coding sequences, and randomly fragmented DNA. It is used in a wide array of biological experiments and practical applications, ranging from genetic fingerprinting to large-scale protein production. Occasionally, the term cloning is misleadingly used to refer to the identification of the chromosomal location of a gene associated with a particular phenotype of interest, as in positional cloning. In practice, localization of the gene to a chromosome or genomic region does not necessarily enable one to isolate or amplify the relevant genomic sequence. To amplify any DNA sequence in a living organism, that sequence must be linked to an origin of replication, which is a sequence of DNA capable of directing the propagation of itself and any linked sequence. However, a number of other features are needed, and a variety of specialised cloning vectors (small pieces of DNA into which a foreign DNA fragment can be inserted) exist that allow protein production, affinity tagging, single-stranded RNA or DNA production, and a host of other molecular biology tools.

Cloning of any DNA fragment essentially involves four steps:

* fragmentation – breaking apart a strand of DNA
* ligation – gluing together pieces of DNA in a desired sequence
* transfection – inserting the newly formed pieces of DNA into cells
* screening/selection – selecting out the cells that were successfully transfected with the new DNA

Although these steps are invariable among cloning procedures, a number of alternative routes can be selected; these are summarized as a cloning strategy.

Initially, the DNA of interest needs to be isolated to provide a DNA segment of suitable size. Subsequently, a ligation procedure is used in which the amplified fragment is inserted into a vector (a piece of DNA). The vector (which is frequently circular) is linearised using restriction enzymes and incubated with the fragment of interest under appropriate conditions with an enzyme called DNA ligase. Following ligation, the vector with the insert of interest is transfected into cells. A number of alternative techniques are available, such as chemical sensitisation of cells, electroporation, optical injection, and biolistics. Finally, the transfected cells are cultured. As the aforementioned procedures are of particularly low efficiency, there is a need to identify the cells that have been successfully transfected with the vector construct containing the desired insertion sequence in the required orientation. Modern cloning vectors include selectable antibiotic resistance markers, which allow only cells in which the vector has been transfected to grow. Additionally, the cloning vectors may contain colour selection markers, which provide blue/white screening (α-complementation) on X-gal medium. Nevertheless, these selection steps do not absolutely guarantee that the DNA insert is present in the cells obtained. Further investigation of the resulting colonies is required to confirm that cloning was successful. This may be accomplished by means of PCR, restriction fragment analysis, and/or DNA sequencing.
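The fragmentation and ligation steps have a convenient in-silico analogue that may help make them concrete. The following Python sketch assumes Biopython is installed (pip install biopython) and uses its Bio.Restriction module to digest two invented toy sequences with the real restriction enzyme EcoRI; the concatenation-based "ligation" is a crude stand-in, since real ligation depends on compatible sticky ends and DNA ligase, which the sketch does not model.

# A minimal in-silico sketch of the fragmentation and ligation steps.
# The sequences are invented toy examples, not real vector or gene DNA.
from Bio.Seq import Seq
from Bio.Restriction import EcoRI

# Toy linear "vector" and fragment of interest, each containing EcoRI
# recognition sites (GAATTC, cut between G and A).
vector = Seq("ATGCGAATTCGGCCTTAAGGCTAGCTAGGAATTCAT")
insert = Seq("GAATTCAAACCCGGGTTTGAATTC")

# Fragmentation: locate the EcoRI sites and digest both molecules.
print("EcoRI cut positions in vector:", EcoRI.search(vector))
vector_fragments = EcoRI.catalyse(vector)
insert_fragments = EcoRI.catalyse(insert)
print("Vector fragments:", [str(f) for f in vector_fragments])
print("Insert fragments:", [str(f) for f in insert_fragments])

# "Ligation" (crude stand-in): join the two ends of the vector backbone
# around the middle fragment of the insert, as DNA ligase would in the lab.
construct = vector_fragments[0] + insert_fragments[1] + vector_fragments[-1]
print("Ligated construct:", construct)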

Cell cloning

Cloning unicellular organisms

Cloning a cell means to derive a population of cells from a single cell. In the case of unicellular organisms such as bacteria and yeast, this process is remarkably simple and essentially only requires the inoculation of the appropriate medium. However, in the case of cell cultures from multi-cellular organisms, cell cloning is an arduous task as these cells will not readily grow in standard media.

A useful tissue culture technique for cloning distinct lineages of cell lines involves the use of cloning rings (cylinders). In this technique a single-cell suspension of cells that have been exposed to a mutagenic agent or to a drug used to drive selection is plated at high dilution to create isolated colonies, each arising from a single, potentially clonally distinct cell. At an early growth stage, when colonies consist of only a few cells, sterile polystyrene rings (cloning rings), which have been dipped in grease, are placed over an individual colony and a small amount of trypsin is added. Cloned cells are collected from inside the ring and transferred to a new vessel for further growth.
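The need for high dilution can be made quantitative with a little probability. In the rough sketch below (the densities are illustrative assumptions, not values from the text), cells are taken to settle onto the plate independently, so the count at any given spot is approximately Poisson-distributed; the chance that an occupied spot was founded by exactly one cell rises as the mean density falls.

import math

def prob_single_founder(lam):
    """P(exactly one founder cell | spot is occupied) under a Poisson(lam) model."""
    return (lam * math.exp(-lam)) / (1 - math.exp(-lam))

# Lower plating density -> higher confidence that each colony is truly clonal.
for lam in (2.0, 0.5, 0.1):
    print(f"mean cells per spot = {lam}: P(clonal) = {prob_single_founder(lam):.2f}")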

Cloning stem cells

Somatic-cell nuclear transfer, popularly known as SCNT, can also be used to create embryos for research or therapeutic purposes. The most likely purpose for this is to produce embryos for use in stem cell research. This process is also called "research cloning" or "therapeutic cloning". The goal is not to create cloned human beings (called "reproductive cloning") but rather to harvest stem cells that can be used to study human development and to potentially treat disease. Clonal human blastocysts have been created, and, as noted earlier, embryonic stem cell lines have since been derived from them.

Therapeutic cloning is achieved by creating embryonic stem cells in the hope of treating diseases such as diabetes and Alzheimer's. The process begins by removing the nucleus (containing the DNA) from an egg cell and inserting the nucleus from the adult cell to be cloned. In the case of someone with Alzheimer's disease, the nucleus from one of the patient's skin cells is placed into an empty egg. The reprogrammed cell begins to develop into an embryo because the egg reacts with the transferred nucleus. The embryo is genetically identical to the patient. It then forms a blastocyst, whose cells have the potential to become any cell type in the body.

SCNT is used for cloning because somatic cells can be easily acquired and cultured in the lab. The process can also be used to add or delete specific genes in farm animals. A key point to remember is that cloning is achieved when the oocyte maintains its normal functions and, instead of sperm and egg genomes combining, the donor's somatic cell nucleus is inserted into the oocyte. The oocyte reacts to the somatic cell nucleus in the same way it would to a sperm cell's nucleus.

The process of cloning a particular farm animal using SCNT is largely the same for all animals. The first step is to collect somatic cells from the animal to be cloned. The somatic cells can be used immediately or stored in the laboratory for later use. The hardest part of SCNT is removing the maternal DNA from an oocyte at metaphase II. Once this has been done, the somatic nucleus can be inserted into the egg cytoplasm, creating a one-cell embryo. The combined somatic cell nucleus and egg cytoplasm are then exposed to an electrical current, which, if successful, prompts the cloned embryo to begin development. Successfully developed embryos are then placed in surrogate recipients, such as a cow or sheep in the case of farm animals.

SCNT is seen as a good method for producing agricultural animals for food consumption. It has been used to successfully clone sheep, cattle, goats, and pigs. SCNT is also seen as a possible way to clone endangered species that are on the verge of extinction. However, the stresses placed on both the egg cell and the introduced nucleus can be enormous, which led to a high loss of resulting cells in early research. For example, the cloned sheep Dolly was born after 277 eggs were used for SCNT, which created 29 viable embryos. Only three of these embryos survived until birth, and only one survived to adulthood. Because the procedure could not be automated and had to be performed manually under a microscope, SCNT was very resource intensive. The biochemistry involved in reprogramming the differentiated somatic cell nucleus and activating the recipient egg was also far from well understood. However, by 2014 researchers were reporting cloning success rates of seven to eight out of ten, and in 2016 the Korean company Sooam Biotech was reported to be producing 500 cloned embryos per day.
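To put those figures in perspective, here is a quick back-of-the-envelope calculation using only the counts quoted above (a sketch for illustration, not data from the original study):

# Efficiencies implied by the Dolly numbers quoted in the text.
eggs_used = 277       # eggs used for SCNT
viable_embryos = 29   # viable embryos created
born = 3              # embryos that survived until birth
adults = 1            # Dolly, the only clone to reach adulthood

print(f"eggs -> viable embryos: {viable_embryos / eggs_used:.1%}")  # about 10.5%
print(f"eggs -> live births:    {born / eggs_used:.1%}")            # about 1.1%
print(f"eggs -> adult clones:   {adults / eggs_used:.2%}")          # about 0.36%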

In SCNT, not all of the donor cell's genetic information is transferred, as the donor cell's mitochondria that contain their own mitochondrial DNA are left behind. The resulting hybrid cells retain those mitochondrial structures which originally belonged to the egg. As a consequence, clones such as Dolly that are born from SCNT are not perfect copies of the donor of the nucleus.

Additional Information

Before we get into any specifics, it’s best we explain what cloning is. There are multiple working definitions of cloning, so let’s go to the root. The word cloning originated in plant biology and was later used to describe the analogous process by which live animal cloning is carried out.

We can thank plant physiologist Herbert J. Webber for coming up with the term “clone.” It comes from an Ancient Greek word for twig and was intended to refer to the process of burying a plant’s twig to grow a whole new plant.

This is still something that happens today in botanical and horticultural fields, even if the term clone has a whole new meaning in popular culture. If you know how cloning works, you can probably see already how this word can be used for animal cloning too. If not, don’t worry, we have explained everything below.

So how can we sum cloning up in a way that doesn’t refer to a specific context? Perhaps the best way is this: cloning is the process of replicating a biological entity by using cells from that entity.

The result is genetically identical to the original. Cambridge Dictionary sums up cloning in a very similar way. Put simply, whether it’s a plant or an animal, cloning is where a part of one thing is used to make an identical genetic copy of that thing.

Is Cloning Possible?

Is cloning possible? If you couldn’t tell from how we’ve talked about cloning so far, yes! Cloning is absolutely possible.

Even setting aside the fact that plant cloning exists, and was the origin of the word, the animal cloning that comes to mind when most people hear the word is also possible.

Not only is it possible, but it has also happened, and happens in a wide variety of scientific fields. You see, when many people hear the word cloning, they think of the cloning of whole, living organisms.

While that is cloning taken to its logical end-goal, cloning can also refer to the replication of individual cells and tissues.

If you take just one cell of a being and use it to create just one more cell that is identical, that’s also cloning. In fact, that’s exactly what happens with stem cell research.

Stem cell research has often led to groundbreaking developments in the field of medicine; however, there are ethical concerns that come with the territory. To minimize these concerns, researchers often clone genetic material and then make clones of those clones.

It’s also easy to do so since stem cells can grow into many different cell types, but that’s a whole different topic. The short answer is yes, cloning is very possible because we have done so with plants, individual cells and organs, and even whole organisms, as we explain later below.

Do Clones Ever Occur Naturally?

A common misconception is that cloning is something to be done in white, sterilized labs, with complicated test tubes and expensive equipment.

That’s not wrong, of course; we only just mentioned stem cell research above. That said, yes, cloning is also a natural process.

When we use science to clone, we are only following the example set before us in the natural world. Single-celled organisms, the origin of all life on this planet, reproduce asexually.

Bacteria are probably the most familiar example of single-celled organisms, and they replicate through asexual reproduction.

This means they only use their own genetic information to create another single-celled organism, which then has the same information.

That fits our definition of cloning! Not only that, but you don’t even need to break out a microscope to see examples of cloned life. You may have heard that invertebrates can be cloned, and this is true.

If you cut a flatworm in two, the two halves will regenerate so that there are now two distinct but genetically identical worms. The same cannot be said for earthworms, however; that’s a myth. Instead, it is common for the head half to survive and regenerate while the other half dies.

Human identical twins are also natural clones, and the process that makes identical twins can occur in other mammals too. That would be where a fertilized egg splits so that two or more embryos are created with the same DNA signature. Maybe don’t go around calling twins clones, though; that can come across as rude.

Some plants asexually reproduce too. It’s hard to find a mate when you’re rooted in the ground all day, so some plant species have adapted to reproduce using their own genetic information.

This is most common with bulbs and tubers, like onions, ginger roots, and sweet potatoes. That’s right, you’ve probably eaten plants that could be described as clones by the popular definitions of the word. This just demonstrates how natural the process is when it is found in the wild.

If you were hoping for a pretty flower, the dahlia is a popular example of a flower that reproduces through its tubers, a process called vegetative propagation. Onion and ginger spread the same way, making the dahlia an example of a flower that reproduces through natural cloning.

What Are The Types of Artificial Cloning?

You’re probably not here to talk about plant life, however, so let’s look at the different types of artificial cloning that are available to us.

Artificial cloning is cloning that is done by humans, outside of nature, so this is where the people in lab coats come in.

There are three main types of artificial cloning that work right now. There may be more in the future, who knows?

For now, though, we have these three to work with:

* Gene Cloning
* Reproductive Cloning
* Therapeutic Cloning

Let’s tackle these, one by one.

First, gene cloning is also called molecular cloning in some fields. Don’t get confused: gene cloning and molecular cloning are one and the same. Both refer to the process of cloning a piece of genetic information instead of the whole organism.

To clone a gene, people much smarter than we are isolate the gene that they wish to copy and place that gene into a vector. A vector is just something that spreads its genetic information; bacteria are a good example.

By placing the gene into bacteria or similar asexually reproducing material, scientists ensure that the gene is also cloned each time the bacteria replicate. By piggybacking on this natural asexual process, they can copy genes to their heart’s content.
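Because bacterial division is exponential, this piggybacking multiplies a gene astonishingly fast. The toy calculation below assumes a single transformed cell and a 30-minute doubling time, a typical textbook figure for E. coli rather than a number from this article:

# Toy illustration of gene amplification via bacterial division.
# The 30-minute doubling time is an assumed, typical lab figure;
# real cultures eventually exhaust nutrients and stop doubling.
doubling_time_min = 30
hours = 12
generations = hours * 60 // doubling_time_min   # 24 doublings
copies = 2 ** generations
print(f"after {hours} h: about {copies:,} cells, each carrying the cloned gene")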

Reproductive cloning is how whole organisms, like living animals, are best cloned. Alongside reproductive cloning and animal cloning, this process is also referred to as adult cell cloning. As the dominant form of animal cloning today, reproductive cloning is the main focus of much of this guide, so we go into much more detail below.

For now, let’s move on and explain what therapeutic cloning is. We have technically mentioned therapeutic cloning already, when we talked about stem cell research. That’s because therapeutic cloning is used to make a cloned embryo from which stem cells are derived.

By creating these embryos, it becomes possible to farm embryonic stem cells for further research. This is done during the first five days, when the embryo’s stem cells have started to divide; after that, the embryo is destroyed.

It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
