Math Is Fun Forum

  Discussion about math, puzzles, games and fun.


#2326 2024-09-29 00:10:34

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,283

Re: Miscellany

2326) Sand clock

Gist

An hourglass (or sandglass, sand timer, or sand clock) is a device used to measure the passage of time. It comprises two glass bulbs connected vertically by a narrow neck that allows a regulated flow of a substance (historically sand) from the upper bulb to the lower one due to gravity.

A sand clock works on the principle that all the sand from the upper chamber falls into the lower chamber in a fixed amount of time.

Which country invented the sand clock?

Often referred to as the 'sand clock', the hourglass is not just a pretty ancient ornament tucked away on a modern shelf. Traditionally credited to an 8th-century French monk called Liutprand, the hourglass was actually used as a timekeeping device.

An hourglass is an early device for measuring intervals of time. It is also known as a sandglass or a log glass when used in conjunction with the common log for ascertaining the speed of a ship. It consists of two pear-shaped bulbs of glass, united at their apexes and having a minute passage formed between them.

Summary

Hourglass, an early device for measuring intervals of time. It is also known as a sandglass or a log glass when used in conjunction with the common log for ascertaining the speed of a ship. It consists of two pear-shaped bulbs of glass, united at their apexes and having a minute passage formed between them. A quantity of sand (or occasionally mercury) is enclosed in the bulbs, and the size of the passage is so proportioned that this medium will completely run through from one bulb to the other in the time it is desired to measure—e.g., an hour or a minute. Instruments of this kind, which have no great pretensions to accuracy, were formerly common in churches.

Details

An hourglass (or sandglass, sand timer, or sand clock) is a device used to measure the passage of time. It comprises two glass bulbs connected vertically by a narrow neck that allows a regulated flow of a substance (historically sand) from the upper bulb to the lower one due to gravity. Typically, the upper and lower bulbs are symmetric so that the hourglass will measure the same duration regardless of orientation. The specific duration of time a given hourglass measures is determined by factors including the quantity and coarseness of the particulate matter, the bulb size, and the neck width.

Depictions of an hourglass as a symbol of the passage of time are found in art, especially on tombstones or other monuments, from antiquity to the present day. The form of a winged hourglass has been used as a literal depiction of the Latin phrase tempus fugit ("time flies").

History:

Antiquity

The origin of the hourglass is unclear. Its predecessor, the clepsydra, or water clock, is known to have existed in Babylon and Egypt as early as the 16th century BCE.

Middle Ages

There are no records of the hourglass existing in Europe prior to the Late Middle Ages; the first documented example dates from the 14th century, a depiction in the 1338 fresco Allegory of Good Government by Ambrogio Lorenzetti.

Use of the marine sandglass has been recorded since the 14th century, mostly in the logbooks of European ships; in the same period it appears in other records and lists of ships' stores. The earliest recorded reference that can be said with certainty to refer to a marine sandglass dates from c. 1345, in a receipt of Thomas de Stetesham, clerk of the King's ship La George, in the reign of Edward III of England. Translated from the Latin, the receipt says:

The same Thomas accounts to have paid at Lescluse, in Flanders, for twelve glass horologes ("pro xii. orlogiis vitreis"), price of each 4½ gross', in sterling 9s. Item, for four horologes of the same sort ("de eadem secta"), bought there, price of each five gross', making in sterling 3s. 4d.

Marine sandglasses were popular aboard ships, as they were the most dependable measurement of time while at sea. Unlike the clepsydra, hourglasses using granular materials were not affected by the motion of a ship and less affected by temperature changes (which could cause condensation inside a clepsydra). While hourglasses were insufficiently accurate to be compared against solar noon for the determination of a ship's longitude (as an error of just four minutes would correspond to one degree of longitude), they were sufficiently accurate to be used in conjunction with a chip log to enable the measurement of a ship's speed in knots.
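
The two pieces of arithmetic behind that paragraph are easy to verify. A minimal Python sketch (not from the original post; the 28-second glass and the 47.25-foot knot spacing are the traditional chip-log values):

    # Earth turns 360 degrees in 24 hours, i.e. 15 degrees per hour,
    # so a timing error of 4 minutes corresponds to 1 degree of longitude.
    deg_per_time_minute = 360 / (24 * 60)          # 0.25 degrees per minute
    print(4 * deg_per_time_minute)                 # -> 1.0 degree

    # Chip log: knots on the log line were spaced about 47.25 ft apart and
    # counted against a 28-second sandglass, so each knot counted off the
    # reel corresponds to one nautical mile per hour of ship speed.
    knot_spacing_ft, glass_s, ft_per_nmi = 47.25, 28.0, 6076.12
    print(round(knot_spacing_ft / glass_s * 3600 / ft_per_nmi, 2))  # -> 1.0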

The hourglass also found popularity on land as an inexpensive alternative to mechanical clocks. Hourglasses were commonly used in churches, homes, and workplaces to measure sermons, cooking time, and breaks from labor. As they came to be used for such everyday tasks, hourglasses began to shrink in size; the smaller models were more practical and very popular, as they made timing more discreet.

After 1500, the hourglass was not as widespread as it had been, owing to the development of the mechanical clock, which became more accurate, smaller, and cheaper, and made keeping time easier. The hourglass, however, did not disappear entirely: although hourglasses became relatively less useful as clock technology advanced, they remained desirable in their design. The oldest known surviving hourglass resides in the British Museum in London.

Not until the 18th century did John Harrison come up with a marine chronometer that significantly improved on the stability of the hourglass at sea. Drawing on the design logic behind the hourglass, he built a marine chronometer in 1761 that measured time on the journey from England to Jamaica to within five seconds.

Design

Little written evidence exists to explain why the hourglass's external form is the shape that it is. The glass bulbs used, however, have changed in style and design over time. While the main designs have always been ampoule-shaped, the bulbs were not always connected. The first hourglasses were two separate bulbs with a cord wrapped at their union, which was then coated in wax to hold the piece together and let sand flow between them. It was not until 1760 that both bulbs were blown as one piece, to keep moisture out of the bulbs and to regulate the pressure within the bulb, which varied the flow.

Material

While some early hourglasses actually did use silica sand as the granular material to measure time, many did not use sand at all. The material used in most bulbs was "powdered marble, tin/lead oxides, [or] pulverized, burnt eggshell". Over time, different textures of granular matter were tested to see which gave the most constant flow within the bulbs. It was later discovered that, for the perfect flow to be achieved, the ratio of granule diameter to the width of the bulb's neck needed to be at least 1/12 but not greater than 1/2; a rough check of this rule is sketched below.
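
A minimal Python sketch of that design rule (the rule is from the paragraph above; the example measurements are invented):

    # Steady-flow rule: grain diameter should be at least 1/12 of the
    # neck width but no more than 1/2 of it.
    def flow_is_steady(grain_diameter_mm, neck_width_mm):
        ratio = grain_diameter_mm / neck_width_mm
        return 1 / 12 <= ratio <= 1 / 2

    print(flow_is_steady(0.3, 3.0))   # ratio 0.10 -> True (steady flow)
    print(flow_is_steady(0.1, 3.0))   # ratio ~0.03, grains too fine -> False
    print(flow_is_steady(2.0, 3.0))   # ratio ~0.67, grains too coarse -> False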

Practical uses

Hourglasses were an early dependable and accurate measure of time. The rate at which the sand flows is independent of the depth of sand remaining in the upper reservoir, and the instrument will not freeze in cold weather. From the 15th century onwards, hourglasses were used in a range of applications at sea, in the church, in industry, and in cookery.
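
The depth-independence claim is what separates the hourglass from the clepsydra, and a one-line formula shows why. In a water clock the efflux speed follows Torricelli's law, v = sqrt(2gh), so the flow slows as the head h drops; granular flow through a narrow neck is set by the neck geometry rather than by the height of sand above it. A minimal Python sketch of the water-clock side of the comparison (the head values are illustrative):

    import math

    g = 9.81  # gravitational acceleration, m/s^2

    # Torricelli's law: water leaves an opening at v = sqrt(2*g*h), so a
    # water clock runs slower and slower as the water level h drops ...
    for h in (0.4, 0.2, 0.1):                     # metres of head
        print(h, round(math.sqrt(2 * g * h), 2))  # efflux speed falls with h

    # ... whereas sand flow in an hourglass stays nearly constant until
    # the upper bulb empties, which is why it keeps more even time.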

During Ferdinand Magellan's voyage around the globe, authorized by King Charles I of Spain, the ship's inventory included 18 hourglasses from Barcelona. It was the job of a ship's page to turn the hourglasses and thus provide the times for the ship's log. Noon was the reference time for navigation and did not depend on the glass, as the sun would be at its zenith. A number of sandglasses could be fixed in a common frame, each with a different running time: for example, a four-way Italian sandglass, likely from the 17th century and now in the collections of the Science Museum in South Kensington, London, could measure intervals of a quarter, half, three-quarters, and one hour. Such framed sets were used in churches, for priests and ministers to measure the lengths of sermons.

Modern practical uses

While hourglasses are no longer widely used for keeping time, some institutions do maintain them. Both houses of the Australian Parliament use three hourglasses to time certain procedures, such as divisions.

Sand timers are sometimes included with board games such as Pictionary and Boggle to place time constraints on rounds of play.

Symbolic uses

Unlike most other methods of measuring time, the hourglass concretely represents the present as being between the past and the future, and this has made it an enduring symbol of time as a concept.

The hourglass, sometimes with the addition of metaphorical wings, is often used as a symbol that human existence is fleeting, and that the "sands of time" will run out for every human life. It was used thus on pirate flags, to evoke fear through imagery associated with death. In England, hourglasses were sometimes placed in coffins, and they have graced gravestones for centuries. The hourglass was also used in alchemy as a symbol for hour.

The former Metropolitan Borough of Greenwich in London used an hourglass on its coat of arms, symbolising Greenwich's role as the origin of Greenwich Mean Time (GMT). The district's successor, the Royal Borough of Greenwich, uses two hourglasses on its coat of arms.

Modern symbolic uses

Recognition of the hourglass as a symbol of time has survived its obsolescence as a timekeeper. For example, the American television soap opera Days of Our Lives (1965–present) displays an hourglass in its opening credits, with narration by Macdonald Carey: "Like sands through the hourglass, so are the days of our lives."

Various computer graphical user interfaces may change the pointer to an hourglass while the program is in the middle of a task, and may not accept user input. During that period of time, other programs, such as those open in other windows, may work normally. When such an hourglass does not disappear, it suggests a program is in an infinite loop and needs to be terminated, or is waiting for some external event (such as the user inserting a CD).

Unicode encodes an HOURGLASS character at code point U+231B (⌛).
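
A quick way to see it, in Python (assuming your console font includes the glyph):

    print("\u231B")  # HOURGLASS, U+231B -> ⌛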

In the 21st century, the Extinction symbol came into use as a symbol of the Holocene extinction and climate crisis. The symbol features an hourglass to represent time "running out" for extinct and endangered species, and also to represent time "running out" for climate change mitigation.

Hourglass motif

Because of its symmetry, graphic signs resembling an hourglass are seen in the art of cultures which never encountered such objects. Vertical pairs of triangles joined at the apex are common in Native American art, both in North America, where it can represent, for example, the body of the Thunderbird or (in more elongated form) an enemy scalp, and in South America, where it is believed to represent a Chuncho jungle dweller. In Zulu textiles they symbolise a married man, as opposed to a pair of triangles joined at the base, which symbolise a married woman. Neolithic examples can be seen among Spanish cave paintings. Observers have even given the name "hourglass motif" to shapes which have more complex symmetry, such as a repeating circle and cross pattern from the Solomon Islands. Both the members of Project Tic Toc, from the television series The Time Tunnel, and the Challengers of the Unknown use hourglass symbols, representing either time travel or time running out.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2327 2024-09-30 00:02:05

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,283

Re: Miscellany

2327) Asteroid

Gist

Asteroids are small, rocky objects that orbit the Sun. Although asteroids orbit the Sun like planets, they are much smaller than planets.

The C-types (chondrite), the most common, probably consist of clay and silicate rocks and are dark in appearance; they are among the most ancient objects in the solar system. The S-types ("stony") are made up of silicate materials and nickel-iron. The M-types are metallic (nickel-iron).

Most asteroids can be found orbiting the Sun between Mars and Jupiter within the main asteroid belt. Asteroids range in size from Vesta – the largest at about 329 miles (530 kilometers) in diameter – to bodies that are less than 33 feet (10 meters) across.

Summary

Asteroids come in a variety of shapes and sizes and teach us about the formation of the solar system.

Asteroids are the rocky remnants of material leftover from the formation of the solar system and its planets approximately 4.6 billion years ago.

The majority of asteroids originate from the main asteroid belt located between Mars and Jupiter, according to NASA. NASA's current asteroid count is over 1 million.

Asteroids orbit the sun in highly flattened, or "elliptical", paths, often rotating erratically, tumbling and falling through space.

Many large asteroids have one or more small companion moons. An example of this is Didymos, a half-mile (780 meters) wide asteroid that is orbited by the moonlet Dimorphos which measures just 525 feet (160 m) across.

Asteroids are also often referred to as "minor planets" and can range in size from the largest known example, Vesta, which has a diameter of around 326 miles (525 kilometers), to bodies that are less than 33 feet (10 meters) across.

Vesta recently snatched the "largest asteroid title" from Ceres, which NASA now classifies as a dwarf planet. Ceres is the largest object in the main asteroid belt while Vesta is the second largest.

As well as coming in a range of sizes, asteroids come in a variety of shapes from near spheres to irregular double-lobed peanut-shaped asteroids like Itokawa. Most asteroid surfaces are pitted with impact craters from collisions with other space rocks.

Though a majority of asteroids lurk in the asteroid belt, NASA says, the massive gravitational influence of Jupiter, the solar system's largest planet, can send them hurtling through space in random directions, including through the inner solar system and thus towards Earth. But don't worry: NASA's Planetary Defense Coordination Office is keeping a watchful eye on near-Earth objects (NEOs), including asteroids, to assess the impact hazard and aid the U.S. government in planning for a response to a possible impact threat.

What is an asteroid?

Using NASA definitions, an asteroid is "A relatively small, inactive, rocky body orbiting the sun," while a comet is a "relatively small, at times active, object whose ices can vaporize in sunlight forming an atmosphere (coma) of dust and gas and, sometimes, a tail of dust and/or gas."

Additionally, a meteorite is a "meteoroid that survives its passage through the Earth's atmosphere and lands upon the Earth's surface" and a meteor is defined as a "light phenomenon which results when a meteoroid enters the Earth's atmosphere and vaporizes; a shooting star."

What are asteroids made of?

Before the formation of the planets of the solar system, the infant sun was surrounded by a disk of dust and gas, called a protoplanetary disk. While most of this disk collapsed to form the planets, some material was left over.

"Simply put, asteroids are leftovers rocky material from the time of the solar system formation. They are the initial bricks that built the planets," Fred Jourdan, a planetary scientist at Curtin University told Space.com in an email "So all the material that formed all those asteroids is about 4.55 billion years old." 

In the chaotic conditions of the early solar system, this material repeatedly collided, with small grains clustering to form small rocks, which clustered to form larger rocks and eventually planetesimals — bodies that never grew large enough to become planets. Further collisions shattered these planetesimals apart, and the fragments and rocks formed the asteroids we see today.

"All that happened 4.5 billion years ago but the solar system has remained a very dynamic place since then," Jourdan added. "During the next few billions of years until the present, some asteroids smashed into each other and destroyed each other, and the debris recombined and formed what we call rubble pile asteroids."

This means asteroids can also differ by how solid they are. Some asteroids are one solid monolithic body, while others like Bennu are essentially floating rubble piles, made of smaller bodies loosely bound together gravitationally.

"I would say there are three types of asteroids. The first one is the monolith chondritic asteroid, so that's the real brick of the solar system," Jourdan explained. "These asteroids remained relatively unchanged since their formation. Some of them are rich in silicates, and some of them are rich in carbon with different tales to tell."

The second type is the differentiated asteroids, which for a while behaved like tiny planets, forming a metallic core, a mantle, and a volcanic crust. Jourdan said these asteroids would look layered like an egg if cut from the side, with the best example being Vesta, which he calls his "favorite asteroid."

"The last type is the rubble pile asteroids so it's just when asteroids smashed into each other and the fragment that is ejected reassemble together," Jourdan continued. These asteroids are made of boulders, rocks, pebbles, dust, and a lot of void spacing which makes them so resistant to any further impacts. In that regard, rubble piles are a bit like giant space cushions."

How often do asteroids hit Earth?

Asteroids large enough to cause damage on the ground hit Earth about once per century. At the extremely small end, desk-sized asteroids hit Earth about once a month, but they just produce bright fireballs as they burn up in the atmosphere. As you go to larger and larger asteroids, the impacts become increasingly infrequent.

What's the difference between asteroids, meteorites and comets?

Asteroids are the rocky/dusty small bodies orbiting the sun. Meteorites are pieces on the ground left over after an asteroid breaks up in the atmosphere. Most asteroids are not strong, and when they disintegrate in the atmosphere they often produce a shower of meteorites on the ground.

Comets are also small bodies orbiting the sun, but they also contain ices that produce a gas and dust atmosphere and tail when they get near the sun and heat up.

Details

An asteroid is a minor planet—an object that is neither a true planet nor an identified comet—that orbits within the inner Solar System. They are rocky, metallic, or icy bodies with no atmosphere, classified as C-type (carbonaceous), M-type (metallic), or S-type (silicaceous). The size and shape of asteroids vary significantly, ranging from small rubble piles under a kilometer across and larger than meteoroids, to Ceres, a dwarf planet almost 1000 km in diameter. A body is classified as a comet, not an asteroid, if it shows a coma (tail) when warmed by solar radiation, although recent observations suggest a continuum between these types of bodies.

Of the roughly one million known asteroids, the greatest number are located between the orbits of Mars and Jupiter, approximately 2 to 4 AU from the Sun, in a region known as the main asteroid belt. The total mass of all the asteroids combined is only 3% that of Earth's Moon. The majority of main belt asteroids follow slightly elliptical, stable orbits, revolving in the same direction as the Earth and taking from three to six years to complete a full circuit of the Sun.
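
The three-to-six-year figure follows from Kepler's third law: for a body orbiting the Sun, the period in years is T = a^(3/2) with the semi-major axis a in AU. A quick Python check, using the approximate inner and outer edges of the main belt (about 2.1 and 3.3 AU) and Ceres at about 2.77 AU:

    # Kepler's third law for solar orbits: T (years) = a ** 1.5, a in AU.
    for a_au in (2.1, 2.77, 3.3):
        print(a_au, "AU ->", round(a_au ** 1.5, 1), "years")
    # -> 3.0, 4.6, and 6.0 years, matching the range quoted above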

Asteroids have historically been observed from Earth. The first close-up observation of an asteroid was made by the Galileo spacecraft. Several dedicated missions to asteroids were subsequently launched by NASA and JAXA, with plans for other missions in progress. NASA's NEAR Shoemaker studied Eros, and Dawn observed Vesta and Ceres. JAXA's missions Hayabusa and Hayabusa2 studied and returned samples of Itokawa and Ryugu, respectively. OSIRIS-REx studied Bennu, collecting a sample in 2020 which was delivered back to Earth in 2023. NASA's Lucy, launched in 2021, is tasked with studying ten different asteroids, two from the main belt and eight Jupiter trojans. Psyche, launched October 2023, aims to study the metallic asteroid Psyche.

Near-Earth asteroids have the potential for catastrophic consequences if they strike Earth, with a notable example being the Chicxulub impact, widely thought to have induced the Cretaceous–Paleogene mass extinction. As an experiment to meet this danger, in September 2022 the Double Asteroid Redirection Test spacecraft successfully altered the orbit of the non-threatening asteroid Dimorphos by crashing into it.

Terminology

In 2006, the International Astronomical Union (IAU) introduced the currently preferred broad term small Solar System body, defined as an object in the Solar System that is neither a planet, a dwarf planet, nor a natural satellite; this includes asteroids, comets, and more recently discovered classes. According to IAU, "the term 'minor planet' may still be used, but generally, 'Small Solar System Body' will be preferred."

Historically, the first discovered asteroid, Ceres, was at first considered a new planet. It was followed by the discovery of other similar bodies, which, with the equipment of the time, appeared to be points of light like stars, showing little or no planetary disc, though readily distinguishable from stars due to their apparent motions. This prompted the astronomer Sir William Herschel to propose the term asteroid, coined in Greek as asteroeidēs, meaning 'star-like, star-shaped', and derived from the Ancient Greek astēr 'star, planet'. In the early second half of the 19th century, the terms asteroid and planet (not always qualified as "minor") were still used interchangeably.

Traditionally, small bodies orbiting the Sun were classified as comets, asteroids, or meteoroids, with anything smaller than one meter across being called a meteoroid. The term asteroid has never been officially defined, but it can be informally used to mean "an irregularly shaped rocky body orbiting the Sun that does not qualify as a planet or a dwarf planet under the IAU definitions". The main difference between an asteroid and a comet is that a comet shows a coma (tail) due to sublimation of its near-surface ices by solar radiation. A few objects were first classified as minor planets but later showed evidence of cometary activity. Conversely, some (perhaps all) comets are eventually depleted of their surface volatile ices and become asteroid-like. A further distinction is that comets typically have more eccentric orbits than most asteroids; highly eccentric asteroids are probably dormant or extinct comets.

The minor planets beyond Jupiter's orbit are sometimes also called "asteroids", especially in popular presentations. However, it is becoming increasingly common for the term asteroid to be restricted to minor planets of the inner Solar System. Therefore, this article will restrict itself for the most part to the classical asteroids: objects of the asteroid belt, Jupiter trojans, and near-Earth objects.

For almost two centuries after the discovery of Ceres in 1801, all known asteroids spent most of their time at or within the orbit of Jupiter, though a few, such as 944 Hidalgo, ventured farther for part of their orbit. Starting in 1977 with 2060 Chiron, astronomers discovered small bodies that permanently resided further out than Jupiter, now called centaurs. In 1992, 15760 Albion was discovered, the first object beyond the orbit of Neptune (other than Pluto); soon large numbers of similar objects were observed, now called trans-Neptunian objects. Further out are Kuiper-belt objects, scattered-disc objects, and the much more distant Oort cloud, hypothesized to be the main reservoir of dormant comets. They inhabit the cold outer reaches of the Solar System where ices remain solid and comet-like bodies exhibit little cometary activity; if centaurs or trans-Neptunian objects were to venture close to the Sun, their volatile ices would sublimate, and traditional approaches would classify them as comets.

The Kuiper-belt bodies are called "objects" partly to avoid the need to classify them as asteroids or comets. They are thought to be predominantly comet-like in composition, though some may be more akin to asteroids. Most do not have the highly eccentric orbits associated with comets, and the ones so far discovered are larger than traditional comet nuclei. Other recent observations, such as the analysis of the cometary dust collected by the Stardust probe, are increasingly blurring the distinction between comets and asteroids, suggesting "a continuum between asteroids and comets" rather than a sharp dividing line.

In 2006, the IAU created the class of dwarf planets for the largest minor planets—those massive enough to have become ellipsoidal under their own gravity. Only the largest object in the asteroid belt has been placed in this category: Ceres, at about 975 km (606 mi) across.

Additional Information

Asteroids, sometimes called minor planets, are rocky, airless remnants left over from the early formation of our solar system about 4.6 billion years ago.

Most asteroids can be found orbiting the Sun between Mars and Jupiter within the main asteroid belt. Asteroids range in size from Vesta – the largest at about 329 miles (530 kilometers) in diameter – to bodies that are less than 33 feet (10 meters) across. The total mass of all the asteroids combined is less than that of Earth's Moon.

During the 18th century, astronomers were fascinated by a mathematical expression called Bode's law. It appeared to predict the locations of the known planets, but with one exception...

Bode's law suggested there should be a planet between Mars and Jupiter. When Sir William Herschel discovered Uranus, the seventh planet, in 1781, at a distance that corresponded to Bode's law, scientific excitement about the validity of this mathematical expression reached an all-time high. Many scientists were absolutely convinced that a planet must exist between Mars and Jupiter.
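
For readers who want the expression itself: the Titius-Bode rule predicts planetary distances as a = 0.4 + 0.3 × 2^n AU, with an extra 0.4 AU term for Mercury. The 2.8 AU slot between Mars (1.6) and Jupiter (5.2) is the 'missing planet' position; Ceres, as it turned out, orbits at about 2.77 AU. A short Python sketch:

    # Titius-Bode rule: a = 0.4 + 0.3 * 2**n AU (plus 0.4 for Mercury).
    # Uranus, discovered at 19.2 AU, closely matched the 19.6 AU slot.
    predicted_au = [0.4] + [round(0.4 + 0.3 * 2 ** n, 1) for n in range(7)]
    print(predicted_au)  # [0.4, 0.7, 1.0, 1.6, 2.8, 5.2, 10.0, 19.6]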

By the end of the century, a group of astronomers had banded together to use the observatory at Lilienthal, Germany, owned by Johann Hieronymous Schröter, to hunt down this missing planet. They called themselves, appropriately, the 'Celestial Police'.

Despite their efforts, they were beaten by Giuseppe Piazzi, who discovered what he believed to be the missing planet on New Year's Day, 1801, from the Palermo Observatory.

The new body was named Ceres, but subsequent observations swiftly established that it could not be classed as a major planet, as its diameter is just 940 kilometres (Pluto, then considered the smallest planet, has a diameter of just over 2,300 kilometres). Instead, it was classified as a 'minor planet' and the search for the 'real' planet continued.

Between 1801 and 1808, astronomers tracked down a further three minor planets within this region of space: Pallas, Juno and Vesta, each smaller than Ceres. It became obvious that there was no single large planet out there and enthusiasm for the search waned.

A fifth asteroid, Astraea, was discovered in 1845 and interest in the asteroids as a new 'class' of celestial object began to build. In fact, since that time new asteroids have been discovered almost every year.

It soon became obvious that a 'belt' of asteroids existed between Mars and Jupiter. This collection of space debris was the 'missing planet'. It was almost certainly prevented from forming by the large gravitational field of adjacent Jupiter.

Now there are a number of telescopes dedicated to the task of finding new asteroids. Specifically, these instruments are geared towards finding any asteroids that cross the Earth's orbit and may therefore pose an impact hazard.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2328 2024-10-01 00:04:25

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,283

Re: Miscellany

2328) Table

Gist

A table is an item of furniture with a raised flat top, supported most commonly by 1 to 4 legs (although some can have more). It is used as a surface for working at, eating from, or placing things on.

Summary

Table, basic article of furniture, known and used in the Western world since at least the 7th century BCE, consisting of a flat slab of stone, metal, wood, or glass supported by trestles, legs, or a pillar.

Egyptian tables were made of wood, Assyrian of metal, and Grecian usually of bronze. Roman tables took on quite elaborate forms, the legs carved in the shapes of animals, sphinxes, or grotesque figures. Cedar and other exotic woods with a decorative grain were employed for the tops, and the tripod legs were made of bronze or other metals.

Early medieval tables were of a fairly basic type, but there were certain notable exceptions; Charlemagne, for example, possessed two tables of silver and one of gold, probably constructed of wood covered with thin sheets of metal. With the growing formality of life in the feudal period, tables took on a greater social significance. Although small tables were used in private apartments, in the great hall of a feudal castle the necessity of feeding a host of retainers stimulated the development of an arrangement whereby the master and his guests sat at a rectangular table on a dais surmounted by a canopy, while the rest of the household sat at tables placed at right angles to this one.

One of the few surviving examples of a large (and much restored) round table dating from the 15th century is at Winchester Castle in Hampshire, England. For the most part, circular tables were intended for occasional uses. The most common type of large medieval dining table was of trestle construction, consisting of massive boards of oak or elm resting on a series of central supports to which they were affixed by pegs, which could be removed and the table dismantled. Tables with attached legs, joined by heavy stretchers fixed close to the floor, appeared in the 15th century. They were of fixed size and heavy to move, but in the 16th century an ingenious device known as a draw top made it possible to double the length of the table. The top was composed of three leaves, two of which could be placed under the third and extended on runners when required. Such tables were usually made of oak or elm but sometimes of walnut or cherry. The basic principle involved is still applied to some extending tables.

Growing technical sophistication meant that from the middle of the 16th century onward tables began to reflect far more closely than before the general design tendencies of their period and social context. The typical Elizabethan draw table, for instance, was supported on four vase-shaped legs terminating in Ionic capitals, reflecting perfectly the boisterous decorative atmosphere of the age. The despotic monarchies that yearned after the splendours of Louis XIV’s Versailles promoted a fashion for tables of conspicuous opulence. Often made in Italy, these tables, which were common between the late 17th and mid-18th century, were sometimes inlaid with elaborate patterns of marquetry or rare marbles; others, such as that presented by the City of London to Charles II on his restoration as king of England, were entirely covered in silver or were made of ebony with silver mountings.

Increasing contact with the East in the 18th century stimulated a taste for lacquered tables for occasional use. Indeed, the pattern of development in the history of the table that became apparent in this century was that, whereas the large dining table showed few stylistic changes, growing sophistication of taste and higher standards of living led to an increasing degree of specialization in occasional-table design. A whole range of particular functions was now being catered to, a tendency that persisted until at least the beginning of the 20th century. Social customs such as tea-drinking fueled the development of these specialized forms. The exploitation of man-made materials in the second half of the 20th century produced tables of such materials as plastic, metal, fibreglass, and even corrugated cardboard.

Details

A table is an item of furniture with a raised flat top, supported most commonly by 1 to 4 legs (although some can have more). It is used as a surface for working at, eating from, or placing things on. Some common types of tables are the dining room table, which is used for seated persons to eat meals; the coffee table, which is a low table used in living rooms to display items or serve refreshments; and the bedside table, which is commonly used to place an alarm clock and a lamp. There are also a range of specialized types of tables, such as drafting tables, used for doing architectural drawings, and sewing tables.

Common design elements include:

* Top surfaces of various shapes, including rectangular, square, rounded, semi-circular or oval
* Legs arranged in two or more similar pairs; a table usually has four legs, though some have three legs, rest on a single heavy pedestal, or are attached to a wall
* Several geometries of folding table that can be collapsed into a smaller volume (e.g., a TV tray, which is a portable, folding table on a stand)
* Heights ranging up and down from the most common 18–30 inches (46–76 cm) range, often reflecting the height of chairs or bar stools used as seating for people making use of a table, as for eating or performing various manipulations of objects resting on a table
* A huge range of sizes, from small bedside tables to large dining room tables and huge conference room tables
* Presence or absence of drawers, shelves or other areas for storing items
* Expansion of the table surface by insertion of leaves or locking hinged drop leaf sections into a horizontal position (this is particularly common for dining tables)

Etymology

The word table is derived from Old English tabele, derived from the Latin word tabula ('a board, plank, flat top piece'), which replaced the Old English bord; its current spelling reflects the influence of the French table.

History

Some very early tables were made and used by the Ancient Egyptians around 2500 BC, using wood and alabaster. They were often little more than stone platforms used to keep objects off the floor, though a few examples of wooden tables have been found in tombs. Food and drinks were usually put on large plates set on a pedestal for eating. The Egyptians made use of various small tables and elevated playing boards. The Chinese also created very early tables in order to pursue the arts of writing and painting, as did people in Mesopotamia, where various metals were used.

The Greeks and Romans made more frequent use of tables, notably for eating, although Greek tables were pushed under a bed after use. The Greeks invented a piece of furniture very similar to the guéridon. Tables were made of marble or wood and metal (typically bronze or silver alloys), sometimes with richly ornate legs. Later, the larger rectangular tables were made of separate platforms and pillars. The Romans also introduced a large, semicircular table to Italy, the mensa lunata. Plutarch mentions use of "tables" by Persians.

Furniture during the Middle Ages is not as well known as that of earlier or later periods, and most sources show the types used by the nobility. In the Eastern Roman Empire, tables were made of metal or wood, usually with four feet and frequently linked by x-shaped stretchers. Tables for eating were large and often round or semicircular. A combination of a small round table and a lectern seemed very popular as a writing table.

In western Europe, although there was variety of form — the circular, semicircular, oval and oblong were all in use — tables appear to have been portable and supported upon trestles fixed or folding, which were cleared out of the way at the end of a meal. Thus Charlemagne possessed three tables of silver and one of gold, probably made of wood and covered with plates of the precious metals. The custom of serving dinner at several small tables, which is often supposed to be a modern refinement, was followed in the French châteaux, and probably also in the English castles, as early as the 13th century.

Refectory tables first appeared at least as early as the 17th century, as an advancement of the trestle table; these tables were typically quite long and wide and capable of supporting a sizeable banquet in the great hall or other reception room of a castle.

Shape, height, and function

Tables come in a wide variety of materials, shapes, and heights dependent upon their origin, style, intended use and cost. Many tables are made of wood or wood-based products; some are made of other materials including metal and glass. Most tables are composed of a flat surface and one or more supports (legs). A table with a single, central foot is a pedestal table. Long tables often have extra legs for support.

Table tops can be in virtually any shape, although rectangular, square, round (e.g. the round table), and oval tops are the most frequent. Others have higher surfaces for personal use while either standing or sitting on a tall stool.

Many tables have tops that can be adjusted to change their height, position, shape, or size, either with foldable, sliding, or extension parts that can alter the shape of the top. Some tables are entirely foldable for easy transportation (e.g. for camping) or storage (e.g. TV trays). Small tables in trains and aircraft may be fixed or foldable, although they are sometimes considered simply convenient shelves rather than tables.

Tables can be freestanding or designed for placement against a wall. Tables designed to be placed against a wall are known as pier tables[9] or console tables (French: console, "support bracket") and may be bracket-mounted (traditionally), like a shelf, or have legs, which sometimes imitate the look of a bracket-mounted table.

Types

Tables of various shapes, heights, and sizes are designed for specific uses:

* Dining room tables are designed to be used for formal dining.
* Bedside tables, nightstands, or night tables are small tables used in a bedroom. They are often used for convenient placement of a small lamp, alarm clock, glasses, or other personal items.
* Drop-leaf tables have a fixed section in the middle and a hinged section (leaf) on either side that can be folded down.
* Gateleg tables have one or two hinged leaves supported by hinged legs.
* Coffee tables are low tables designed for use in a living room, in front of a sofa, for convenient placement of drinks, books, or other personal items.
* Refectory tables are long tables designed to seat many people for meals.
* Drafting tables usually have a top that can be tilted for making a large or technical drawing. They may also have a ruler or similar element integrated.
* Workbenches are sturdy tables, often elevated for use with a high stool or while standing, which are used for assembly, repairs, or other precision handwork.
* Nested tables are a set of small tables of graduated size that can be stacked together, each fitting within the one immediately larger. They are for occasional use (such as a tea party), hence the stackable design.

Specialized types

Historically, various types of tables have become popular for specific uses:

* Loo tables were very popular in the 18th and 19th centuries as candlestands, tea tables, or small dining tables, although they were originally made for the popular card game loo or lanterloo. Their typically round or oval tops have a tilting mechanism, which enables them to be stored out of the way (e.g. in room corners) when not in use. A further development in this direction was the "birdcage" table, the top of which could both revolve and tilt.

* Pembroke tables, first introduced during the 18th century, were popular throughout the 19th century. Their main characteristic was a rectangular or oval top with folding or drop leaves on each side. Most examples have one or more drawers and four legs, sometimes connected by stretchers. Their design meant they could easily be stored or moved about and conveniently opened for serving tea, dining, writing, or other occasional uses. One account attributes the design of the Pembroke table to Henry Herbert, 9th Earl of Pembroke (1693-1751).

* Sofa tables are similar to Pembroke tables and usually have longer and narrower tops. They were specifically designed for placement directly in front of sofas for serving tea, writing, dining, or other convenient uses. Generally speaking, a sofa table is a tall, narrow table used behind a sofa to hold lamps or decorative objects.

* Work tables were small tables designed to hold sewing materials and implements, providing a convenient work place for women who sewed. They appeared during the 18th century and were popular throughout the 19th century. Most examples have rectangular tops, sometimes with folding leaves, and usually one or more drawers fitted with partitions. Early examples typically have four legs, often standing on casters, while later examples sometimes have turned columns or other forms of support.

* Drum tables are round tables introduced for writing, with drawers around the platform.

* End tables are small tables typically placed beside couches or armchairs. Often lamps will be placed on an end table.

* Overbed tables are narrow rectangular tables whose top is designed for use above the bed, especially for hospital patients.

* Billiards tables are bounded tables on which billiards-type games are played. All provide a flat surface, usually composed of slate and covered with cloth, elevated above the ground.

* Chess tables are a type of games table that integrates a chessboard.

* Table tennis tables are usually masonite or a similar wood, layered with a smooth low-friction coating. They are divided into two halves by a low net, which separates opposing players.

* Poker tables or card tables are used to play poker or other card games.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2329 2024-10-02 00:05:44

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,283

Re: Miscellany

2329) Calculus (Medicine)

Gist

A calculus (pl.: calculi), often called a stone, is a concretion of material, usually mineral salts, that forms in an organ or duct of the body.

Calculus, renal: A stone in the kidney (or lower down in the urinary tract). Also called a kidney stone. The stones themselves are called renal calculi. The word "calculus" (plural: calculi) is the Latin word for pebble. Renal stones are a common cause of blood in the urine and pain in the abdomen, flank, or groin.

Treating a staghorn calculus usually means surgery. The entire stone, even small pieces, must be removed so they can't lead to infection or the formation of new stones. One way to remove staghorn stones is with a percutaneous nephrolithotomy (PCNL).

A urologist can remove the kidney stone or break it into small pieces with the following treatments: Shock wave lithotripsy. The doctor can use shock wave lithotripsy to blast the kidney stone into small pieces. The smaller pieces of the kidney stone then pass through your urinary tract.

What is a calculi in the kidneys?

Kidney stones (also called renal calculi, nephrolithiasis or urolithiasis) are hard deposits made of minerals and salts that form inside your kidneys. Diet, excess body weight, some medical conditions, and certain supplements and medications are among the many causes of kidney stones.

Summary

Kidney stone is a common clinical problem faced by clinicians. The prevalence of the disease is increasing worldwide. As the affected population is getting younger and recurrence rates are high, dietary modifications, lifestyle changes, and medical management are essential. Patients with recurrent stone disease need careful evaluation for underlying metabolic disorder. Medical management should be used judiciously in all patients with kidney stones, with appropriate individualization. This chapter focuses on medical management of kidney stones.

Urinary tract stones (urolithiasis) have been known to mankind since antiquity. Kidney stone is not a true diagnosis; rather, it suggests a broad variety of underlying diseases. Kidney stones are mainly composed of calcium salts, uric acid, cystine, and struvite. Calcium oxalate and calcium phosphate are the most common types, accounting for >80% of stones, followed by uric acid (8–10%), with cystine and struvite making up the remainder.

The incidence of urolithiasis is increasing globally, with geographic, racial, and gender variation in its occurrence. An epidemiological study (1979) of the Western population found the incidence of urolithiasis to be 124 per 100,000 in males and 36 per 100,000 in females. The lifetime risk of developing urolithiasis is higher in the Middle East (20–25%) and Western countries (10–15%) and lower in African and Asian populations. Stone disease carries a high risk of recurrence after the initial episode: around 50% at 5 years and 70% at 9 years.

A positive family history of stone disease, young age at onset, recurrent urinary tract infections (UTIs), and underlying diseases like renal tubular acidosis (RTA) and hyperparathyroidism are the major risk factors for recurrence. High incidence and recurrence rates add enormous cost and lost workdays.

Though the pathogenesis of stone disease is not fully understood, systematic metabolic evaluation, medical treatment of underlying conditions, and patient-specific modification in diet and lifestyle are effective in reducing the incidence and recurrence of stone disease.

Details

A calculus (pl.: calculi), often called a stone, is a concretion of material, usually mineral salts, that forms in an organ or duct of the body. Formation of calculi is known as lithiasis. Stones can cause a number of medical conditions.

Some common principles (below) apply to stones at any location, but for specifics see the particular stone type in question.

Calculi are not to be confused with gastroliths, which are ingested rather than grown endogenously.

Types

* Calculi in the inner ear are called otoliths
* Calculi in the urinary system are called urinary calculi and include kidney stones (also called renal calculi or nephroliths) and bladder stones (also called vesical calculi or cystoliths). They can have any of several compositions, including mixed. Principal compositions include oxalate and urate.
* Calculi of the gallbladder and bile ducts are called gallstones and are primarily developed from bile salts and cholesterol derivatives.
* Calculi in the nasal passages (rhinoliths) are rare.
* Calculi in the gastrointestinal tract (enteroliths) can be enormous. Individual enteroliths weighing many pounds have been reported in horses.
* Calculi in the stomach are called gastric calculi (not to be confused with gastroliths which are exogenous in nature).
* Calculi in the salivary glands are called salivary calculi (sialoliths).
* Calculi in the tonsils are called tonsillar calculi (tonsilloliths).
* Calculi in the veins are called venous calculi (phleboliths).
* Calculi in the skin, such as in sweat glands, are not common but occasionally occur.
* Calculi in the navel are called omphaloliths.

Calculi are usually asymptomatic, and large calculi may have required many years to grow to their large size.

Cause

* From an underlying abnormal excess of the mineral, e.g., elevated levels of calcium (hypercalcaemia) that may cause kidney stones, or dietary factors for gallstones.
* Local conditions at the site in question that promote their formation, e.g., local bacteria action (in kidney stones) or slower fluid flow rates, a possible explanation of the majority of salivary duct calculus occurring in the submandibular salivary gland.
* Enteroliths are a type of calculus found in the intestines of animals (mostly ruminants) and humans, and may be composed of inorganic or organic constituents.
* Bezoars are lumps of indigestible material in the stomach and/or intestines; most commonly, they consist of hair (in which case they are also known as hairballs). A bezoar may form the nidus of an enterolith.
* In kidney stones, calcium oxalate is the most common mineral type (see nephrolithiasis). Uric acid is the second most common mineral type, but an in vitro study showed uric acid stones and crystals can promote the formation of calcium oxalate stones.

Pathophysiology

Stones can cause disease by several mechanisms:

* Irritation of nearby tissues, causing pain, swelling, and inflammation
* Obstruction of an opening or duct, interfering with normal flow and disrupting the function of the organ in question
* Predisposition to infection (often due to disruption of normal flow)

A number of important medical conditions are caused by stones:

* Nephrolithiasis (kidney stones)
** Can cause hydronephrosis (swollen kidneys) and kidney failure
** Can predispose to pyelonephritis (kidney infections)
** Can progress to urolithiasis
* Urolithiasis (urinary bladder stones)
** Can progress to bladder outlet obstruction
* Cholelithiasis (gallstones)
** Can predispose to cholecystitis (gall bladder infections) and ascending cholangitis (biliary tree infection)
** Can progress to choledocholithiasis (gallstones in the bile duct) and gallstone pancreatitis (inflammation of the pancreas)
* Gastric calculi can cause colic, obstruction, torsion, and necrosis.

Diagnosis

Diagnostic workup varies by the stone type, but in general:

* Clinical history and physical examination
* Imaging studies:
** Some stone types (mainly those with substantial calcium content) can be detected on X-ray and CT scan
** Many stone types can be detected by ultrasound

Factors contributing to stone formation (as under Cause, above) are often tested:
* Laboratory testing can give levels of relevant substances in blood or urine
* Some stones can be directly recovered (at surgery, or when they leave the body spontaneously) and sent to a laboratory for analysis of content

Treatment

Modification of predisposing factors can sometimes slow or reverse stone formation. Treatment varies by stone type, but, in general:

* Healthy diet and exercise (promotes flow of energy and nutrition)
* Drinking fluids (water, or drinks containing lemon juice or diluted vinegar, found e.g. in pickles, salad dressings, sauces, soups, and shrub mixes)
* Surgery (lithotomy)
* Medication / antibiotics
* Extracorporeal shock wave lithotripsy (ESWL) for removal of calculi

History

The earliest operation for curing stones is described in the Sushruta Samhita (6th century BCE). The operation involved exposure and going up through the floor of the bladder.

The care of this disease was forbidden to physicians who had taken the Hippocratic Oath, because:

* There was a high probability of intraoperative and postoperative surgical complications like infection or bleeding
* Physicians did not perform surgery, as in ancient cultures medicine and surgery were two different professions

Etymology

The word comes from Latin calculus "small stone", from calx "limestone, lime", probably related to Greek chalix "small stone, pebble, rubble", which many trace to a Proto-Indo-European language root for "split, break up". Calculus was a term used for various kinds of stones. In the 18th century it came to be used for accidental or incidental mineral buildups in human and animal bodies, like kidney stones and minerals on teeth.

Additional Information

If your doctor suspects that you have a kidney stone, you may have diagnostic tests and procedures, such as:

* Blood testing. Blood tests may reveal too much calcium or uric acid in your blood. Blood test results help monitor the health of your kidneys and may lead your doctor to check for other medical conditions.
* Urine testing. The 24-hour urine collection test may show that you're excreting too many stone-forming minerals or too few stone-preventing substances. For this test, your doctor may request that you perform two urine collections over two consecutive days.
* Imaging. Imaging tests may show kidney stones in your urinary tract. High-speed or dual energy computerized tomography (CT) may reveal even tiny stones. Simple abdominal X-rays are used less frequently because this kind of imaging test can miss small kidney stones.

Ultrasound, a noninvasive test that is quick and easy to perform, is another imaging option to diagnose kidney stones.

* Analysis of passed stones. You may be asked to urinate through a strainer to catch stones that you pass. Lab analysis will reveal the makeup of your kidney stones. Your doctor uses this information to determine what's causing your kidney stones and to form a plan to prevent more kidney stones.

[Image: right renal pelvic cystine stone]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2330 2024-10-02 20:04:33

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,283

Re: Miscellany

2330) Cafeteria

Gist

A cafeteria is a restaurant in which the customers serve themselves or are served at a counter but carry their own food to their tables.

A cafeteria is a self-service restaurant in a large shop or workplace.

It is a restaurant, especially one for staff or workers, where people collect their meals themselves and carry them to their tables.

Summary

A cafeteria is a self-service restaurant in which customers select various dishes from an open-counter display. The food is usually placed on a tray, paid for at a cashier’s station, and carried to a dining table by the customer. The modern cafeteria, designed to facilitate a smooth flow of patrons, is particularly well adapted to the needs of institutions—schools, hospitals, corporations—attempting to serve large numbers of people efficiently and inexpensively. In addition to providing quick service, the cafeteria requires fewer service personnel than most other commercial eating establishments.

Early versions of self-service restaurants began to appear in the late 19th century in the United States. In 1891 the Young Women’s Christian Association (YWCA) of Kansas City, Missouri, established what some food-industry historians consider the first cafeteria. This institution, founded to provide low-cost meals for working women, was patterned after a Chicago luncheon club for women where some aspects of self-service were already in practice. Cafeterias catering to the public opened in several U.S. cities in the 1890s, but cafeteria service did not become widespread until shortly after the turn of the century, when it became the accepted method of providing food for employees of factories and other large businesses.

Details

A cafeteria, sometimes called a canteen outside the U.S. and Canada, is a type of food service location in which there is little or no waiting staff table service, whether in a restaurant or within an institution such as a large office building or school; a school dining location is also referred to as a dining hall or lunchroom (in American English). Cafeterias are different from coffeehouses, although the English term came from the Spanish term cafetería, which carries the same meaning.

Instead of table service, there are food-serving counters/stalls or booths, either in a line or allowing arbitrary walking paths. Customers take the food that they desire as they walk along, placing it on a tray. In addition, there are often stations where customers order food, particularly items such as hamburgers or tacos which must be served hot and can be immediately prepared with little waiting. Alternatively, the patron is given a number and the item is brought to their table. For some food items and drinks, such as sodas, water, or the like, customers collect an empty container, pay at check-out, and fill the container after check-out. Free unlimited-second servings are often allowed under this system. For legal purposes (and the consumption patterns of customers), this system is rarely, if at all, used for alcoholic drinks in the United States.

Customers are either charged a flat rate for admission (as in a buffet) or pay at check-out for each item. Some self-service cafeterias charge by the weight of items on a patron's plate. In universities and colleges, some students pay for three meals a day by making a single large payment for the entire semester.

As cafeterias require few employees, they are often found within a larger institution, catering to the employees or clientele of that institution. For example, schools, colleges and their residence halls, department stores, hospitals, museums, places of worship, amusement parks, military bases, prisons, factories, and office buildings often have cafeterias. Although some of such institutions self-operate their cafeterias, many outsource their cafeterias to a food service management company or lease space to independent businesses to operate food service facilities. The three largest food service management companies servicing institutions are Aramark, Compass Group, and Sodexo.

At one time, upscale cafeteria-style restaurants dominated the culture of the Southern United States, and to a lesser extent the Midwest. There were numerous prominent chains: Bickford's, Morrison's Cafeteria, Piccadilly Cafeteria, S&W Cafeteria, Apple House, Luby's, K&W, Britling, Wyatt's Cafeteria, and Blue Boar among them. Currently, two Midwestern chains still exist, Sloppy Jo's Lunchroom and Manny's, both located in Illinois. There were also several smaller chains, usually located in and around a single city. These institutions, except K&W, went into decline in the 1960s with the rise of fast food and were largely finished off in the 1980s by the rise of all-you-can-eat buffets and other casual dining establishments. A few chains—particularly Luby's and Piccadilly Cafeterias (which took over the Morrison's chain in 1998)—continue to fill some of the gap left by the decline of the older chains. Some of the smaller Midwestern chains, such as MCL Cafeterias centered in Indianapolis, are still in business.

History

Perhaps the first self-service restaurant (not necessarily a cafeteria) in the U.S. was the Exchange Buffet in New York City, which opened September 4, 1885, and catered to an exclusively male clientele. Food was purchased at a counter and patrons ate standing up. This represents the predecessor of two formats: the cafeteria, described below, and the automat.

During the 1893 World's Columbian Exposition in Chicago, entrepreneur John Kruger built an American version of the smörgåsbords he had seen while traveling in Sweden. Emphasizing simplicity and light fare, he called it the 'Cafeteria', Spanish for 'coffee shop'. The exposition attracted over 27 million visitors (half the U.S. population at the time) in six months, and it was because of Kruger's operation that the United States first heard the term and experienced the self-service dining format.

Meanwhile, the chain of Childs Restaurants quickly grew from about 10 locations in New York City in 1890 to hundreds across the U.S. and Canada by 1920. Childs is credited with the innovation of adding trays and a "tray line" to the self-service format, introduced in 1898 at their 130 Broadway location. Childs did not change its format of sit-down dining, however. This was soon the standard design for most Childs Restaurants, and, ultimately, the dominant method for succeeding cafeterias.

It has been conjectured that the 'cafeteria craze' started in May 1905, when Helen Mosher opened a downtown L.A. restaurant where people chose their food at a long counter and carried their trays to their tables. California has a long history with the cafeteria format, notably the Boos Brothers Cafeterias, Clifton's, and Schaber's. The earliest cafeterias in California were opened at least 12 years after Kruger's Cafeteria, and Childs already had many locations around the country. Horn & Hardart, an automat chain (a format distinct from cafeterias), was well established in the mid-Atlantic region before 1900.

Between 1960 and 1981, cafeterias were overtaken in popularity by fast food restaurants and fast casual restaurant formats.

Outside the United States, the development of cafeterias can be observed in France as early as 1881 with the passing of the Ferry Law. This law mandated that public school education be available to all children. Accordingly, the government also encouraged schools to provide meals for students in need, thus resulting in the conception of cafeterias or cantine (in French). According to Abramson, before the creation of cafeterias, only some students could bring home-cooked meals and be properly fed in schools.

As cafeterias in France became more popular, their use spread beyond schools and into the workforce. Thus, due to pressure from workers and eventually new labor laws, sizable businesses had to, at minimum, provide established eating areas for their workers. Support for this practice was also reinforced by the effects of World War II when the importance of national health and nutrition came under great attention.

Other names

A cafeteria in a U.S. military installation is known as a chow hall, a mess hall, a galley, a mess deck, or, more formally, a dining facility, often abbreviated to DF, whereas in common British Armed Forces parlance, it is known as a cookhouse or mess. Students in the United States often refer to cafeterias as lunchrooms, which also often serve school breakfast. Some school cafeterias in the U.S. and Canada have stages and movable seating that allow use as auditoriums; these rooms are known as cafetoriums or all-purpose rooms. In some older facilities, a school's gymnasium often doubles as a cafeteria, with the kitchen facility hidden behind a rolling partition outside meal hours. Newer rooms that also act as the school's grand entrance hall for crowd control and are used for multiple purposes are often called the commons.

Cafeterias serving university dormitories are sometimes called dining halls or dining commons. A food court is a type of cafeteria found in many shopping malls and airports featuring multiple food vendors or concessions, though a food court can equally be regarded as a type of restaurant, being aligned more with public than with institutional dining. Some institutions, especially schools, have food courts with stations offering different types of food served by the institution itself (self-operation) or a single contract management company, rather than leasing space to numerous businesses. Some monasteries, boarding schools, and older universities refer to their cafeteria as a refectory. Modern-day British cathedrals and abbeys, notably in the Church of England, often use the word refectory to describe a cafeteria open to the public. Historically, the refectory was generally only used by monks and priests. For example, although the original 800-year-old refectory at Gloucester Cathedral (the stage setting for dining scenes in the Harry Potter movies) is now mostly used as a choir practice area, the relatively modern 300-year-old extension, now used as a cafeteria by staff and public alike, is today referred to as the refectory.

A cafeteria located within a movie or TV studio complex is often called a commissary.

College cafeteria

In American English, a college cafeteria is a cafeteria intended for college students; in British English, it is often called the refectory. These cafeterias can be part of a residence hall or in a separate building. Many colleges employ their own students to work in the cafeteria. The number of meals served to students varies from school to school but is normally around 21 meals per week. As in other cafeterias, diners take a tray and select the food they want, but at some campuses, instead of paying money at each meal, they pay beforehand by purchasing a meal plan.

The method of payment for college cafeterias is commonly in the form of a meal plan, whereby the patron pays a certain amount at the start of the semester and details of the plan are stored on a computer system. Student ID cards are then used to access the meal plan. Meal plans can vary widely in their details and are often not necessary to eat at a college cafeteria. Typically, the college tracks students' plan usage by counting the number of predefined meal servings, points, dollars, or buffet dinners. The plan may give the student a certain number of any of the above per week or semester and they may or may not roll over to the next week or semester.

Many schools offer several different options for using their meal plans. The main cafeteria is usually where most of the meal plan is used but smaller cafeterias, cafés, restaurants, bars, or even fast food chains located on campus, on nearby streets, or in the surrounding town or city may accept meal plans. A college cafeteria system often has a virtual monopoly on the students due to an isolated location or a requirement that residence contracts include a full meal plan.
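The meal-plan accounting described above (counted servings that may or may not roll over from one period to the next) can be pictured as a small data model. The Python sketch below is purely illustrative: the MealPlan class and its fields are invented for this example and are not taken from any real campus system.

class MealPlan:
    """Toy model of a weekly meal plan (hypothetical, for illustration)."""

    def __init__(self, swipes_per_week, rolls_over=False):
        self.swipes_per_week = swipes_per_week
        self.rolls_over = rolls_over          # whether unused swipes carry over
        self.balance = swipes_per_week

    def use_swipe(self):
        if self.balance <= 0:
            raise ValueError("no meal swipes left")
        self.balance -= 1

    def start_new_week(self):
        if self.rolls_over:
            self.balance += self.swipes_per_week   # unused swipes carry over
        else:
            self.balance = self.swipes_per_week    # unused swipes are forfeited

plan = MealPlan(swipes_per_week=21)   # the "around 21 meals per week" case
plan.use_swipe()
print(plan.balance)                   # 20 swipes left this week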

Taiwanese cafeteria

There are many self-service bento shops in Taiwan. The shop places dishes in a self-service area for customers to pick up themselves. After choosing, customers go to the cashier to check out; at many shops, staff assess the price by visually checking the amount of food, while other shops charge by weight.

[Image: cafeteria]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2331 2024-10-03 16:49:51

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,283

Re: Miscellany

2331) Antenna

Gist

An antenna is a metallic structure that captures and/or transmits radio electromagnetic waves. Antennas come in all shapes and sizes, from the little ones on your roof for watching TV to the really big ones that capture signals from satellites millions of miles away.

An antenna is a device that is made out of a conductive, metallic material and has the purpose of transmitting and/or receiving electromagnetic waves, usually radio wave signals. The purpose of transmitting and receiving radio waves is to communicate or broadcast information at the speed of light.

Summary

An antenna is a metallic structure that captures and/or transmits radio electromagnetic waves. Antennas come in all shapes and sizes, from the little ones on your roof for watching TV to the really big ones that capture signals from satellites millions of miles away.

The antennas that Space Communications and Navigation (SCaN) uses are special bowl-shaped antennas, called parabolic antennas, that focus signals at a single point. The bowl shape is what allows the antennas to both capture and transmit electromagnetic waves. These antennas move horizontally (measured in hour angle or azimuth) and vertically (measured in declination or elevation) in order to capture and transmit the signal.

SCaN has over 65 antennas that help capture and transmit data to and from satellites in space.

Details

In radio engineering, an antenna (American English) or aerial (British English) is an electronic device that converts an alternating electric current into radio waves (transmitting), or radio waves into an electric current (receiving). It is the interface between radio waves propagating through space and electric currents moving in metal conductors, used with a transmitter or receiver. In transmission, a radio transmitter supplies an electric current to the antenna's terminals, and the antenna radiates the energy from the current as electromagnetic waves (radio waves). In reception, an antenna intercepts some of the power of a radio wave in order to produce an electric current at its terminals, that is applied to a receiver to be amplified. Antennas are essential components of all radio equipment.

An antenna is an array of conductors (elements), electrically connected to the receiver or transmitter. Antennas can be designed to transmit and receive radio waves in all horizontal directions equally (omnidirectional antennas), or preferentially in a particular direction (directional, high-gain, or "beam" antennas). An antenna may also include components not connected to the transmitter, such as parabolic reflectors, horns, or parasitic elements, which serve to direct the radio waves into a beam or other desired radiation pattern. Strong directivity and good efficiency when transmitting are hard to achieve with antennas whose dimensions are much smaller than a half wavelength.

The first antennas were built in 1888 by German physicist Heinrich Hertz in his pioneering experiments to prove the existence of electromagnetic waves predicted by the 1867 electromagnetic theory of James Clerk Maxwell. Hertz placed dipole antennas at the focal point of parabolic reflectors for both transmitting and receiving. Starting in 1895, Guglielmo Marconi began development of antennas practical for long-distance, wireless telegraphy, for which he received the 1909 Nobel Prize in physics.

Terminology

The words antenna and aerial are used interchangeably. Occasionally the equivalent term "aerial" is used to specifically mean an elevated horizontal wire antenna. The origin of the word antenna relative to wireless apparatus is attributed to Italian radio pioneer Guglielmo Marconi. In the summer of 1895, Marconi began testing his wireless system outdoors on his father's estate near Bologna and soon began to experiment with long wire "aerials" suspended from a pole. In Italian a tent pole is known as l'antenna centrale, and the pole with the wire was simply called l'antenna. Until then wireless radiating transmitting and receiving elements were known simply as "terminals". Because of his prominence, Marconi's use of the word antenna spread among wireless researchers and enthusiasts, and later to the general public.

Antenna may refer broadly to an entire assembly including support structure, enclosure (if any), etc., in addition to the actual RF current-carrying components. A receiving antenna may include not only the passive metal receiving elements, but also an integrated preamplifier or mixer, especially at and above microwave frequencies.

Additional Information

antenna, component of radio, television, and radar systems that directs incoming and outgoing radio waves. Antennas are usually metal and have a wide variety of configurations, from the mastlike devices employed for radio and television broadcasting to the large parabolic reflectors used to receive satellite signals and the radio waves generated by distant astronomical objects.

The first antenna was devised by the German physicist Heinrich Hertz. During the late 1880s he carried out a landmark experiment to test the theory of the British mathematician-physicist James Clerk Maxwell that visible light is only one example of a larger class of electromagnetic effects that could pass through air (or empty space) as a succession of waves. Hertz built a transmitter for such waves consisting of two flat, square metallic plates, each attached to a rod, with the rods in turn connected to metal spheres spaced close together. An induction coil connected to the spheres caused a spark to jump across the gap, producing oscillating currents in the rods. The reception of waves at a distant point was indicated by a spark jumping across a gap in a loop of wire.

The Italian physicist Guglielmo Marconi, the principal inventor of wireless telegraphy, constructed various antennas for both sending and receiving, and he also discovered the importance of tall antenna structures in transmitting low-frequency signals. In the early antennas built by Marconi and others, operating frequencies were generally determined by antenna size and shape. In later antennas frequency was regulated by an oscillator, which generated the transmitted signal.

More powerful antennas were constructed during the 1920s by combining a number of elements in a systematic array. Metal horn antennas were devised during the subsequent decade following the development of waveguides that could direct the propagation of very high-frequency radio signals.

Over the years, many types of antennas have been developed for different purposes. An antenna may be designed specifically to transmit or to receive, although these functions may be performed by the same antenna. A transmitting antenna, in general, must be able to handle much more electrical energy than a receiving antenna. An antenna also may be designed to transmit at specific frequencies. In the United States, amplitude modulation (AM) radio broadcasting, for instance, is done at frequencies between 535 and 1,605 kilohertz (kHz); at these frequencies, a wavelength is hundreds of metres or yards long, and the size of the antenna is therefore not critical. Frequency modulation (FM) broadcasting, on the other hand, is carried out at a range from 88 to 108 megahertz (MHz). At these frequencies a typical wavelength is about 3 metres (10 feet) long, and the antenna must be adjusted more precisely to the electromagnetic wave, both in transmitting and in receiving. Antennas may consist of single lengths of wire or rods in various shapes (dipole, loop, and helical antennas), or of more elaborate arrangements of elements (linear, planar, or electronically steerable arrays). Reflectors and lens antennas use a parabolic dish to collect and focus the energy of radio waves, in much the same way that a parabolic mirror in a reflecting telescope collects light rays. Directional antennas are designed to be aimed directly at the signal source and are used in direction-finding.

More Information

An antenna or aerial is a metal device made to send or receive radio waves. Many electronic devices like radio, television, radar, wireless LAN, cell phone, and GPS need antennas to do their job. Antennas work both in air and outer space.

The word 'antenna' comes from Guglielmo Marconi's tests with wireless equipment in 1895. For the tests, he used a 2.5-meter-long pole antenna; in Italian a tent pole is called 'l'antenna centrale', so his antenna was simply called 'l'antenna'. After that, the word 'antenna' became popular and took on the meaning it has today. The plural of antenna is either antennas or antennae (the U.S. and Canada tend to use antennas more than other places).

Types of antennas

Each one is made to work over a specific frequency range. An antenna's length or size usually depends on the wavelength it uses, which is the speed of light divided by the frequency, so it is inversely proportional to frequency.

Different kinds of antenna have different purposes. For example, the isotropic radiator is an imaginary antenna that sends signals equally in all directions. The dipole antenna is simply two wires, with one end of each wire connected to the radio and the other end standing free in space. It sends or receives signals in all directions except where the wires are pointing. Some antennas are more directional: horn antennas are used where high gain is needed and the wavelength is short, while satellite television and radio telescopes mostly use dish antennas.
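Since antenna size tracks wavelength, a quick Python calculation makes the numbers above concrete. This is only a rough sketch: the 0.95 "end effect" shortening factor for a practical half-wave dipole is a common rule of thumb I am assuming, not something stated above.

C = 299_792_458.0  # speed of light in m/s

def wavelength_m(freq_hz):
    # wavelength = speed of light / frequency
    return C / freq_hz

def half_wave_dipole_m(freq_hz, end_effect=0.95):
    # a practical half-wave dipole is slightly shorter than wavelength / 2
    return end_effect * wavelength_m(freq_hz) / 2

print(wavelength_m(100e6))         # FM broadcast at 100 MHz: ~3.0 m
print(wavelength_m(1e6))           # AM broadcast at 1 MHz: ~300 m
print(half_wave_dipole_m(100e6))   # ~1.42 m total element length for FM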

[Image: 15 m antenna]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2332 2024-10-04 16:27:11

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,283

Re: Miscellany

2332) Solar System

Gist

The Solar System is the gravitationally bound system of the Sun and the objects that orbit it. It formed about 4.6 billion years ago when a dense region of a molecular cloud collapsed, forming the Sun and a protoplanetary disc.

Summary

The solar system is the assemblage consisting of the Sun—an average star in the Milky Way Galaxy—and those bodies orbiting around it: 8 (formerly 9) planets with more than 210 known planetary satellites (moons); many asteroids, some with their own satellites; comets and other icy bodies; and vast reaches of highly tenuous gas and dust known as the interplanetary medium. The solar system is part of the "observable universe," the region of space that humans can actually or theoretically observe with the aid of technology. Unlike the observable universe, the universe is possibly infinite.

The Sun, Moon, and brightest planets were visible to the naked eyes of ancient astronomers, and their observations and calculations of the movements of these bodies gave rise to the science of astronomy. Today the amount of information on the motions, properties, and compositions of the planets and smaller bodies has grown to immense proportions, and the range of observational instruments has extended far beyond the solar system to other galaxies and the edge of the known universe. Yet the solar system and its immediate outer boundary still represent the limit of our physical reach, and they remain the core of our theoretical understanding of the cosmos as well. Earth-launched space probes and landers have gathered data on planets, moons, asteroids, and other bodies, and this data has been added to the measurements collected with telescopes and other instruments from below and above Earth’s atmosphere and to the information extracted from meteorites and from Moon rocks returned by astronauts. All this information is scrutinized in attempts to understand in detail the origin and evolution of the solar system—a goal toward which astronomers continue to make great strides.

Composition of the solar system

Located at the centre of the solar system and influencing the motion of all the other bodies through its gravitational force is the Sun, which in itself contains more than 99 percent of the mass of the system. The planets, in order of their distance outward from the Sun, are Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune. Four planets—Jupiter through Neptune—have ring systems, and all but Mercury and Venus have one or more moons. Pluto had been officially listed among the planets since it was discovered in 1930 orbiting beyond Neptune, but in 1992 an icy object was discovered still farther from the Sun than Pluto. Many other such discoveries followed, including an object named Eris that appears to be at least as large as Pluto. It became apparent that Pluto was simply one of the larger members of this new group of objects, collectively known as the Kuiper belt. Accordingly, in August 2006 the International Astronomical Union (IAU), the organization charged by the scientific community with classifying astronomical objects, voted to revoke Pluto’s planetary status and place it under a new classification called dwarf planet. For a discussion of that action and of the definition of planet approved by the IAU, see planet.

Any natural solar system object other than the Sun, a planet, a dwarf planet, or a moon is called a small body; these include asteroids, meteoroids, and comets. Most of the more than one million asteroids, or minor planets, orbit between Mars and Jupiter in a nearly flat ring called the asteroid belt. The myriad fragments of asteroids and other small pieces of solid matter (smaller than a few tens of metres across) that populate interplanetary space are often termed meteoroids to distinguish them from the larger asteroidal bodies.

The solar system’s several billion comets are found mainly in two distinct reservoirs. The more-distant one, called the Oort cloud, is a spherical shell surrounding the solar system at a distance of approximately 50,000 astronomical units (AU)—more than 1,000 times the distance of Pluto’s orbit. The other reservoir, the Kuiper belt, is a thick disk-shaped zone whose main concentration extends 30–50 AU from the Sun, beyond the orbit of Neptune but including a portion of the orbit of Pluto. (One astronomical unit is the average distance from Earth to the Sun—about 150 million km [93 million miles].) Just as asteroids can be regarded as rocky debris left over from the formation of the inner planets, Pluto, its moon Charon, Eris, and the myriad other Kuiper belt objects can be seen as surviving representatives of the icy bodies that accreted to form the cores of Neptune and Uranus. As such, Pluto and Charon may also be considered to be very large comet nuclei. The Centaur objects, a population of comet nuclei having diameters as large as 200 km (125 miles), orbit the Sun between Jupiter and Neptune, probably having been gravitationally perturbed inward from the Kuiper belt. The interplanetary medium—an exceedingly tenuous plasma (ionized gas) laced with concentrations of dust particles—extends outward from the Sun to about 123 AU.

The solar system even contains objects from interstellar space that are just passing through. Two such interstellar objects have been observed. ‘Oumuamua had an unusual cigarlike or pancakelike shape and was possibly composed of nitrogen ice. Comet Borisov was much like the comets of the solar system but with a much higher abundance of carbon monoxide.

Details

The Solar System is the gravitationally bound system of the Sun and the objects that orbit it. It formed about 4.6 billion years ago when a dense region of a molecular cloud collapsed, forming the Sun and a protoplanetary disc. The Sun is a typical star that maintains a balanced equilibrium by the fusion of hydrogen into helium at its core, releasing this energy from its outer photosphere. Astronomers classify it as a G-type main-sequence star.

The largest objects that orbit the Sun are the eight planets. In order from the Sun, they are four terrestrial planets (Mercury, Venus, Earth and Mars); two gas giants (Jupiter and Saturn); and two ice giants (Uranus and Neptune). All terrestrial planets have solid surfaces. Conversely, the giant planets do not have a definite surface, as they are mainly composed of gases and liquids. Over 99.86% of the Solar System's mass is in the Sun and nearly 90% of the remaining mass is in Jupiter and Saturn.

There is a strong consensus among astronomers that the Solar System has at least nine dwarf planets: Ceres, Orcus, Pluto, Haumea, Quaoar, Makemake, Gonggong, Eris, and Sedna. There are a vast number of small Solar System bodies, such as asteroids, comets, centaurs, meteoroids, and interplanetary dust clouds. Some of these bodies are in the asteroid belt (between Mars's and Jupiter's orbit) and the Kuiper belt (just outside Neptune's orbit). Six planets, seven dwarf planets, and other bodies have orbiting natural satellites, which are commonly called 'moons'.

The Solar System is constantly flooded by the Sun's charged particles, the solar wind, forming the heliosphere. Around 75–90 astronomical units from the Sun, the solar wind is halted, resulting in the heliopause. This is the boundary of the Solar System to interstellar space. The outermost region of the Solar System is the theorized Oort cloud, the source for long-period comets, extending to a radius of 2,000–200,000 AU. The closest star to the Solar System, Proxima Centauri, is 4.25 light-years (269,000 AU) away. Both stars belong to the Milky Way galaxy.
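As a quick consistency check of the distances just quoted, one can convert astronomical units to light-years. The conversion constants below (1 AU ≈ 149,597,871 km and 1 light-year ≈ 9.4607 × 10^12 km) are standard reference values I am assuming; they are not given in the text.

AU_KM = 149_597_871        # kilometres per astronomical unit
LY_KM = 9.4607e12          # kilometres per light-year

def au_to_ly(au):
    return au * AU_KM / LY_KM

print(au_to_ly(269_000))   # Proxima Centauri: ~4.25 light-years, as stated
print(au_to_ly(2_000))     # inner edge of the Oort cloud: ~0.03 light-years
print(au_to_ly(200_000))   # outer edge of the Oort cloud: ~3.2 light-years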

Formation and evolution:

Past

The Solar System formed at least 4.568 billion years ago from the gravitational collapse of a region within a large molecular cloud. This initial cloud was likely several light-years across and probably birthed several stars. As is typical of molecular clouds, this one consisted mostly of hydrogen, with some helium, and small amounts of heavier elements fused by previous generations of stars.

As the pre-solar nebula collapsed, conservation of angular momentum caused it to rotate faster. The center, where most of the mass collected, became increasingly hotter than the surroundings. As the contracting nebula spun faster, it began to flatten into a protoplanetary disc with a diameter of roughly 200 AU and a hot, dense protostar at the center. The planets formed by accretion from this disc, in which dust and gas gravitationally attracted each other, coalescing to form ever larger bodies. Hundreds of protoplanets may have existed in the early Solar System, but they either merged or were destroyed or ejected, leaving the planets, dwarf planets, and leftover minor bodies.

Due to their higher boiling points, only metals and silicates could exist in solid form in the warm inner Solar System close to the Sun (within the frost line). They would eventually form the rocky planets of Mercury, Venus, Earth, and Mars. Because these refractory materials only comprised a small fraction of the solar nebula, the terrestrial planets could not grow very large.

The giant planets (Jupiter, Saturn, Uranus, and Neptune) formed further out, beyond the frost line, the point between the orbits of Mars and Jupiter where material is cool enough for volatile icy compounds to remain solid. The ices that formed these planets were more plentiful than the metals and silicates that formed the terrestrial inner planets, allowing them to grow massive enough to capture large atmospheres of hydrogen and helium, the lightest and most abundant elements. Leftover debris that never became planets congregated in regions such as the asteroid belt, Kuiper belt, and Oort cloud.

Within 50 million years, the pressure and density of hydrogen in the center of the protostar became great enough for it to begin thermonuclear fusion. As helium accumulates at its core, the Sun is growing brighter; early in its main-sequence life its brightness was 70% of what it is today. The temperature, reaction rate, pressure, and density increased until hydrostatic equilibrium was achieved: the thermal pressure counterbalancing the force of gravity. At this point, the Sun became a main-sequence star. Solar wind from the Sun created the heliosphere and swept away the remaining gas and dust from the protoplanetary disc into interstellar space.

Following the dissipation of the protoplanetary disk, the Nice model proposes that gravitational encounters between planetesimals and the gas giants caused each to migrate into different orbits. This led to dynamical instability of the entire system, which scattered the planetesimals and ultimately placed the gas giants in their current positions. During this period, the grand tack hypothesis suggests that a final inward migration of Jupiter dispersed much of the asteroid belt, leading to the Late Heavy Bombardment of the inner planets.

Present and future

The Solar System remains in a relatively stable, slowly evolving state by following isolated, gravitationally bound orbits around the Sun. Although the Solar System has been fairly stable for billions of years, it is technically chaotic, and may eventually be disrupted. There is a small chance that another star will pass through the Solar System in the next few billion years. Although this could destabilize the system and eventually lead millions of years later to expulsion of planets, collisions of planets, or planets hitting the Sun, it would most likely leave the Solar System much as it is today.

The Sun's main-sequence phase, from beginning to end, will last about 10 billion years, compared to around two billion years for all of the Sun's subsequent pre-remnant phases combined. The Solar System will remain roughly as it is known today until the hydrogen in the core of the Sun has been entirely converted to helium, which will occur roughly 5 billion years from now. This will mark the end of the Sun's main-sequence life. At that time, the core of the Sun will contract with hydrogen fusion occurring along a shell surrounding the inert helium, and the energy output will be greater than at present. The outer layers of the Sun will expand to roughly 260 times its current diameter, and the Sun will become a red giant. Because of its increased surface area, the surface of the Sun will be cooler (2,600 K (4,220 °F) at its coolest) than it is on the main sequence.
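Although the red-giant Sun will have a cooler surface, its far larger radius makes it much more luminous overall. A minimal sketch of that trade-off uses the Stefan-Boltzmann scaling (luminosity proportional to radius squared times temperature to the fourth power); the present-day effective temperature of about 5772 K is an assumed reference value, not given above.

T_NOW = 5772.0     # K, present-day solar effective temperature (assumed)
T_GIANT = 2600.0   # K, "at its coolest" as a red giant, from the text above
R_RATIO = 260.0    # the Sun expands to roughly 260 times its current size

# Stefan-Boltzmann scaling: L proportional to R**2 * T**4
luminosity_ratio = R_RATIO**2 * (T_GIANT / T_NOW)**4
print(luminosity_ratio)   # ~2,800 times the present luminosity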

The expanding Sun is expected to vaporize Mercury as well as Venus, and render Earth and Mars uninhabitable (possibly destroying Earth as well). Eventually, the core will be hot enough for helium fusion; the Sun will burn helium for a fraction of the time it burned hydrogen in the core. The Sun is not massive enough to commence the fusion of heavier elements, and nuclear reactions in the core will dwindle. Its outer layers will be ejected into space, leaving behind a dense white dwarf, half the original mass of the Sun but only the size of Earth. The ejected outer layers may form a planetary nebula, returning some of the material that formed the Sun—but now enriched with heavier elements like carbon—to the interstellar medium.

[Image: the solar system]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2333 2024-10-05 00:02:14

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,283

Re: Miscellany

2333) Solar Eclipse

Gist

A solar eclipse occurs when the moon “eclipses” the sun. This means that the moon, as it orbits the Earth, comes in between the sun and the Earth, thereby blocking the sun and preventing any sunlight from reaching us.

There are four types of solar eclipses

* Partial solar eclipse: The moon blocks the sun, but only partially. As a result, some part of the sun is visible, whereas the blocked part appears dark. A partial solar eclipse is the most common type of solar eclipse.

* Annular solar eclipse: The moon blocks out the sun in such a way that the periphery of the sun remains visible. The unobscured and glowing ring, or “annulus,” around the sun is also popularly known as the “ring of fire.” This is the second most common type of eclipse.

* Total solar eclipse: As the word "total" suggests, the moon totally blocks out the sun for a few minutes, leading to a period of darkness, and the resulting eclipse is called a total solar eclipse. During this period of darkness, one can witness the solar corona, which is usually too dim to notice when the sun is at its full glory. Also noticeable is the diamond ring effect, or "Baily's beads," which occurs when some of the sunlight is able to reach us because the moon's surface is not perfectly round. These imperfections (in the form of craters and valleys) can allow sunlight to pass through, and this appears just like a bright, shining diamond.

* Hybrid solar eclipse: The rarest of all eclipses is a hybrid eclipse, which shifts between a total and annular eclipse. During a hybrid eclipse, some locations on Earth will witness the moon completely blocking the sun (a total eclipse), whereas other regions will observe an annular eclipse.

Summary

A solar eclipse occurs when the moon is positioned between Earth and the sun and casts a shadow over Earth.

Solar eclipses only occur during a new moon phase, usually about twice a year, when the moon aligns itself in such a way that it eclipses the sun, according to NASA.

A solar eclipse is caused by the moon passing between the sun and Earth, casting a shadow over Earth.

The points where the Moon's orbit crosses the ecliptic — Earth's orbital plane — are known as lunar nodes. How close the new moon comes to a node determines the type of solar eclipse. The type of solar eclipse is also affected by the moon's distance from Earth and the distance between Earth and the sun.

A total solar eclipse occurs when the moon passes between the sun and Earth, completely obscuring the face of the sun. These solar eclipses are possible because the diameter of the sun is about 400 times that of the moon, but the sun is also approximately 400 times farther away, according to the Natural History Museum.

An annular solar eclipse occurs when the moon passes between the sun and Earth when it is near its farthest point from Earth. At this distance, the moon appears smaller than the sun and doesn't cover the entire face of the sun. Instead, a ring of light is created around the moon.

A partial solar eclipse occurs when the moon passes between the sun and Earth when the trio is not perfectly aligned. As a result, only the penumbra (the partial shadow) passes over you, and the sun will be partially obscured.

A rare hybrid solar eclipse occurs when the moon is near the limiting distance at which its inner shadow — the umbra — can just reach Earth's curved surface. Hybrid solar eclipses are also called annular-total (A-T) eclipses. In most cases, a hybrid eclipse starts as an annular eclipse because the tip of the umbra falls just short of making contact with Earth; it then becomes total near the middle of the path, where the roundness of the planet reaches up and intercepts the shadow's tip, and finally returns to annular toward the end of the path.

Approximately twice a year we experience an eclipse season. This is when the new moon aligns itself in such a way that it eclipses the sun. Solar eclipses do not occur every time there is a new moon phase because the moon's orbit is tilted about 5 degrees relative to Earth's orbit around the sun. For this reason, the moon's shadow usually passes either above or below Earth.

The type of solar eclipse will affect what happens and what observers will be able to see. According to the educational website SpaceEdge Academy, 28% of solar eclipses are total, 35% are partial, 32% are annular and only 5% are hybrid.

During a total solar eclipse the sky will darken and observers, with the correct safety equipment, may be able to see the sun's outer atmosphere, known as the corona. This makes for an exciting skywatching target for solar observers as the corona is usually obscured by the bright face of the sun.

During an annular solar eclipse, the moon doesn't fully obscure the face of the sun, as is the case in a total eclipse. Instead, it dramatically appears as a dark disk obscuring a larger bright disk, giving the appearance of a ring of light around the moon. These eclipses are aptly known as "ring of fire" eclipses.

Partial solar eclipses appear as if the moon is taking a "bite" out of the sun. As the trio of the sun, Earth and moon is not perfectly lined up, only part of the sun will appear to be obscured by the moon. When a total or annular solar eclipse occurs, observers outside the area covered by the moon's umbra (the inner shadow) will see a partial eclipse instead.

During a hybrid solar eclipse observers will be able to see either an annular or total solar eclipse depending on where they are located.

Details

A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby obscuring the view of the Sun from a small part of Earth, totally or partially. Such an alignment occurs approximately every six months, during the eclipse season in its new moon phase, when the Moon's orbital plane is closest to the plane of Earth's orbit. In a total eclipse, the disk of the Sun is fully obscured by the Moon. In partial and annular eclipses, only part of the Sun is obscured. Unlike a lunar eclipse, which may be viewed from anywhere on the night side of Earth, a solar eclipse can only be viewed from a relatively small area of the world. As such, although total solar eclipses occur somewhere on Earth every 18 months on average, they recur at any given place only once every 360 to 410 years.

If the Moon were in a perfectly circular orbit and in the same orbital plane as Earth, there would be total solar eclipses once a month, at every new moon. Instead, because the Moon's orbit is tilted at about 5 degrees to Earth's orbit, its shadow usually misses Earth. Solar (and lunar) eclipses therefore happen only during eclipse seasons, resulting in at least two, and up to five, solar eclipses each year, no more than two of which can be total. Total eclipses are rarer because they require a more precise alignment between the centers of the Sun and Moon, and because the Moon's apparent size in the sky is sometimes too small to fully cover the Sun.
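The 5-degree tilt explains the "usually misses" geometrically: at the Moon's distance, even a few degrees off the ecliptic displaces the new moon, and hence its shadow, by several Earth radii. A rough sketch follows; the mean Earth-Moon distance and Earth's radius are assumed reference values, not given in the text.

import math

MOON_DIST_KM = 384_400    # mean Earth-Moon distance (assumed)
EARTH_RADIUS_KM = 6_371   # mean Earth radius (assumed)
TILT_DEG = 5.0            # inclination of the Moon's orbit, from the text

# maximum displacement of the new moon above or below the ecliptic plane
offset_km = MOON_DIST_KM * math.sin(math.radians(TILT_DEG))
print(offset_km)                     # ~33,500 km
print(offset_km / EARTH_RADIUS_KM)   # ~5 Earth radii, so the shadow usually misses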

An eclipse is a natural phenomenon. In some ancient and modern cultures, solar eclipses were attributed to supernatural causes or regarded as bad omens. Astronomers' predictions of eclipses began in China as early as the 4th century BC; eclipses hundreds of years into the future may now be predicted with high accuracy.

Looking directly at the Sun can lead to permanent eye damage, so special eye protection or indirect viewing techniques are used when viewing a solar eclipse. Only the total phase of a total solar eclipse is safe to view without protection. Enthusiasts known as eclipse chasers or umbraphiles travel to remote locations to see solar eclipses.

Types

The Sun's distance from Earth is about 400 times the Moon's distance, and the Sun's diameter is about 400 times the Moon's diameter. Because these ratios are approximately the same, the Sun and the Moon as seen from Earth appear to be approximately the same size: about 0.5 degree of arc in angular measure.

The Moon's orbit around Earth is slightly elliptical, as is Earth's orbit around the Sun. The apparent sizes of the Sun and Moon therefore vary. The magnitude of an eclipse is the ratio of the apparent size of the Moon to the apparent size of the Sun during an eclipse. An eclipse that occurs when the Moon is near its closest distance to Earth (i.e., near its perigee) can be a total eclipse because the Moon will appear to be large enough to completely cover the Sun's bright disk or photosphere; a total eclipse has a magnitude greater than or equal to 1.000. Conversely, an eclipse that occurs when the Moon is near its farthest distance from Earth (i.e., near its apogee) can be only an annular eclipse because the Moon will appear to be slightly smaller than the Sun; the magnitude of an annular eclipse is less than 1.

Because Earth's orbit around the Sun is also elliptical, Earth's distance from the Sun similarly varies throughout the year. This affects the apparent size of the Sun in the same way, but not as much as does the Moon's varying distance from Earth. When Earth approaches its farthest distance from the Sun in early July, a total eclipse is somewhat more likely, whereas conditions favour an annular eclipse when Earth approaches its closest distance to the Sun in early January.
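The apparent-size and magnitude arguments above reduce to simple trigonometry. The sketch below uses rounded reference values for the diameters and orbital distances (my assumptions; the text itself gives only the roughly 400:1 ratios and the 0.5-degree figure).

import math

SUN_DIAM_KM = 1_392_000     # assumed reference values, rounded
MOON_DIAM_KM = 3_474
SUN_DIST_KM = 149_600_000   # mean Earth-Sun distance
MOON_PERIGEE_KM = 363_300
MOON_APOGEE_KM = 405_500

def apparent_size_deg(diameter_km, distance_km):
    # full angular diameter as seen from Earth
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

sun_deg = apparent_size_deg(SUN_DIAM_KM, SUN_DIST_KM)
print(sun_deg)   # ~0.53 degrees: the "about 0.5 degree" quoted above

# eclipse magnitude = apparent Moon size / apparent Sun size
for moon_dist_km in (MOON_PERIGEE_KM, MOON_APOGEE_KM):
    magnitude = apparent_size_deg(MOON_DIAM_KM, moon_dist_km) / sun_deg
    print(magnitude)   # ~1.03 at perigee (total possible), ~0.92 at apogee (annular)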

There are three main types of solar eclipses:

Total eclipse

A total eclipse occurs on average every 18 months when the dark silhouette of the Moon completely obscures the bright light of the Sun, allowing the much fainter solar corona to be visible. During an eclipse, totality occurs only along a narrow track on the surface of Earth. This narrow track is called the path of totality.

Annular eclipse

An annular eclipse, like a total eclipse, occurs when the Sun and Moon are exactly in line with Earth. During an annular eclipse, however, the apparent size of the Moon is not large enough to completely block out the Sun. Totality thus does not occur; the Sun instead appears as a very bright ring, or annulus, surrounding the dark disk of the Moon. Annular eclipses occur once every one or two years, not annually. The term derives from the Latin root word anulus, meaning "ring", rather than annus, for "year".

Partial eclipse

A partial eclipse occurs about twice a year, when the Sun and Moon are not exactly in line with Earth and the Moon only partially obscures the Sun. This phenomenon can usually be seen from a large part of Earth outside of the track of an annular or total eclipse. However, some eclipses can be seen only as a partial eclipse, because the umbra passes above Earth's polar regions and never intersects Earth's surface. Partial eclipses are virtually unnoticeable in terms of the Sun's brightness, as it takes well over 90% coverage to notice any darkening at all. Even at 99%, it would be no darker than civil twilight.

Terminology:

Hybrid eclipse

A hybrid eclipse (also called annular/total eclipse) shifts between a total and annular eclipse. At certain points on the surface of Earth, it appears as a total eclipse, whereas at other points it appears as annular. Hybrid eclipses are comparatively rare.

A hybrid eclipse occurs when the magnitude of an eclipse changes during the event from less to greater than one, so the eclipse appears to be total at locations nearer the midpoint, and annular at other locations nearer the beginning and end, since the sides of Earth are slightly further away from the Moon. These eclipses are extremely narrow in their path width and relatively short in their duration at any point compared with fully total eclipses; the 2023 April 20 hybrid eclipse's totality is over a minute in duration at various points along the path of totality. Like a focal point, the width and duration of totality and annularity are near zero at the points where the changes between the two occur.

Central eclipse

Central eclipse is often used as a generic term for a total, annular, or hybrid eclipse. This is, however, not completely correct: the definition of a central eclipse is an eclipse during which the central line of the umbra touches Earth's surface. It is possible, though extremely rare, that part of the umbra intersects with Earth (thus creating an annular or total eclipse) but not its central line. This is then called a non-central total or annular eclipse. Gamma is a measure of how centrally the shadow strikes. The last umbral yet non-central solar eclipse was on April 29, 2014; this was an annular eclipse. The next non-central total solar eclipse will be on April 9, 2043.

Eclipse phases

The visual phases observed during a total eclipse are called:

* First contact—when the Moon's limb (edge) is exactly tangential to the Sun's limb.
* Second contact—starting with Baily's Beads (caused by light shining through valleys on the Moon's surface) and the diamond ring effect. Almost the entire disk is covered.
* Totality—the Moon obscures the entire disk of the Sun and only the solar corona is visible.
* Third contact—when the first bright light becomes visible and the Moon's shadow is moving away from the observer. Again a diamond ring may be observed.
* Fourth contact—when the trailing edge of the Moon ceases to overlap with the solar disk and the eclipse ends.

[Image: solar eclipse]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2334 2024-10-06 00:03:32

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,283

Re: Miscellany

2334) Lunar Eclipse

Gist

Lunar eclipses occur at the full moon phase. When Earth is positioned precisely between the Moon and Sun, Earth’s shadow falls upon the surface of the Moon, dimming it and sometimes turning the lunar surface a striking red over the course of a few hours. Each lunar eclipse is visible from half of Earth.

A lunar eclipse occurs when the Sun, the Earth and the Moon all fall in a straight line. The Earth comes in between the Sun and the Moon and blocks the sunlight from reaching the Moon.

Summary

Lunar eclipses occur when Earth moves between the sun and the moon, casting a shadow across the lunar surface.

Lunar eclipses can only take place during a full moon and are a popular event for skywatchers around the world, as they can be enjoyed without any special equipment, unlike solar eclipses.

The next lunar eclipse will be a total lunar eclipse on March 13-14, 2025.

A lunar eclipse is caused by Earth blocking sunlight from reaching the moon and creating a shadow across the lunar surface.

The sun-blocking Earth casts two shadows that fall on the moon during a lunar eclipse: The umbra is a full, dark shadow, and the penumbra is a partial outer shadow.

There are three types of lunar eclipses depending on how the sun, Earth and moon are aligned at the time of the event.   

* Total lunar eclipse: Earth's shadow is cast across the entire lunar surface.
* Partial lunar eclipse: During a partial lunar eclipse, only part of the moon enters Earth's shadow, which may look like it is taking a "bite" out of the lunar surface. Earth's shadow will appear dark on the side of the moon facing Earth. How much of a "bite" we see depends on how the sun, Earth and moon align, according to NASA.
* Penumbral lunar eclipse: The faint outer part of Earth's shadow is cast across the lunar surface. This type of eclipse is not as dramatic as the other two and can be difficult to see. 

During a total lunar eclipse, the lunar surface turns a rusty red color, earning the nickname "blood moon". The eerie red appearance is caused by sunlight interacting with Earth's atmosphere.

When sunlight reaches Earth, our atmosphere scatters and filters different wavelengths. Shorter wavelengths such as blue light are scattered outward, while longer wavelengths like red are bent — or refracted — into Earth's umbra, according to the Natural History Museum. When the moon passes through Earth's umbra during a total lunar eclipse, the red light reflects off the lunar surface, giving the moon its blood-red appearance.

"How gold, orange, or red the moon appears during a total lunar eclipse depends on how much dust, water, and other particles are in Earth's atmosphere" according to NASA scientists. Other atmospheric factors such as temperature and humidity also affect the moon's appearance during a lunar eclipse.

Details

A lunar eclipse is an astronomical event that occurs when the Moon moves into the Earth's shadow, causing the Moon to be darkened. Such an alignment occurs during an eclipse season, approximately every six months, during the full moon phase, when the Moon's orbital plane is closest to the plane of the Earth's orbit.

This can occur only when the Sun, Earth, and Moon are exactly or very closely aligned (in syzygy) with Earth between the other two, which can happen only on the night of a full moon when the Moon is near either lunar node. The type and length of a lunar eclipse depend on the Moon's proximity to the lunar node.

When the Moon is totally eclipsed by the Earth (a "deep eclipse"), it takes on a reddish color because the Earth completely blocks direct sunlight from reaching the Moon's surface; the only light reflected from the lunar surface is what has been refracted by the Earth's atmosphere. This light appears reddish due to the Rayleigh scattering of blue light, the same reason sunrises and sunsets are more orange than the midday sky.
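The Rayleigh mechanism can be quantified: scattering strength scales as the inverse fourth power of wavelength, so blue light is stripped out of the refracted beam much more strongly than red. A minimal sketch follows; the representative wavelengths for blue and red light are my assumptions, since the text gives no numbers.

BLUE_NM = 450.0   # representative blue wavelength (assumed)
RED_NM = 650.0    # representative red wavelength (assumed)

# Rayleigh scattering strength scales as 1 / wavelength**4,
# so the blue-to-red scattering ratio is (red / blue)**4.
ratio = (RED_NM / BLUE_NM) ** 4
print(ratio)   # ~4.4: blue light is scattered several times more strongly,
               # leaving the light that reaches the eclipsed Moon reddened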

Unlike a solar eclipse, which can only be viewed from a relatively small area of the world, a lunar eclipse may be viewed from anywhere on the night side of Earth. A total lunar eclipse can last up to nearly two hours, while a total solar eclipse lasts only a few minutes at any given place, because the Moon's shadow is smaller. Also, unlike solar eclipses, lunar eclipses are safe to view without any eye protection or special precautions.

Types of lunar eclipse

Earth's shadow can be divided into two distinctive parts: the umbra and penumbra. Earth totally occludes direct solar radiation within the umbra, the central region of the shadow. However, since the Sun's diameter appears to be about one-quarter of Earth's in the lunar sky, the planet only partially blocks direct sunlight within the penumbra, the outer portion of the shadow.

Penumbral lunar eclipse

A penumbral lunar eclipse occurs when part or all of the Moon's near side passes into the Earth's penumbra. No part of the moon is in the Earth's umbra during this event, meaning that on all or a part of the Moon's surface facing Earth, the sun is partially blocked. The penumbra causes a subtle dimming of the lunar surface, which is only visible to the naked eye when the majority of the Moon's diameter has immersed into Earth's penumbra. A special type of penumbral eclipse is a total penumbral lunar eclipse, during which the entire Moon lies exclusively within Earth's penumbra. Total penumbral eclipses are rare, and when these occur, the portion of the Moon closest to the umbra may appear slightly darker than the rest of the lunar disk.

Partial lunar eclipse

When the Moon's near side penetrates partially into the Earth's umbra, it is known as a partial lunar eclipse, while a total lunar eclipse occurs when the entire Moon enters the Earth's umbra. During this event, one part of the Moon is in the Earth's umbra, while the other part is in the Earth's penumbra. The Moon's average orbital speed is about 1.03 km/s (2,300 mph), or a little more than its diameter per hour, so totality may last up to nearly 107 minutes. Nevertheless, the total time between the first and last contacts of the Moon's limb with Earth's shadow is much longer and could last up to 236 minutes.
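The timing figures above can be sanity-checked with back-of-the-envelope arithmetic. The Moon's diameter and the width of Earth's umbra at lunar distance (~9,000 km) are assumed round values, so this only reproduces the right order of magnitude, not the exact 107-minute maximum (which also depends on the Moon moving more slowly near apogee).

MOON_SPEED_KM_S = 1.03   # average orbital speed, from the text above
MOON_DIAM_KM = 3_474     # assumed reference value
UMBRA_DIAM_KM = 9_000    # rough width of Earth's umbra at the Moon (assumed)

km_per_hour = MOON_SPEED_KM_S * 3600
print(km_per_hour)       # ~3,700 km/h: "a little more than its diameter per hour"

# For totality the Moon must stay entirely inside the umbra, i.e. travel
# the umbra's width minus its own diameter (central path, mean speed).
totality_min = (UMBRA_DIAM_KM - MOON_DIAM_KM) / km_per_hour * 60
print(totality_min)      # ~90 minutes: the right order for "up to nearly 107"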

Total lunar eclipse

When the Moon's near side passes entirely into the Earth's umbral shadow, a total lunar eclipse occurs. Just prior to complete entry, the brightness of the lunar limb—the curved edge of the Moon still being hit by direct sunlight—will cause the rest of the Moon to appear comparatively dim. The moment the Moon enters a complete eclipse, the entire surface will become more or less uniformly bright, allowing the stars surrounding it to be seen. Later, as the Moon's opposite limb is struck by sunlight, the overall disk will again become obscured. This is because, as viewed from the Earth, the brightness of a lunar limb is generally greater than that of the rest of the surface, due to reflections from the many surface irregularities within the limb: sunlight striking these irregularities is always reflected back in greater quantities than that striking more central parts, which is why the edges of full moons generally appear brighter than the rest of the lunar surface. This is similar to the effect of velvet fabric over a convex curved surface, which, to an observer, will appear darkest at the center of the curve. The same is true of any planetary body with little or no atmosphere and an irregular cratered surface (e.g., Mercury) when viewed opposite the Sun.

Central lunar eclipse

Central lunar eclipse is a total lunar eclipse during which the Moon passes near and through the centre of Earth's shadow, contacting the antisolar point. This type of lunar eclipse is relatively rare.

The relative distance of the Moon from Earth at the time of an eclipse can affect the eclipse's duration. In particular, when the Moon is near apogee, the farthest point from Earth in its orbit, its orbital speed is the slowest. The diameter of Earth's umbra does not decrease appreciably over the range of the Moon's orbital distances. Thus, a total eclipse occurring with the Moon near apogee will have a lengthened duration of totality.

Selenelion

A selenelion or selenehelion, also called a horizontal eclipse, occurs where and when both the Sun and an eclipsed Moon can be observed at the same time. The event can only be observed just before sunset or just after sunrise, when both bodies will appear just above opposite horizons at nearly opposite points in the sky. A selenelion occurs during every total lunar eclipse—it is an experience of the observer, not a planetary event separate from the lunar eclipse itself. Typically, observers on Earth located on high mountain ridges undergoing false sunrise or false sunset at the same moment of a total lunar eclipse will be able to experience it. Although during selenelion the Moon is completely within the Earth's umbra, both it and the Sun can be observed in the sky because atmospheric refraction causes each body to appear higher (i.e., more central) in the sky than its true geometric planetary positions.

Timing

The timing of total lunar eclipses is determined by what are known as its "contacts" (moments of contact with Earth's shadow):

* P1 (First contact): Beginning of the penumbral eclipse. Earth's penumbra touches the Moon's outer limb.
* U1 (Second contact): Beginning of the partial eclipse. Earth's umbra touches the Moon's outer limb.
* U2 (Third contact): Beginning of the total eclipse. The Moon's surface is entirely within Earth's umbra.
* Greatest eclipse: The peak stage of the total eclipse. The Moon is at its closest to the center of Earth's umbra.
* U3 (Fourth contact): End of the total eclipse. The Moon's outer limb exits Earth's umbra.
* U4 (Fifth contact): End of the partial eclipse. Earth's umbra leaves the Moon's surface.
* P4 (Sixth contact): End of the penumbral eclipse. Earth's penumbra no longer makes contact with the Moon.

[Image: lunar eclipse]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2335 2024-10-07 00:06:38

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,283

Re: Miscellany

2335) Flower

Gist

A flower, sometimes known as a bloom or blossom, is the reproductive structure found in flowering plants.

A flower, also known as a bloom or blossom, is the reproductive structure found in flowering plants (plants of the division Angiospermae). Flowers consist of a combination of vegetative organs, such as the sepals that enclose and protect the developing flower, and reproductive organs.

Flowers are composed of many distinct components: sepals, petals, stamens, and carpels. These components are arranged in whorls and attach to an area called the receptacle, which is at the end of the stem that leads to the flower. This stem is called the peduncle.

Flowers consist of a combination of vegetative organs (the sepals, which enclose and protect the developing flower, and the petals, which attract pollinators) and reproductive organs that produce gametophytes, which in flowering plants produce gametes.

Summary

A flower, also known as a bloom or blossom, is the reproductive structure found in flowering plants (plants of the division Angiospermae). Flowers consist of a combination of vegetative organs (sepals that enclose and protect the developing flower, and petals that attract pollinators) and reproductive organs that produce gametophytes, which in flowering plants produce gametes. The male gametophytes, which produce sperm, are enclosed within pollen grains produced in the anthers. The female gametophytes are contained within the ovules produced in the ovary.

Most flowering plants depend on animals, such as bees, moths, and butterflies, to transfer their pollen between different flowers, and have evolved to attract these pollinators by various strategies, including brightly colored, conspicuous petals, attractive scents, and the production of nectar, a food source for pollinators. In this way, many flowering plants have co-evolved with pollinators to be mutually dependent on services they provide to one another—in the plant's case, a means of reproduction; in the pollinator's case, a source of food.

When pollen from the anther of a flower is deposited on a stigma, this is called pollination. Some flowers may self-pollinate, producing seed using pollen from the same flower or a different flower of the same plant, but others have mechanisms to prevent self-pollination and rely on cross-pollination, in which pollen is transferred from the anther of one flower to the stigma of another flower on a different individual of the same species. Self-pollination happens in flowers where the stamen and carpel mature at the same time and are positioned so that the pollen can land on the flower's stigma. Such pollination does not require the plant to invest in nectar and pollen as food for pollinators. Some flowers produce diaspores without fertilization (parthenocarpy). After fertilization, the ovary of the flower develops into fruit containing seeds.

Flowers have long been appreciated for their beauty and pleasant scents, and also hold cultural significance as religious, ritual, or symbolic objects, or sources of medicine and food.

Details

A flower is the characteristic reproductive structure of angiosperms. As popularly used, the term “flower” especially applies when part or all of the reproductive structure is distinctive in colour and form.

In their range of colour, size, form, and anatomical arrangement, flowers present a seemingly endless variety of combinations. They range in size from minute blossoms to giant blooms. In some plants, such as poppy, magnolia, tulip, and petunia, each flower is relatively large and showy and is produced singly, while in other plants, such as aster, snapdragon, and lilac, the individual flowers may be very small and are borne in a distinctive cluster known as an inflorescence. Regardless of their variety, all flowers have a uniform function, the reproduction of the species through the production of seed.

Form and types

Basically, each flower consists of a floral axis upon which are borne the essential organs of reproduction (stamens and pistils) and usually accessory organs (sepals and petals); the latter may serve to both attract pollinating insects and protect the essential organs. The floral axis is a greatly modified stem; unlike vegetative stems, which bear leaves, it is usually contracted, so that the parts of the flower are crowded together on the stem tip, the receptacle. The flower parts are usually arrayed in whorls (or cycles) but may also be disposed spirally, especially if the axis is elongate. There are commonly four distinct whorls of flower parts: (1) an outer calyx consisting of sepals; within it lies (2) the corolla, consisting of petals; (3) the androecium, or group of stamens; and in the centre is (4) the gynoecium, consisting of the pistils.

The sepals and petals together make up the perianth, or floral envelope. The sepals are usually greenish and often resemble reduced leaves, while the petals are usually colourful and showy. Sepals and petals that are indistinguishable, as in lilies and tulips, are sometimes referred to as tepals. The androecium, or male parts of the flower, comprise the stamens, each of which consists of a supporting filament and an anther, in which pollen is produced. The gynoecium, or female parts of the flower, comprises one or more pistils, each of which consists of an ovary, with an upright extension, the style, on the top of which rests the stigma, the pollen-receptive surface. The ovary encloses the ovules, or potential seeds. A pistil may be simple, made up of a single carpel, or ovule-bearing modified leaf; or compound, formed from several carpels joined together.

A flower having sepals, petals, stamens, and pistils is complete; lacking one or more of such structures, it is said to be incomplete. Stamens and pistils are not present together in all flowers. When both are present the flower is said to be perfect, or bisexual, regardless of a lack of any other part that renders it incomplete (see photograph). A flower that lacks stamens is pistillate, or female, while one that lacks pistils is said to be staminate, or male. When the same plant bears unisexual flowers of both sexes, it is said to be monoecious (e.g., tuberous begonia, hazel, oak, corn); when the male and female flowers are on different plants, the plant is dioecious (e.g., date, holly, cottonwood, willow); when there are male, female, and bisexual flowers on the same plant, the plant is termed polygamous.

A flower may be radially symmetrical, as in roses and petunias, in which case it is termed regular or actinomorphic. A bilaterally symmetrical flower, as in orchids and snapdragons, is irregular or zygomorphic.

Pollination

The stamens and pistils are directly involved with the production of seed. The stamen bears microsporangia (spore cases) in which are developed numerous microspores (potential pollen grains); the pistil bears ovules, each enclosing an egg cell. When a microspore germinates, it is known as a pollen grain. When the pollen sacs in a stamen’s anther are ripe, the anther releases them and the pollen is shed. Fertilization can occur only if the pollen grains are transferred from the anther to the stigma of a pistil, a process known as pollination.

Self-pollination

There are two chief kinds of pollination: (1) self-pollination, the pollination of a stigma by pollen from the same flower or another flower on the same plant; and (2) cross-pollination, the transfer of pollen from the anther of a flower of one plant to the stigma of the flower of another plant of the same species. Self-pollination occurs in many species, but in the others, perhaps the majority, it is prevented by such adaptations as the structure of the flower, self-incompatibility, and the maturation of stamens and pistils of the same flower or plant at different times. Cross-pollination may be brought about by a number of agents, chiefly insects and wind. Wind-pollinated flowers generally can be recognized by their lack of colour, odour, or nectar, while animal-pollinated flowers (see photograph) are conspicuous by virtue of their structure, colour, or the production of scent or nectar.

After a pollen grain has reached the stigma, it germinates, and a pollen tube protrudes from it. This tube, containing two male gametes (sperms), extends into the ovary and reaches the ovule, discharging its gametes so that one fertilizes the egg cell, which becomes an embryo, and the other joins with two polar nuclei to form the endosperm. (Normally many pollen grains fall on a stigma; they all may germinate, but only one pollen tube enters any one ovule.) Following fertilization, the embryo is on its way to becoming a seed, and at this time the ovary itself enlarges to form the fruit.

Cultural significance

Flowers have been symbols of beauty in most civilizations of the world, and flower giving is still among the most popular of social amenities. As gifts, flowers serve as expressions of affection for spouses, other family members, and friends; as decorations at weddings and other ceremonies; as tokens of respect for the deceased; as cheering gifts to the bedridden; and as expressions of thanks or appreciation. Most flowers bought by the public are grown in commercial greenhouses or horticultural fields and then sold through wholesalers to retail florists.

Additional Information

Sexual reproductive structure of plants, especially of angiosperms (flowering plants).

Supplement

Flowers are plant structures involved in sexual reproduction. They are typically composed of sexual reproductive structures (i.e. androecium and gynoecium) in addition to nonessential parts such as sepals and petals, and the presence or absence of these structures may be used to describe flowers and flowering plants (angiosperms).

Complete and incomplete flowers:

Flowers that have all four whorls (sepals, petals, stamens, and carpels) are called complete; those lacking one or more of these structures are called incomplete. Many flowering plants produce conspicuous, colourful, scented petals in order to attract insect pollinators. Other plants, like grasses, produce flowers that are less conspicuous and lack petals. These plants do not require insects but rely on other agents of pollination, such as wind.

Perfect (bisexual) and imperfect (unisexual) flowers:

Flowers that have both male and female reproductive structures are called bisexual or perfect. Flowers that bear either male (androecium) or female reproductive structures (gynoecium) are referred to as unisexual or imperfect. With only one reproductive organ present, imperfect flowers are also described as incomplete flowers.

Regular and irregular flowers:

Flowers that are radially symmetrical are described as regular flowers, in contrast to irregular flowers, which are not (for example, the bilaterally symmetrical flowers of orchids and snapdragons).

Monoecious and dioecious plants:

Plants may be described as monoecious or dioecious. A monoecious plant bears both male and female imperfect flowers. A dioecious plant produces only one type of imperfect flower, either male or female; a dioecious plant may therefore be either a male or a female plant, depending on the flowers it produces.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2336 2024-10-08 00:03:05

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,283

Re: Miscellany

2336) Porter (Carrier)

Gist

Porters carry luggage for guests. Porters can be found in hotels and motels, on ships and in transport terminals such as wharves, airports and train stations. They not only look after luggage, but direct guests to their rooms, cabins or berths and provide concierge services when necessary.

Summary

According to the Britannica Dictionary, a porter is a person who carries luggage or bags, or performs other duties, at a hotel, airport, or train station:

* Hotel, airport, or train station: A porter's job is to carry luggage or bags for customers.
* Train: In the US, a porter's job is to assist passengers on a train.
* Hospital: In Britain, a porter's job is to move patients, equipment, and medical supplies around the hospital.
* College or hospital: In Britain, a porter's job is to let people into the college or hospital.

Details

A porter, also called a bearer, is a person who carries objects or cargo for others. The range of services conducted by porters is extensive, from shuttling luggage aboard a train (a railroad porter) to bearing heavy burdens at altitude in inclement weather on multi-month mountaineering expeditions. Porters may carry items on their backs, as with a backpack, or on their heads. The word "porter" derives from the Latin portare (to carry).

The use of humans to transport cargo dates to the ancient world, prior to domesticating animals and development of the wheel. Historically it remained prevalent in areas where slavery was permitted, and exists today where modern forms of mechanical conveyance are impractical or impossible, such as in mountainous terrain, or thick jungle or forest cover.

Over time, slavery diminished and technology advanced, but the role of porter for specialized transporting services remains strong in the 21st century. Examples include bellhops at hotels, redcaps at railway stations, skycaps at airports, and bearers on adventure trips engaged by foreign travelers.

Expeditions

Porters, frequently called Sherpas in the Himalayas (after the ethnic group most Himalayan porters come from), are also an essential part of mountaineering: they are typically highly skilled professionals who specialize in the logistics of mountain climbing, not merely people paid to carry loads (although carrying is integral to the profession). Frequently, porters work for companies that hire them out to climbing groups, to serve both as porters and as mountain guides; the term "guide" is often used interchangeably with "Sherpa" or "porter", but there are certain differences.

Porters are expected to prepare the route before and/or while the main expedition climbs, going up beforehand with tents, food, water, and equipment (enough for themselves and for the main expedition), which they place in carefully located deposits on the mountain. This preparation can take months of work before the main expedition starts and involves numerous trips up and down the mountain, until the last and smallest supply deposit is planted shortly below the peak. When the route is prepared, either entirely or in stages ahead of the expedition, the main body follows. The last stage is often done without the porters, who remain at the last camp, a quarter of a mile or more below the summit, so that only the main expedition is credited with reaching the summit. In many cases, because the porters go ahead, they are forced to free climb, driving spikes and laying safety lines for the main expedition to use as it follows.

Porters such as Sherpas are frequently drawn from local ethnic groups well adapted to living in the rarefied atmosphere and accustomed to life in the mountains. Although they receive little glory, porters are often considered among the most skilled of mountaineers and are generally treated with respect, since the success of the entire expedition is possible only through their work. They are also often called upon to stage rescue expeditions when part of the party is endangered or there is an injury; when a rescue attempt is successful, several porters are usually called upon to transport the injured climber(s) back down the mountain so the expedition can continue. A well-known incident in which porters attempted to rescue numerous stranded climbers, several dying as a result, is the 2008 K2 disaster. Sixteen Sherpas were killed in the 2014 Mount Everest ice avalanche, prompting the entire Sherpa guide community to refuse to undertake any more ascents for the remainder of the year, making any further expeditions impossible.

History

Human adaptability and flexibility led to the early use of humans for transporting gear. Porters were commonly used as beasts of burden in the ancient world, when labor was generally cheap and slavery widespread. The ancient Sumerians, for example, enslaved women to shift wool and flax.

In the early Americas, where there were few native beasts of burden, all goods were carried by porters, called Tlamemes in the Nahuatl language of Mesoamerica. In colonial times, some areas of the Andes employed porters called silleros to carry persons, particularly Europeans, as well as their luggage, across the difficult mountain passes. Throughout the globe porters have served, and in some areas continue to serve, as litter bearers, particularly in crowded urban areas.

Many great works of engineering were created solely by muscle power in the days before machinery, or even wheelbarrows and wagons; massive workforces of labourers and bearers completed impressive earthworks by manually lugging the earth, stones, or bricks in baskets on their backs.

Porters were very important to the local economies of many large cities in Brazil during the 1800s, where they were known as ganhadores. In 1857, ganhadores in Salvador, Bahia, went on strike in the first general strike in the country's history.

Contribution to mountain climbing expeditions

The contributions of porters often go overlooked. Amir Mehdi was a Pakistani mountaineer and porter known for being part of the teams that managed the first successful ascent of Nanga Parbat in 1953, and of K2 in 1954 with an Italian expedition. He and the Italian mountaineer Walter Bonatti are also known for having survived a night in the highest open bivouac, at 8,100 metres (26,600 ft), on K2 in 1954. Fazal Ali, who was born in the Shimshal Valley in northern Pakistan, is, according to the Guinness Book of World Records, the only man ever to have scaled K2 (8,611 m) three times, in 2014, 2017 and 2018, all without oxygen, but his achievements have gone largely unrecognised.

Today

Porters are still paid to shift burdens in many third-world countries where motorized transport is impractical or unavailable, often alongside pack animals.

The Sherpa people of Nepal are so renowned as mountaineering porters that their ethnonym is synonymous with that profession. Their skill, knowledge of the mountains and local culture, and ability to perform at altitude make them indispensable for the highest Himalayan expeditions.

Porters at Indian railway stations are called coolies, a term for an unskilled Asian labourer derived from the Chinese word for porter.

Mountain porters are also still in use in a handful of more developed countries, including Slovakia and Japan (bokka). These men (and more rarely women) regularly resupply mountain huts and tourist chalets at high-altitude mountain ranges.

In North America

Certain trade-specific terms are used for forms of porters in North America, including bellhop (hotel porter), redcap (railway station porter), and skycap (airport porter).

The practice of railroad station porters wearing red caps to distinguish them from blue-capped train personnel with other duties was begun on Labor Day of 1890 by an African-American porter who wanted to stand out from the crowds at Grand Central Terminal in New York City. The tactic immediately caught on and was over time adopted by other types of porters for their own specialties.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2337 2024-10-09 00:02:10

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,283

Re: Miscellany

2337) Electrum

Gist

Electrum is a naturally occurring alloy of gold and silver, with trace amounts of copper and other metals. Its color ranges from pale to bright yellow, depending on the proportions of gold and silver. It has been produced artificially and is also known as "green gold".

Summary

In ancient Egypt, electrum was called 'djaam'. It was believed to be a natural alloy of gold and silver, and the term is known only from records of that ancient civilisation. It was brought from the mountains of the eastern desert and from Nubia. Electrum is one of the seven metals known since prehistoric times, along with gold, copper, mercury, tin, iron and lead. Aristotle suggested that the six non-noble metals would eventually become gold, attaining perfection by transmutation.

Electrum, from the Greek 'elektron' (a substance that develops static electricity under friction), is a common name given to all intermediate varieties in the isomorphous Au-Ag series. The physico-chemical properties of electrum vary with the silver content. Physically, with increasing silver the colour changes from yellow to near-white, the metal becomes less dense, and the fineness decreases from about 800 to around 550 (parts of gold per thousand). Chemically, with increasing silver the lower-fineness mixture becomes less stable than higher-fineness gold and is thus more prone to alteration by weathering. Electrum is sometimes coated with halogen and sulphide compounds, which yield a thin film of native silver under suitably reducing conditions.

Additional Information

Electrum, natural or artificial alloy of gold with at least 20 percent silver, which was used to make the first known coins in the Western world. Most natural electrum contains copper, iron, palladium, bismuth, and perhaps other metals. The colour varies from white-gold to brassy, depending on the percentages of the major constituents and copper. In the ancient world the chief source was Lydia, in Asia Minor, where the alloy was found in the area of the Pactolus River, a small tributary of the Hermus (modern Gediz Nehri, in Turkey). The first Occidental coinage, possibly begun by King Gyges (7th century bc) of Lydia, consisted of irregular ingots of electrum bearing his stamp as a guarantee of negotiability at a predetermined value.

Details

Electrum is a naturally occurring alloy of gold and silver, with trace amounts of copper and other metals. Its color ranges from pale to bright yellow, depending on the proportions of gold and silver. It has been produced artificially and is also known as "green gold".

Electrum was used as early as the third millennium BC in the Old Kingdom of Egypt, sometimes as an exterior coating to the pyramidions atop ancient Egyptian pyramids and obelisks. It was also used in the making of ancient drinking vessels. The first known metal coins made were of electrum, dating back to the end of the 7th century or the beginning of the 6th century BC.

Etymology

The name electrum is the Latinized form of the Greek word ḗlektron, mentioned in the Odyssey, referring to a metallic substance consisting of gold alloyed with silver. The same word was also used for the substance amber, likely because of the pale yellow color of certain varieties. (It is from amber's electrostatic properties that the modern English words electron and electricity are derived.) Electrum was often referred to as "white gold" in ancient times but could be more accurately described as pale gold because it is usually pale yellow or yellowish-white in color. The modern use of the term white gold usually concerns gold alloyed with any one or a combination of nickel, silver, platinum and palladium to produce a silver-colored gold.

Composition

Electrum consists primarily of gold and silver but is sometimes found with traces of platinum, copper and other metals. The name is mostly applied informally to compositions between 20–80% gold and 80–20% silver, but these are strictly called gold or silver depending on the dominant element. Analysis of the composition of electrum in ancient Greek coinage dating from about 600 BC shows that the gold content was about 55.5% in the coinage issued by Phocaea. In the early classical period the gold content of electrum ranged from 46% in Phokaia to 43% in Mytilene. In later coinage from these areas, dating to 326 BC, the gold content averaged 40% to 41%. In the Hellenistic period electrum coins with a regularly decreasing proportion of gold were issued by the Carthaginians. In the later Eastern Roman Empire controlled from Constantinople the purity of the gold coinage was reduced.

History

Electrum is mentioned in an account of an expedition sent by Pharaoh Sahure of the Fifth Dynasty of Egypt. It is also discussed by Pliny the Elder in his Naturalis Historia. It is also mentioned in the Bible, in the first chapter of the book of the prophet Ezekiel.

Early coinage

The earliest known electrum coins, Lydian coins and East Greek coins found under the Temple of Artemis at Ephesus, are currently dated to the last quarter of the 7th century BC (625–600 BC). Electrum is believed to have been used in coins c. 600 BC in Lydia during the reign of Alyattes.

Electrum was much better for coinage than gold, mostly because it was harder and more durable, but also because techniques for refining gold were not widespread at the time. The gold content of naturally occurring electrum in modern western Anatolia ranges from 70% to 90%, in contrast to the 45–55% of gold in electrum used in ancient Lydian coinage of the same geographical area. This suggests that the Lydians had already solved the refining technology for silver and were adding refined silver to the local native electrum some decades before introducing pure silver coins.

In Lydia, electrum was minted into coins weighing 4.7 grams (0.17 oz), each valued at 1/3 of a stater (from a word meaning "standard"). Three of these coins, with a total weight of about 14.1 grams (0.50 oz), equalled one stater, about one month's pay for a soldier. To complement the stater, fractions were made: the trite (third), the hekte (sixth), and so forth, down to 1/24 of a stater, and even 1/48 and 1/96 of a stater. The 1/96 stater weighed about 0.14 grams (0.0049 oz) to 0.15 grams (0.0053 oz). Larger denominations, such as one-stater coins, were minted as well.
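
As a quick arithmetic check (a sketch; the only inputs are the weights quoted above), the denominations divide out of the 14.1-gram stater as follows:

    # Weights of Lydian electrum denominations as fractions of a stater.
    STATER_G = 14.1   # grams, as quoted in the text

    fractions = {"stater": 1, "trite (1/3)": 3, "hekte (1/6)": 6,
                 "1/24 stater": 24, "1/48 stater": 48, "1/96 stater": 96}
    for name, denom in fractions.items():
        print(f"{name}: {STATER_G / denom:.3f} g")
    # 1/3 stater -> 4.700 g and 1/96 stater -> 0.147 g, matching the
    # 4.7 g coins and the 0.14-0.15 g figures quoted above.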

Because of variation in the composition of electrum, it was difficult to determine the exact worth of each coin. Widespread trading was hampered by this problem, as the intrinsic value of each electrum coin could not be easily determined. This suggests that one reason for the invention of coinage in that area was to increase the profits from seigniorage by issuing currency with a lower gold content than the commonly circulating metal.

These difficulties were eliminated circa 570 BC when the Croeseids, coins of pure gold and silver, were introduced. However, electrum currency remained common until approximately 350 BC. The simplest reason for this was that, because of the gold content, one 14.1 gram stater was worth as much as ten 14.1 gram silver pieces.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2338 2024-10-10 00:04:18

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,283

Re: Miscellany

2338) Electric shaver

Summary

Razor, keen-edged cutting implement for shaving or cutting hair. Prehistoric cave drawings show that clam shells, shark’s teeth, and sharpened flints were used as shaving implements. Solid gold and copper razors have been found in Egyptian tombs of the 4th millennium bce. According to the Roman historian Livy, the razor was introduced in Rome in the 6th century bce by Lucius Tarquinius Priscus, legendary king of Rome; but shaving did not become customary until the 5th century bce.

Steel razors with ornamented handles and individually hollow-ground blades were crafted in Sheffield, England, the centre of the cutlery industry, in the 18th and 19th centuries. The hard crucible steel produced there by Benjamin Huntsman in 1740 was at first rejected by the local manufacturers and only later, after its adoption in France, deemed superior.

In the United States a hoe-shaped safety razor, a steel blade with a guard along one edge, was produced in 1880, and, at the beginning of the 20th century, King Camp Gillette combined the hoe shape with the double-edged replaceable blade. In the early 1960s several countries began to manufacture stainless steel blades for safety razors, with the advantage of longer use.

The popularity of the long-wearing double-edged blade was greatly eclipsed by the development of inexpensive cartridge-style injector blades, designed to fit into disposable plastic handles. The cartridge had only one cutting edge, but many manufacturers produced a “double-edged” instrument by placing two blades on one side. By the early 21st century, safety razors with up to five blades were also common.

Electric razors were patented as early as 1900 in the United States, but the first to be successfully manufactured was that on which Jacob Schick, a retired U.S. Army colonel, applied for a patent in 1928 and that he placed on the market in 1931. Competitive models soon appeared. In the electric razor a shearing head, driven by a small motor, is divided into two sections: the outer consists of a series of slots to grip the hairs and the inner of a series of saw blades. Models vary in the number and design of the blades, in the shape of the shearing head (round or flat), and in auxiliary devices such as clippers for sideburns.

Details

An electric shaver (also known as a dry razor, electric razor, or simply shaver) is a razor with an electrically powered rotating or oscillating blade. The electric shaver usually does not require the use of shaving cream, soap, or water. The razor may be powered by a small DC motor, running either on batteries or on mains electricity. Many modern ones are powered using rechargeable batteries. Alternatively, an electro-mechanical oscillator driven by an AC-energized solenoid may be used. Some very early mechanical shavers had no electric motor and had to be powered by hand, for example by pulling a cord to drive a flywheel.

Electric shavers fall into two main categories: foil or rotary-style. Users tend to prefer one or the other. Many modern shavers are cordless; they are charged up with a plug charger or they are placed within a cleaning and charging unit.

History

The first person to receive a patent for a razor powered by electricity was John Francis O'Rourke, a New York civil engineer, with his US patent 616554 filed in 1898. The first working electric razor was invented in 1915 by the German engineer Johann Bruecker. Others followed suit, such as the American Col. Jacob Schick, considered to be the father of the modern electric razor, who patented his design in 1930. The Remington Rand Corporation developed the electric razor further, first producing its own electric razor in 1937. Another important inventor was Prof. Alexandre Horowitz, of Philips Laboratories in the Netherlands, who designed one of the first rotary razors, with a shaving head consisting of cutters that cut off the hair entering the head of the razor at skin level. Roland Ullmann of Braun in Germany was another inventor decisive for the development of the modern electric razor: he was the first to fuse rubber and metal elements on shavers, developed more than 100 electric razors for Braun, and over the course of his career filed well over 100 patents for innovations in dry shavers.

The major manufacturers introduce new improvements to the hair-cutting mechanism of their products every few years. Each manufacturer sells several different generations of cutting mechanism at the same time, and for each generation, several models with different features and accessories to reach various price points. The improvements to the cutting mechanisms tend to 'trickle down' to lower-priced models over time.

Early versions of electric razors were meant to be used on dry skin only. Many recent electric razors have been designed to allow for wet/dry use, which also allows them to be cleaned using running water or an included cleaning machine, reducing cleaning effort. Some patience is necessary when starting to use a razor of this type, as the skin usually takes some time to adjust to the way that the electric razor lifts and cuts the hairs. Moisturizers designed specifically for electric shaving are available.

Battery-powered electric razors

In the late 1940s, the first electric razors that were battery-powered entered the market. In 1960, Remington introduced the first rechargeable battery-powered electric razor. Battery-operated electric razors have been available using rechargeable batteries sealed inside the razor's case, previously nickel cadmium or, more recently, nickel metal hydride. Some modern shavers use Lithium-ion batteries (which do not suffer from memory effect). Sealed battery shavers either have built-in or external charging devices. Some shavers may be designed to plug directly into a wall outlet with a swing-out or pop-up plug, or have a detachable AC cord. Other shavers have recharging base units that plug into an AC outlet and provide DC power at the base contacts (eliminating the need for the AC-to-DC converter to be inside the razor, reducing the risk of electric shock). In order to prevent any risk of electric shock, shavers designed for wet use usually do not allow corded use and will not turn on until the charging adapter cord is disconnected or the shaver is removed from the charging base.

Razor vs. trimmer

An electric razor and an electric trimmer are essentially the same device in construction; the major differences lie in how they are used and in the blades they come with.

Electric razors are made specifically to provide a clean shave. They typically have less battery capacity but a more aggressive cutting action. Electric trimmers, on the other hand, are not meant for clean shaves: they come with special combs fixed onto them that aid in grooming and trimming beard stubble to desired shapes and lengths.

General

Some models, generally marketed as "travel razors" (or "travel shavers"), use removable rechargeable or disposable batteries, usually size AA or AAA. This offers the option of purchasing batteries while traveling instead of carrying a charging device.

Water-resistance and wet/dry electric shavers

Many modern electric shavers are water-resistant, allowing the user to clean the shaver in water. In order to ensure electrical safety, the charging/power cord for the shaver must be unplugged from it before the unit is cleaned using water.

Some shavers are labeled "Wet/Dry", which means the unit can be used in wet environments, for wet shaving. Such models are always battery-powered, and usually the electronics will not allow the unit to be turned on while the charging adapter is plugged in. This is necessary to ensure electrical safety, as it would be unsafe to use a plugged-in shaver in the bathtub or shower.

Lady shaver

A lady shaver is a device designed to shave a woman's body hair. The design is usually similar to a man's foil shaver. Often a shaving attachment is a feature of an epilator which is supplied as a separate head-attachment (different from the epilating one).

Body hair shaver

Traditional men's shavers are designed for shaving facial hair. However, other shaver products are made specifically to facilitate shaving of body hair.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2339 2024-10-11 00:03:18

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,283

Re: Miscellany

2339) Fire Extinguisher

Gist

There are five main types of fire extinguisher:

* Water, water mist or water spray fire extinguishers.
* Foam fire extinguishers.
* Dry Powder – standard or specialist fire extinguishers.
* Carbon Dioxide ('CO2') fire extinguishers.
* Wet Chemical fire extinguishers.

Fire extinguishers apply an agent that will cool burning heat, smother fuel or remove oxygen so the fire cannot continue to burn. A portable fire extinguisher can quickly control a small fire if applied by an individual properly trained. Fire extinguishers are located throughout every building on campus.

Summary

A fire extinguisher is a portable or movable apparatus used to put out a small fire by directing onto it a substance that cools the burning material, deprives the flame of oxygen, or interferes with the chemical reactions occurring in the flame. Water performs two of these functions: its conversion to steam absorbs heat, and the steam displaces the air from the vicinity of the flame. Many simple fire extinguishers, therefore, are small tanks equipped with hand pumps or sources of compressed gas to propel water through a nozzle. The water may contain a wetting agent to make it more effective against fires in upholstery, an additive to produce a stable foam that acts as a barrier against oxygen, or an antifreeze. Carbon dioxide is a common propellant, brought into play by removing the locking pin of the cylinder valve containing the liquefied gas; this method has superseded the process, used in the soda-acid fire extinguisher, of generating carbon dioxide by mixing sulfuric acid with a solution of sodium bicarbonate.

Numerous agents besides water are used; the selection of the most appropriate one depends primarily on the nature of the materials that are burning. Secondary considerations include cost, stability, toxicity, ease of cleanup, and the presence of electrical hazard.

Small fires are classified according to the nature of the burning material. Class A fires involve wood, paper, and the like; Class B fires involve flammable liquids, such as cooking fats and paint thinners; Class C fires are those in electrical equipment; Class D fires involve highly reactive metals, such as sodium and magnesium. Water is suitable for putting out fires of only one of these classes (A), though these are the most common. Fires of classes A, B, and C can be controlled by carbon dioxide, halogenated hydrocarbons such as halons, or dry chemicals such as sodium bicarbonate or ammonium dihydrogen phosphate. Class D fires ordinarily are combated with dry chemicals.
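
The class/agent pairings in the paragraph above can be restated compactly (a sketch; the lists contain only the agents the text itself names):

    # Extinguishing agents the text lists as suitable for each fire class.
    SUITABLE_AGENTS = {
        "A": ["water", "carbon dioxide", "halons", "dry chemical"],  # wood, paper
        "B": ["carbon dioxide", "halons", "dry chemical"],  # flammable liquids
        "C": ["carbon dioxide", "halons", "dry chemical"],  # electrical equipment
        "D": ["dry chemical"],                              # reactive metals
    }

    def agents_for(fire_class):
        """Return the agents listed as suitable for a given fire class."""
        return SUITABLE_AGENTS.get(fire_class.upper(), [])

    print(agents_for("b"))   # ['carbon dioxide', 'halons', 'dry chemical']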

A primitive hand pump for directing water at a fire was invented by Ctesibius of Alexandria about 200 bce, and similar devices were employed during the Middle Ages. In the early 1700s, devices created independently by the English chemist Ambrose Godfrey and the French chemist C. Hoppfer used explosive charges to disperse fire-suppressing solutions. The English inventor Capt. George Manby introduced a handheld fire extinguisher, a three-gallon tank containing a pressurized solution of potassium carbonate, in 1817. Modern incarnations employing a variety of chemical solutions are essentially modifications of Manby's design.

Details

A fire extinguisher is a handheld active fire protection device usually filled with a dry or wet chemical used to extinguish or control small fires, often in emergencies. It is not intended for use on an out-of-control fire, such as one which has reached the ceiling, endangers the user (i.e., no escape route, smoke, explosion hazard, etc.), or otherwise requires the equipment, personnel, resources or expertise of a fire brigade. Typically, a fire extinguisher consists of a hand-held cylindrical pressure vessel containing an agent that can be discharged to extinguish a fire. Fire extinguishers manufactured with non-cylindrical pressure vessels also exist but are less common.

There are two main types of fire extinguishers: stored-pressure and cartridge-operated. In stored pressure units, the expellant is stored in the same chamber as the firefighting agent itself. Depending on the agent used, different propellants are used. With dry chemical extinguishers, nitrogen is typically used; water and foam extinguishers typically use air. Stored pressure fire extinguishers are the most common type. Cartridge-operated extinguishers contain the expellant gas in a separate cartridge that is punctured before discharge, exposing the propellant to the extinguishing agent. This type is not as common, used primarily in areas such as industrial facilities, where they receive higher-than-average use. They have the advantage of simple and prompt recharge, allowing an operator to discharge the extinguisher, recharge it, and return to the fire in a reasonable amount of time. Unlike stored pressure types, these extinguishers use compressed carbon dioxide instead of nitrogen, although nitrogen cartridges are used on low-temperature (–60 rated) models. Cartridge-operated extinguishers are available in dry chemical and dry powder types in the U.S. and water, wetting agent, foam, dry chemical (classes ABC and B.C.), and dry powder (class D) types in the rest of the world.

Fire extinguishers are further divided into handheld and cart-mounted (also called wheeled extinguishers). Handheld extinguishers weigh from 0.5 to 14 kilograms (1.1 to 30.9 lb), and are hence, easily portable by hand. Cart-mounted units typically weigh more than 23 kilograms (51 lb). These wheeled models are most commonly found at construction sites, airport runways, heliports, as well as docks and marinas.

History

The first fire extinguisher of which there is any record was patented in England in 1723 by Ambrose Godfrey, a celebrated chemist at that time. It consisted of a cask of fire-extinguishing liquid containing a pewter chamber of gunpowder. This was connected with a system of fuses which were ignited, exploding the gunpowder and scattering the solution. This device was probably used to a limited extent, as Bradley's Weekly Messenger for November 7, 1729, refers to its efficiency in stopping a fire in London.

A portable pressurised fire extinguisher, the 'Extincteur', was invented by the British Captain George William Manby and demonstrated in 1816 to the 'Commissioners for the affairs of Barracks'; it consisted of a copper vessel holding 3 gallons (13.6 liters) of pearl ash (potassium carbonate) solution under compressed air. When operated, it expelled the liquid onto the fire.

One of the first fire extinguisher patents was issued to Alanson Crane of Virginia on Feb. 10, 1863.

Thomas J. Martin, an American inventor, was awarded a patent for an improvement in the Fire Extinguishers on March 26, 1872. His invention is listed in the U. S. Patent Office in Washington, DC under patent number 125,603.

The soda-acid extinguisher was first patented in 1866 by Francois Carlier of France, which mixed a solution of water and sodium bicarbonate with tartaric acid, producing the propellant carbon dioxide (CO2) gas. A soda-acid extinguisher was patented in the U.S. in 1880 by Almon M. Granger. His extinguisher used the reaction between sodium bicarbonate solution and sulfuric acid to expel pressurized water onto a fire. A vial of concentrated sulfuric acid was suspended in the cylinder. Depending on the type of extinguisher, the vial of acid could be broken in one of two ways. One used a plunger to break the acid vial, while the second released a lead stopple that held the vial closed. Once the acid was mixed with the bicarbonate solution, carbon dioxide gas was expelled and thereby pressurized the water. The pressurized water was forced from the canister through a nozzle or short length of hose.
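
For reference, the balanced reaction between sodium bicarbonate and sulfuric acid that pressurized these units is standard chemistry:

    2 NaHCO₃ + H₂SO₄ → Na₂SO₄ + 2 H₂O + 2 CO₂

The carbon dioxide produced pressurizes the vessel and forces the water out through the nozzle or hose.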

The cartridge-operated extinguisher was invented by Read & Campbell of England in 1881, which used water or water-based solutions. They later invented a carbon tetrachloride model called the "Petrolex" which was marketed toward automotive use.

The chemical foam extinguisher was invented in 1904 by Aleksandr Loran in Russia, based on his previous invention of fire-fighting foam. Loran first used it to extinguish a pan of burning naphtha. It worked and looked similar to the soda-acid type, but the inner parts were slightly different. The main tank contained a solution of sodium bicarbonate in water, whilst the inner container (somewhat larger than the equivalent in a soda-acid unit) contained a solution of aluminium sulphate. When the solutions were mixed, usually by inverting the unit, the two liquids reacted to create a frothy foam and carbon dioxide gas, and the gas expelled the foam in the form of a jet. Although liquorice-root extracts and similar compounds were used as additives (stabilizing the foam by reinforcing the bubble walls), there was no "foam compound" in these units: the foam was a combination of the products of the chemical reactions, sodium and aluminium salt-gels inflated by the carbon dioxide. Because of this, the foam was discharged directly from the unit, with no need for an aspirating branchpipe (as in newer mechanical foam types).

Special versions were made for rough service and vehicle mounting, known as fire department types. Key features were a screw-down stopper that kept the liquids from mixing until it was manually opened, carrying straps, a longer hose, and a shut-off nozzle. Fire department types were often private-label versions of major brands, sold by apparatus manufacturers to match their vehicles; examples are Pirsch, Ward LaFrance, Mack, Seagrave, etc. These types are some of the most collectable extinguishers, as they cross into both the apparatus-restoration and fire-extinguisher areas of interest.
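
For reference, the balanced reaction between the aluminium sulphate and sodium bicarbonate solutions is:

    Al₂(SO₄)₃ + 6 NaHCO₃ → 2 Al(OH)₃ + 3 Na₂SO₄ + 6 CO₂

The aluminium hydroxide gel stiffens the bubble walls while the carbon dioxide both inflates the foam and expels it.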

In 1910, The Pyrene Manufacturing Company of Delaware filed a patent for using carbon tetrachloride (CTC, or CCl4) to extinguish fires.[8] The liquid vaporized and extinguished the flames by inhibiting the chemical chain reaction of the combustion process (it was an early 20th-century presupposition that the fire suppression ability of carbon tetrachloride relied on oxygen removal). In 1911, they patented a small, portable extinguisher that used the chemical. This consisted of a brass or chrome container with an integrated handpump, which was used to expel a jet of liquid towards the fire. It was usually of 1 imperial quart (1.1 L) or 1 imperial pint (0.57 L) capacity but was also available in up to 2 imperial gallons (9.1 L) size. As the container was unpressurized, it could be refilled after use through a filling plug with a fresh supply of CTC.

Another type of carbon tetrachloride extinguisher was the fire grenade. This consisted of a glass sphere filled with CTC, that was intended to be hurled at the base of a fire (early ones used salt-water, but CTC was more effective). Carbon tetrachloride was suitable for liquid and electrical fires and the extinguishers were fitted to motor vehicles. Carbon tetrachloride extinguishers were withdrawn in the 1950s because of the chemical's toxicity – exposure to high concentrations damages the nervous system and internal organs. Additionally, when used on a fire, the heat can convert CTC to phosgene gas, formerly used as a chemical weapon.

The carbon dioxide extinguisher was invented (at least in the US) by the Walter Kidde Company in 1924, in response to Bell Telephone's request for an electrically non-conductive chemical for extinguishing the previously difficult-to-extinguish fires in telephone switchboards. It consisted of a tall metal cylinder containing 7.5 pounds (3.4 kg) of CO2 with a wheel valve and a woven brass, cotton-covered hose, with a composite funnel-like horn as a nozzle. CO2 is still popular today, as it is an ozone-friendly clean agent and is used heavily in film and television production to extinguish burning stuntmen. Carbon dioxide extinguishes fire mainly by displacing oxygen; it was once thought to work by cooling, although this effect on most fires is negligible.

An anecdotal report of a carbon dioxide fire extinguisher was published in Scientific American in 1887, describing a basement fire at a Louisville, Kentucky pharmacy that melted a lead pipe charged with CO2 (called carbonic acid gas at the time) intended for a soda fountain; the escaping gas immediately extinguished the flames, saving the building. Also in 1887, carbonic acid gas was described as a fire extinguisher for engine chemical fires at sea and ashore.

In 1928, DuGas (later bought by ANSUL) came out with a cartridge-operated dry chemical extinguisher, which used sodium bicarbonate specially treated with chemicals to render it free-flowing and moisture-resistant. It consisted of a copper cylinder with an internal CO2 cartridge; the operator turned a wheel valve on top to puncture the cartridge and squeezed a lever on the valve at the end of the hose to discharge the chemical. This was the first agent available for large-scale three-dimensional liquid and pressurized-gas fires, but it remained largely a specialty type until the 1950s, when small dry chemical units were marketed for home use. ABC dry chemical came over from Europe in the 1950s, with Super-K being invented in the early 1960s and Purple-K being developed by the United States Navy in the late 1960s. Manually applied dry agents such as graphite for class D (metal) fires had existed since World War II, but it was not until 1949 that Ansul introduced a pressurized extinguisher using an external CO2 cartridge to discharge the agent. Met-L-X (sodium chloride) was the first such extinguisher developed in the US, with graphite, copper, and several other types being developed later.

In the 1940s, Germany invented the liquid chlorobromomethane (CBM) for use in aircraft. It was more effective and slightly less toxic than carbon tetrachloride and was used until 1969. Methyl bromide was discovered as an extinguishing agent in the 1920s and was used extensively in Europe. It is a low-pressure gas that works by inhibiting the chain reaction of the fire and is the most toxic of the vaporizing liquids, used until the 1960s. The vapor and combustion by-products of all vaporizing liquids were highly toxic and could cause death in confined spaces.

In the 1970s, Halon 1211 came over to the United States from Europe where it had been used since the late 1940s or early 1950s. Halon 1301 had been developed by DuPont and the United States Army in 1954. Both 1211 and 1301 work by inhibiting the chain reaction of the fire, and in the case of Halon 1211, cooling class A fuels as well. Halon is still in use today but is falling out of favor for many uses due to its environmental impact. Europe and Australia have severely restricted its use, since the Montreal Protocol of 1987. Less severe restrictions have been implemented in the United States, the Middle East, and Asia.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2340 2024-10-12 00:02:57

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,283

Re: Miscellany

2340) Ink Cartridge

Gist

An ink cartridge or inkjet cartridge is the component of an inkjet printer that contains the ink to be deposited onto paper during printing.

Is an ink cartridge refillable?

Yes, in many cases you can refill ink cartridges. However, if the cartridge has dried ink inside it, don't try refilling it as it will likely not work properly.

Details

An ink cartridge or inkjet cartridge is the component of an inkjet printer that contains the ink to be deposited onto paper during printing. It consists of one or more ink reservoirs and can include electronic contacts and a chip to exchange information with the printer.

Design:

Thermal

Most consumer inkjet printers use a thermal inkjet. Inside each partition of the ink reservoir is a heating element with a tiny metal plate or resistor. In response to a signal given by the printer, a small current flows through the plate or resistor, heating it, and the ink in contact with the heated element is vaporized into a tiny steam bubble inside the nozzle. As a result, an ink droplet is forced out of the cartridge nozzle onto the paper. This process takes a fraction of a millisecond.
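
To put "a fraction of a millisecond" in perspective, a short illustrative calculation (the cycle time is an assumed round number, not a figure from the text):

    # If one heat-bubble-eject cycle takes about 0.1 ms, a single nozzle
    # can fire at most roughly 10,000 droplets per second.
    cycle_s = 0.1e-3              # assumed duration of one firing cycle
    max_drops_per_s = 1 / cycle_s
    print(f"{max_drops_per_s:,.0f} droplets/s per nozzle")   # 10,000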

The printing depends on the smooth flow of ink, which can be hindered if the ink begins to dry at the print head, as can happen when an ink level becomes low. Dried ink can be cleaned from a cartridge print head using 91% denatured isopropyl alcohol (not rubbing alcohol). Tap water contains contaminants that may clog the print head, so distilled water and a lint-free cloth is recommended.

The ink also acts as a coolant to protect the metal-plate heating elements: when the ink supply is depleted and printing is attempted, the heating elements in thermal cartridges often burn out, permanently damaging the print head. When the ink first begins to run low, the cartridge should be refilled or replaced to avoid overheating damage to the print head.

Piezoelectric

Piezoelectric printers use a piezoelectric crystal in each nozzle instead of a heating element. When current is applied, the crystal changes shape or size, increasing the pressure in the ink channel and thus forcing a droplet of ink from the nozzle. Two types of crystals are used: those that elongate when subjected to electricity, and bi-morphs, which bend. The ink channels in a piezoelectric inkjet print head can be formed using a variety of techniques, but one common method is lamination of a stack of metal plates, each of which includes precision micro-fabricated features of various shapes (ink channel, orifice, reservoir, and crystal). This cool environment allows the use of inks that react badly when heated: in thermal inkjets, roughly 1/1000 of the ink in every jet is vaporized by the intense heat, and the ink must be designed not to clog the printer with the products of thermal decomposition. Piezoelectric printers can in some circumstances produce a smaller ink drop than thermal inkjets.

Parts

Cartridge body

Stores the ink of the ink cartridge. May contain hydrophobic foam that prevents refilling.

Printhead

Some ink cartridges combine ink storage and printheads into one assembly with four main additional parts (the last depending on the printer technology):

* Nozzle Plate: Expels ink onto the paper.
* Cover Plate: Protects the nozzles.
* Common Ink Chamber: A reservoir holding a small amount of ink prior to being 'jetted' onto the paper.
* Piezoelectric Substrate (in piezoelectric printers): houses the piezoelectric crystal.
* Metallic Plate / Resistor (in thermal printers): heats the ink with a small current.

Variants

* Color inkjets use the CMYK color model: cyan, magenta, yellow, and the key, black (a standard conversion from RGB to CMYK is sketched after this list). Over the years, two distinct forms of black have become available: one that blends readily with other colors for graphical printing, and a near-waterproof variant for text.

* Most modern inkjets carry a black cartridge for text, and either a single CMYK combined or a discrete cartridge for each color; while keeping colors separate was initially rare, it has become common in more recent years. Some higher-end inkjets offer cartridges for extra colors.

* Some cartridges contain ink specially formulated for printing photographs.

* All printer suppliers produce their own type of ink cartridges. Cartridges for different printers are often incompatible — either physically or electrically.

* Some manufacturers incorporate the printer's head into the cartridge (examples include HP, Dell, and Lexmark), while others such as Epson keep the print head a part of the printer itself. Each side claims that its approach leads to lower costs for the consumer.

* In 2014, Epson introduced a range of printers that use refillable ink tanks, providing a major reduction in printing cost. These operate similarly to continuous ink system printers. Epson does not subsidize the cost of these printers, which it terms its "EcoTank" range.
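
As mentioned in the first point above, here is a minimal sketch of the common textbook RGB-to-CMYK conversion (the generic formula, not any particular printer driver's algorithm):

    # Convert 8-bit RGB to CMYK fractions in [0, 1]: the key (black)
    # channel is the darkness shared by all three channels, and the
    # cyan/magenta/yellow channels carry what remains.
    def rgb_to_cmyk(r, g, b):
        if (r, g, b) == (0, 0, 0):
            return 0.0, 0.0, 0.0, 1.0      # pure black: key only
        r_, g_, b_ = r / 255, g / 255, b / 255
        k = 1 - max(r_, g_, b_)
        c = (1 - r_ - k) / (1 - k)
        m = (1 - g_ - k) / (1 - k)
        y = (1 - b_ - k) / (1 - k)
        return c, m, y, k

    print(rgb_to_cmyk(255, 0, 0))    # red   -> (0.0, 1.0, 1.0, 0.0)
    print(rgb_to_cmyk(128, 128, 0))  # olive -> (0.0, 0.0, 1.0, ~0.5)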

Pricing

Ink cartridges are typically priced at $13 to $75 per US fluid ounce ($1,664 to $9,600 per US gallon; $440 to $2,536 per litre) of ink, meaning that refill cartridges sometimes cost a substantial fraction of the cost of the printer. To save money, many people use compatible ink cartridges from a vendor other than the printer manufacturer. A study by the British consumer watchdog Which? found that in some cases printer ink from the manufacturer is more expensive than champagne. Others use aftermarket inks, refilling their own ink cartridges using a kit that includes bulk ink. The high cost of cartridges has also provided an incentive for counterfeiters to supply cartridges falsely claiming to be made by the original manufacturer; according to an International Data Corporation estimate, this cost the print cartridge industry $3 billion in lost sales in 2009.
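
The per-gallon and per-litre figures above are plain unit conversions, which a couple of lines verify (a sketch using standard US customary conversion factors):

    # Check the quoted per-volume ink prices.
    US_FL_OZ_PER_GAL = 128      # US fluid ounces per US gallon
    US_FL_OZ_PER_L = 33.814     # US fluid ounces per litre

    for price_per_floz in (13, 75):
        per_gal = price_per_floz * US_FL_OZ_PER_GAL
        per_l = price_per_floz * US_FL_OZ_PER_L
        print(f"${price_per_floz}/fl oz = ${per_gal:,.0f}/gal = ${per_l:,.0f}/L")
    # $13/fl oz = $1,664/gal = $440/L; $75/fl oz = $9,600/gal = $2,536/L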

Another alternative involves modifying an original cartridge to allow the use of a continuous ink system with external ink tanks. Some manufacturers, including Canon and Epson, have introduced new models featuring built-in continuous ink systems, which was widely seen as a welcome move by users.

Consumer exploitation lawsuits

It can sometimes be cheaper to buy a new printer than to replace the set of ink cartridges supplied with the printer. The major printer manufacturers − Hewlett Packard, Lexmark, Dell, Canon, Epson and Brother − use a "razor and blades" business model, often breaking even or losing money selling printers while expecting to make a profit by selling cartridges over the life of the printer. Since much of the printer manufacturers' profits are from ink and toner cartridge sales, some of these companies have taken various actions against aftermarket cartridges.

Some printer manufacturers set up their cartridges to interact with the printer, preventing operation when the ink level is low or when the cartridge has been refilled. One researcher with the magazine Which? overrode such an interlock and found that in one case he could print up to 38% more good-quality pages after the chip had declared the cartridge empty. In the United Kingdom, the cost of ink was the subject of a 2003 Office of Fair Trading investigation, after Which? magazine accused manufacturers of a lack of transparency about the price of ink and called for an industry standard for measuring ink cartridge performance. Which? stated that color HP cartridges cost over seven times more per millilitre than 1985 Dom Pérignon.

In 2006, Epson lost a class action lawsuit claiming that its inkjet printers and ink cartridges halt printing on "empty" cartridge notifications even when usable ink still remains. Epson settled the case by giving $45 e-coupons for its online store to people who had bought Epson inkjet printers and ink cartridges from April 8, 1999, to May 8, 2006.

In 2010, HP lost three class action lawsuits: (1) claims that HP inkjet printers give false low-ink notifications, (2) claims that cyan ink is consumed even when printing only with black ink, and (3) claims that cartridges are disabled by printers upon being detected as "empty" even when they are not yet empty. HP paid $5 million in settlement.

In 2017, Halte à l'Obsolescence Programmée (HOP), or "End Planned Obsolescence", filed a lawsuit and won against Brother, Canon, Epson, HP and other companies for intentionally shortening product life spans, inkjet printers and ink cartridges included. The companies were fined €15,000.

In September 2018, HP lost a class action lawsuit in which plaintiffs claimed that HP printer firmware updates caused fake error messages when third-party ink cartridges were used. HP settled the case for $1.5 million.

In October 2019, Epson had a class action complaint filed against it for printer firmware updates that allegedly prevented printer operation upon detection of third-party ink cartridges.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2341 2024-10-13 00:03:59

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,283

Re: Miscellany

2341) Fruit

Gist

Fruits and vegetables contain important vitamins, minerals and plant chemicals. They also contain fibre. There are many varieties of fruit and vegetables available and many ways to prepare, cook and serve them. A diet high in fruit and vegetables can help protect you against cancer, diabetes and heart disease.

In a botanical sense, a fruit is the fleshy or dry ripened ovary of a flowering plant, enclosing the seed or seeds. Apricots, bananas, and grapes, as well as bean pods, corn grains, tomatoes, cucumbers, and (in their shells) acorns and almonds, are all technically fruits.

Fruits are an excellent source of essential vitamins and minerals, and they are high in fiber. Fruits also provide a wide range of health-boosting antioxidants, including flavonoids. Eating a diet high in fruits and vegetables can reduce a person's risk of developing heart disease, cancer, inflammation, and diabetes.

Summary

In botany, a fruit is the seed-bearing structure in flowering plants that is formed from the ovary after flowering.

Fruits are the means by which flowering plants (also known as angiosperms) disseminate their seeds. Edible fruits in particular have long been propagated by the movements of humans and other animals in a symbiotic relationship that provides seed dispersal for the one group and nutrition for the other; humans and many other animals have become dependent on fruits as a source of food. Consequently, fruits account for a substantial fraction of the world's agricultural output, and some (such as the apple and the pomegranate) have acquired extensive cultural and symbolic meanings.

In common language usage, fruit normally means the seed-associated fleshy structures (or produce) of plants that typically are sweet or sour and edible in the raw state, such as apples, bananas, grapes, lemons, oranges, and strawberries. In botanical usage, the term fruit also includes many structures that are not commonly called 'fruits' in everyday language, such as nuts, bean pods, corn kernels, tomatoes, and wheat grains.

Details

A fruit is the fleshy or dry ripened ovary of a flowering plant, enclosing the seed or seeds. Thus, apricots, bananas, and grapes, as well as bean pods, corn grains, tomatoes, cucumbers, and (in their shells) acorns and almonds, are all technically fruits. Popularly, however, the term is restricted to the ripened ovaries that are sweet and either succulent or pulpy. For treatment of the cultivation of fruits, see fruit farming. For treatment of the nutrient composition and processing of fruits, see fruit processing.

Botanically, a fruit is a mature ovary and its associated parts. It usually contains seeds, which have developed from the enclosed ovule after fertilization, although development without fertilization, called parthenocarpy, is known, for example, in bananas. Fertilization induces various changes in a flower: the anthers and stigma wither, the petals drop off, and the sepals may be shed or undergo modifications; the ovary enlarges, and the ovules develop into seeds, each containing an embryo plant. The principal purpose of the fruit is the protection and dissemination of the seed.

Fruits are important sources of dietary fibre, vitamins (especially vitamin C), and antioxidants. Although fresh fruits are subject to spoilage, their shelf life can be extended by refrigeration or by the removal of oxygen from their storage or packaging containers. Fruits can be processed into juices, jams, and jellies and preserved by dehydration, canning, fermentation, and pickling. Waxes, such as those from bayberries (wax myrtles), and vegetable ivory from the hard fruits of a South American palm species (Phytelephas macrocarpa) are important fruit-derived products. Various drugs come from fruits, such as morphine from the fruit of the opium poppy.

Types of fruits

The concept of “fruit” is based on such an odd mixture of practical and theoretical considerations that it accommodates cases in which one flower gives rise to several fruits (larkspur) as well as cases in which several flowers cooperate in producing one fruit (mulberry). Pea and bean plants, exemplifying the simplest situation, show in each flower a single pistil (female structure), traditionally thought of as a megasporophyll or carpel. The carpel is believed to be the evolutionary product of an originally leaflike organ bearing ovules along its margin. This organ was somehow folded along the median line, with a meeting and coalescing of the margins of each half, the result being a miniature closed but hollow pod with one row of ovules along the suture. In many members of the rose and buttercup families, each flower contains a number of similar single-carpelled pistils, separate and distinct, which together represent what is known as an apocarpous gynoecium. In other cases, two to several carpels (still thought of as megasporophylls, although perhaps not always justifiably) are assumed to have fused to produce a single compound gynoecium (pistil), whose basal part, or ovary, may be uniloculate (with one cavity) or pluriloculate (with several compartments), depending on the method of carpel fusion.

Most fruits develop from a single pistil. A fruit resulting from the apocarpous gynoecium (several pistils) of a single flower may be referred to as an aggregate fruit. A multiple fruit represents the gynoecia of several flowers. When additional flower parts, such as the stem axis or floral tube, are retained or participate in fruit formation, as in the apple or strawberry, an accessory fruit results.

Certain plants, mostly cultivated varieties, spontaneously produce fruits in the absence of pollination and fertilization; such natural parthenocarpy leads to seedless fruits such as bananas, oranges, grapes, and cucumbers. Since 1934, seedless fruits of tomato, cucumber, peppers, holly, and others have been obtained for commercial use by administering plant growth substances, such as indoleacetic acid, indolebutyric acid, naphthalene acetic acid, and β-naphthoxyacetic acid, to the ovaries in flowers (induced parthenocarpy).

Classification systems for mature fruits take into account the number of carpels constituting the original ovary, dehiscence (opening) versus indehiscence, and dryness versus fleshiness. The properties of the ripened ovary wall, or pericarp, which may develop entirely or in part into fleshy, fibrous, or stony tissue, are important. Often three distinct pericarp layers can be identified: the outer (exocarp), the middle (mesocarp), and the inner (endocarp). All purely morphological systems (i.e., classification schemes based on structural features) are artificial. They ignore the fact that fruits can be understood only functionally and dynamically.

There are two broad categories of fruits: fleshy fruits, in which the pericarp and accessory parts develop into succulent tissues, as in eggplants, oranges, and strawberries; and dry fruits, in which the entire pericarp becomes dry at maturity. Fleshy fruits include (1) the berries, such as tomatoes, blueberries, and cherries, in which the entire pericarp and the accessory parts are succulent tissue, (2) aggregate fruits, such as blackberries and strawberries, which form from a single flower with many pistils, each of which develops into fruitlets, and (3) multiple fruits, such as pineapples and mulberries, which develop from the mature ovaries of an entire inflorescence. Dry fruits include the legumes, cereal grains, capsulate fruits, and nuts.

Brazil nut

As strikingly exemplified by the word nut, popular terms often do not properly describe the botanical nature of certain fruits. A Brazil nut, for example, is a thick-walled seed enclosed in a likewise thick-walled capsule along with several sister seeds. A coconut is a drupe (a stony-seeded fruit) with a fibrous outer part. A walnut is a drupe in which the pericarp has differentiated into a fleshy outer husk and an inner hard “shell”; the “meat” represents the seed—two large convoluted cotyledons, a minute epicotyl and hypocotyl, and a thin papery seed coat. A peanut is an indehiscent legume fruit. An almond is a drupe “stone”; i.e., the hardened endocarp usually contains a single seed. Botanically speaking, blackberries and raspberries are not true berries but aggregates of tiny drupes. A juniper “berry” is not a fruit at all but the cone of a gymnosperm. A mulberry is a multiple fruit made up of small nutlets surrounded by fleshy sepals. And a strawberry represents a much-swollen receptacle (the tip of the flower stalk bearing the flower parts) bearing on its convex surface an aggregation of tiny brown achenes (small single-seeded fruits).

Dispersal

Fruits play an important role in the seed dispersal of many plant species. In dehiscent fruits, such as poppy capsules, the seeds are usually dispersed directly from the fruits, which may remain on the plant. In fleshy or indehiscent fruits, the seeds and fruit are commonly moved away from the parent plant together. In many plants, such as grasses and lettuce, the outer integument and ovary wall are completely fused, so seed and fruit form one entity; such seeds and fruits can logically be described together as “dispersal units,” or diaspores. For further discussion on seed dispersal, see seed: agents of dispersal.

Animal dispersal

A wide variety of animals aid in the dispersal of seeds, fruits, and diaspores. Many birds and mammals, ranging in size from mice and kangaroo rats to elephants, act as dispersers when they eat fruits and diaspores. In the tropics, chiropterochory (dispersal by large bats such as flying foxes, Pteropus) is particularly important. Fruits adapted to these animals are relatively large and drab in colour with large seeds and a striking (often rank) odour. Such fruits are accessible to bats because of the pagoda-like structure of the tree canopy, fruit placement on the main trunk, or suspension from long stalks that hang free of the foliage. Examples include mangoes, guavas, breadfruit, carob, and several fig species. In South Africa a desert melon (Cucumis humifructus) participates in a symbiotic relationship with aardvarks—the animals eat the fruit for its water content and bury their own dung, which contains the seeds, near their burrows.

Additionally, furry terrestrial mammals are the agents most frequently involved in epizoochory, the inadvertent carrying by animals of dispersal units. Burlike fruits, or those diaspores provided with spines, hooks, claws, bristles, barbs, grapples, and prickles, are genuine hitchhikers, clinging tenaciously to their carriers. Their functional shape is achieved in various ways: in cleavers, or goose grass (Galium aparine), and in enchanter’s nightshade (Circaea lutetiana), the hooks are part of the fruit itself; in common agrimony (Agrimonia eupatoria), the fruit is covered by a persistent calyx (the sepals, parts of the flower, which remain attached beyond the usual period) equipped with hooks; and in wood avens (Geum urbanum), the persistent styles have hooked tips. Other examples are bur marigolds, or beggar’s-ticks (Bidens species); buffalo bur (Solanum rostratum); burdock (Arctium); Acaena; and many Medicago species. The last-named, with dispersal units highly resistant to damage from hot water and certain chemicals (dyes), have achieved wide global distribution through the wool trade. A somewhat different principle is employed by the so-called trample burrs, said to lodge themselves between the hooves of large grazing mammals. Examples are mule grab (Proboscidea) and the African grapple plant (Harpagophytum). In water burrs, such as those of the water chestnut Trapa, the spines should probably be considered as anchoring devices.

Birds, being preening animals, rarely carry burlike diaspores on their bodies. They do, however, transport the very sticky (viscid) fruits of Pisonia, a tropical tree of the four-o’clock family, to distant Pacific islands in this way. Small diaspores, such as those of sedges and certain grasses, may also be carried in the mud sticking to waterfowl and terrestrial birds.

Synzoochory, deliberate carrying of diaspores by animals, is practiced when birds carry diaspores in their beaks. The European mistle thrush (Turdus viscivorus) deposits the viscid seeds of mistletoe (Viscum album) on potential host plants when, after a meal of the berries, it whets its bill on branches or simply regurgitates the seeds. The North American (Phoradendron) and Australian (Amyema) mistletoes are dispersed by various birds, and the comparable tropical species of the plant family Loranthaceae by flower-peckers (of the bird family Dicaeidae), which have a highly specialized gizzard that allows seeds to pass through but retains insects. Plants may also profit from the forgetfulness and sloppy habits of certain nut-eating birds that cache part of their food but neglect to recover everything or that drop units on their way to a hiding place. Best known in this respect are the nutcrackers (Nucifraga), which feed largely on the “nuts” of beech, oak, walnut, chestnut, and hazelnut; the jays (Garrulus), which hide hazelnuts and acorns; the nuthatches; and the California woodpecker (Melanerpes formicivorus), which may embed literally thousands of acorns, almonds, and pecan nuts in bark fissures or holes of trees. Rodents may aid in dispersal by stealing the embedded diaspores and burying them. In Germany, an average jay may transport about 4,600 acorns per season, over distances of up to 4 km (2.5 miles).

Most ornithochores (plants with bird-dispersed seeds) have conspicuous diaspores attractive to such fruit-eating birds as thrushes, pigeons, barbets (members of the bird family Capitonidae), toucans (family Ramphastidae), and hornbills (family Bucerotidae), all of which either excrete or regurgitate the hard part undamaged. Such diaspores have a fleshy, sweet, or oil-containing edible part; a striking colour (often red or orange); no pronounced smell; protection against being eaten prematurely, in the form of acids and tannins that are present only in the green fruit; protection of the seed against digestion, afforded by bitterness, hardness, or the presence of poisonous compounds; permanent attachment; and, finally, absence of a hard outer cover. In contrast to bat-dispersed diaspores, they occupy no special position on the plant. Examples are rose hips, plums, dogwood fruits, barberry, red currant, mulberry, nutmeg fruits, figs, blackberries, and others. The natural and abundant occurrence of Euonymus, which is a largely tropical genus, in temperate Europe and Asia, can be understood only in connection with the activities of birds. Birds also contributed substantially to the repopulation with plants of the Krakatoa island group in Indonesia after the catastrophic volcanic eruption there in 1883. Birds have made Lantana (originally American) a pest in Indonesia and Australia; the same is true of black cherries (Prunus serotina) in parts of Europe, Rubus species in Brazil and New Zealand, and olives (Olea europaea) in Australia.

Many intact fruits and seeds can serve as fish bait—those of Sonneratia, for example, for the catfish Arius maculatus. Certain Amazon River fishes react positively to the audible “explosions” of the ripe fruits of Eperua rubiginosa. The largest freshwater wetlands in the world, found in Brazil’s Pantanal, become inundated with seasonal floods at a time when many plants are releasing their fruits. Pacu fish (Metynnis) feed on submerged and floating fruits and disperse the seeds when they defecate. It is thought that at least one plant species (Bactris glaucescens) relies exclusively on pacu for seed dispersal.

Fossil evidence indicates that saurochory, dispersal by reptiles, is very ancient. The giant Galapagos tortoise is important for the dispersal of local cacti and tomatoes, and iguanas are known to eat and disperse a number of smaller fruits, including the iguana hackberry (Celtis iguanaea). The name alligator apple, for Annona glabra, refers to its method of dispersal, an example of saurochory.

Wind dispersal

Winged fruits are most common in trees and shrubs, such as maple, ash, elm, birch, alder, and dipterocarps (a family of about 600 species of Old World tropical trees). The one-winged propeller type, as found in maple, is called a samara. When fruits have several wings on their sides, rotation may result, as in rhubarb and dock species. Sometimes accessory parts form the wings—for example, the bracts (small green leaflike structures that grow just below flowers) in linden (Tilia).

Many fruits form plumes, some derived from persisting and ultimately hairy styles, as in clematis, avens, and anemones; some from the perianth, as in the sedge family (Cyperaceae); and some from the pappus, a calyx structure, as in dandelion and Jack-go-to-bed-at-noon (Tragopogon). In woolly fruits and seeds, the pericarp or the seed coat is covered with cottonlike hairs—e.g., willow, poplar or cottonwood, cotton, and balsa. In some cases, the hairs may serve double duty in that they function in water dispersal as well as in wind dispersal.

Poppies have a mechanism in which the wind has to swing the slender fruitstalk back and forth before the seeds are thrown out through pores near the top of the capsule. The inflated indehiscent pods of Colutea arborea, a steppe plant, represent balloons capable of limited air travel before they hit the ground and become windblown tumbleweeds.

Other forms of dispersal

Geocarpy is defined as either the production of fruits underground, as in the arum lilies (Stylochiton and Biarum), in which the flowers are already subterranean, or the active burying of fruits by the mother plant, as in the peanut (Arachis hypogaea). In the American hog peanut (Amphicarpa bracteata), pods of a special type are buried by the plant and are cached by squirrels later on. Kenilworth ivy (Cymbalaria), which normally grows on stone or brick walls, stashes its fruits away in crevices after strikingly extending the flower stalks. Not surprisingly, geocarpy is most often encountered in desert plants; however, it also occurs in violet species, in subterranean clover (Trifolium subterraneum), and in begonias (Begonia hypogaea) of the African rainforest.

Barochory, the dispersal of seeds and fruits by gravity alone, is demonstrated by the heavy fruits of horse chestnut.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2342 2024-10-14 00:12:50

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,283

Re: Miscellany

2342) Auditor

Gist

An auditor is a person or a firm appointed by a company to execute an audit. To act as an auditor, a person should be certified by the regulatory authority of accounting and auditing or possess certain specified qualifications.

An auditor is an independent professional who examines and verifies the accuracy of a company's financial records and reports. Auditors are responsible for ensuring that financial statements are accurate and in compliance with various laws and regulations.

The auditor's objectives are to obtain reasonable assurance about whether the financial statements as a whole are free from material misstatement, whether due to fraud or error, and to issue an auditor's report that includes the auditor's opinion.

Summary

An auditor is a person or a firm appointed by a company to execute an audit. To act as an auditor, a person should be certified by the regulatory authority of accounting and auditing or possess certain specified qualifications. Generally, to act as an external auditor of the company, a person should have a certificate of practice from the regulatory authority.

Types of auditors

* External auditor/ Statutory auditor is an independent firm engaged by the client subject to the audit, to express an opinion on whether the company's financial statements are free of material misstatements, whether due to fraud or error. For publicly traded companies, external auditors may also be required to express an opinion over the effectiveness of internal controls over financial reporting. External auditors may also be engaged to perform other agreed-upon procedures, related or unrelated to financial statements. Most importantly, external auditors, though engaged and paid by the company being audited, should be regarded as independent.
* Internal Auditors are employed by the organizations they audit. They work for government agencies (federal, state and local); for publicly traded companies; and for non-profit companies across all industries. The internationally recognised standard setting body for the profession is the Institute of Internal Auditors - IIA (www.theiia.org). The IIA has defined internal auditing as follows: "Internal auditing is an independent, objective assurance and consulting activity designed to add value and improve an organization's operations. It helps an organization accomplish its objectives by bringing a systematic, disciplined approach to evaluate and improve the effectiveness of risk management, control, and governance processes".

Details:

What Is an Auditor?

An auditor is a person authorized to review and verify the accuracy of financial records and ensure that companies comply with tax laws. They protect businesses from fraud, point out discrepancies in accounting methods and, on occasion, work on a consultancy basis, helping organizations to spot ways to boost operational efficiency. Auditors work in various capacities within different industries.

Key Takeaways

* The main duty of an auditor is to determine whether financial statements follow generally accepted accounting principles (GAAP).
* The Securities and Exchange Commission (SEC) requires all public companies to conduct regular reviews by external auditors, in compliance with official auditing procedures.
* There are several different types of auditors, including those hired to work in-house for companies and those who work for an outside audit firm.
* The final judgment of an audit report can be either qualified or unqualified.

Understanding an Auditor

Auditors assess financial operations and ensure that organizations are run efficiently. They are tasked with tracking cash flow from beginning to end and verifying that an organization’s funds are properly accounted for.

In the case of public companies, the main duty of an auditor is to determine whether financial statements follow generally accepted accounting principles (GAAP). To meet this requirement, auditors inspect accounting data, financial records, and operational aspects of a business and take detailed notes on each step of the process, known as an audit trail.

Once complete, the auditor’s findings are presented in a report that appears as a preface in financial statements. Separate, private reports may also be issued to company management and regulatory authorities as well.

The Securities and Exchange Commission (SEC) demands that the books of all public companies are regularly examined by external, independent auditors, in compliance with official auditing procedures.

Official procedures are established by the International Auditing and Assurance Standards Board (IAASB), a committee of the International Federation of Accountants (IFAC).

Unqualified Opinion vs. Qualified Opinion

Auditor reports are usually accompanied by an unqualified opinion. These statements confirm that the company's financial statements conform to GAAP, without providing judgment or an interpretation.

When an auditor is unable to give an unqualified opinion, they will issue a qualified opinion, a statement suggesting that the information provided is limited in scope and/or the company being audited has not maintained GAAP accounting principles.

Auditors assure potential investors that a company’s finances are in order and accurate, as well as provide a clear picture of a company’s worth to help investors make informed decisions.

Types of Auditors

* Internal auditors are hired by organizations to provide in-house, independent, and objective evaluations of financial and operational business activities, including corporate governance. They report their findings, including tips on how to better run the business, back to senior management.
* External auditors usually work in conjunction with government agencies. They are tasked with providing objective and public opinions about the organization's financial statements and whether they fairly and accurately represent the organization's financial position.
* Government auditors maintain and examine records of government agencies and of private businesses or individuals performing activities subject to government regulations or taxation. Auditors employed through the government ensure revenues are received and spent according to laws and regulations. They detect embezzlement and fraud, analyze agency accounting controls, and evaluate risk management.
* Forensic auditors specialize in crime and are used by law enforcement organizations.

Auditor Qualifications

External auditors working for public accounting firms require a Certified Public Accountant (CPA) license, a professional certification awarded by the American Institute of Certified Public Accountants. In addition to this certification, these auditors also need to obtain state CPA certification. Requirements vary, although most states do demand a CPA designation and two years of professional work experience in public accounting.

Qualifications for internal auditors are less rigorous. Internal auditors are encouraged to get CPA accreditation, although it is not always mandatory. Instead, a bachelor's degree in subjects such as finance and other business disciplines, together with appropriate experience and skills, is often acceptable.

Special Considerations

Auditors are not responsible for transactions that occur after the date of their reports. Moreover, they are not necessarily required to detect all instances of fraud or financial misrepresentation; that responsibility primarily lies with an organization's management team.

Audits are mainly designed to determine whether a company’s financial statements are “reasonably stated.” In other words, audits do not always cover enough ground to identify cases of fraud. In short, a clean audit offers no guarantee that an organization’s accounting is completely above board.


Additional Information

Auditing, examination of the records and reports of an enterprise by specialists other than those responsible for their preparation. Public auditing by independent, impartial accountants has acquired professional status and become increasingly common with the rise of large business units and the separation of ownership from managerial control. The public accountant performs tests to determine whether the management’s statements were prepared in accord with generally accepted accounting principles and fairly present the firm’s financial position and operating results; such independent evaluations of management reports are of interest to actual and prospective shareholders, bankers, suppliers, lessors, and government agencies.

Standardization of audit procedures

In English-speaking countries, public auditors are usually certified, and high standards are encouraged by professional societies. Most European and Commonwealth nations follow the example of the United Kingdom, where government-chartered organizations of accountants have developed their own admission standards. Other countries follow the pattern in the United States, where the states have set legal requirements for licensing. Most national governments have specific agencies or departments charged with the auditing of their public accounts—e.g., the General Accounting Office in the United States and the Court of Accounts (Cour des Comptes) in France.

Internal auditing, designed to evaluate the effectiveness of a company’s accounting system, is relatively new. Perhaps the most familiar type of auditing is the administrative audit, or pre-audit, in which individual vouchers, invoices, or other documents are investigated for accuracy and proper authorization before they are paid or entered in the books.

In addition, the assurance services of professionally certified accountants include all of the following: financial, compliance, and assurance audits; less-formal review of financial information; attestation about the reliability of another party’s written assertion; and other assurance services not strictly requiring formal audits (e.g., forward-looking information and quality assertions).

Origins of the audit

Historians of accounting have noted biblical references to common auditing practices, such as dual custody of assets and segregation of duties, among others. In addition, there is evidence that the government accounting system in China during the Zhou dynasty (1122–256 BCE) included audits of official departments. As early as the 5th and 4th centuries BCE, both the Romans and Greeks devised careful systems of checks and counterchecks to ensure the accuracy of their reports. In English-speaking countries, records from the Exchequers of England and Scotland (1130) have provided the earliest written references to auditing.

Despite these early developments, it was not until the late 19th century, with the innovation of the joint-stock company (whose managers were not necessarily the company’s owners) and the growth of railroads (with the challenge of transporting and accounting for significant volumes of goods), that auditing became a necessary part of modern business. Since the owners of the corporations were not the ones making the day-to-day business decisions, they demanded assurances that the managers were providing reliable and accurate information. The auditing profession developed to meet this growing need, and in 1892 Lawrence R. Dicksee published A Practical Manual for Auditors, the first textbook on auditing. Audit failures occur from time to time, however, drawing public attention to the practice of accounting and auditing while also leading to a refinement of the standards that guide the audit process.

Legal liabilities

Given the nature of the audit function, auditors increasingly find themselves subject to legal and other disciplinary sanctions. Unlike other professionals, however, their liability is not limited to the clients who hire them. Auditors are increasingly held liable to third parties, including investors and creditors, who rely on the audited financial statements in making investment decisions.

Objectives and standards

A company’s internal accountants are primarily responsible for preparing financial statements. In contrast, the purpose of the auditor is to express an opinion on the assertions of management found in financial statements. The auditor arrives at an objective opinion by systematically obtaining and evaluating evidence in conformity with professional auditing standards. Audits increase the reliability of financial information and consequently improve the efficiency of capital markets. Auditing standards require that all audits be conducted by persons having adequate technical training. This includes formal education, field experience, and continuing professional training.

In addition, auditors must exhibit an independence in mental attitude. This standard requires auditors to maintain a stance of neutrality toward their clients, and it further implies that auditors must be perceived by the public as being independent. In other words, it mandates independence in fact and in appearance. Thus, any auditor who holds a substantial financial interest in the activities of the client is not seen as independent even if, in fact, the auditor is unbiased.

The issue of auditor independence grew more difficult toward the end of the 20th century, especially as auditing firms began offering nonattestation functions (such as consulting services) to new and existing clients—particularly in the areas of taxation, information systems, and management. While there was no legal reason for preventing accounting firms from extending their business services, the possibilities for a conflict of interest made it increasingly necessary for auditors to indicate the nature of the work performed and their degree of responsibility.

Inaccurate financial reporting can be the result of deliberate misrepresentation, or it can be the result of unintended errors. One of the most egregious recent examples of a financial reporting failure occurred in 1995 in the Singapore office of Barings PLC, a 233-year-old British bank. In this case fraud resulted from a lack of sufficient internal controls at Barings over a five-year period, during which time Nicholas Leeson, a back-office clerk responsible for the accounting and settlement of transactions, was promoted to chief trader at Barings’s Singapore office. With his promotion, Leeson enjoyed an unusual degree of independence; he was in the exceptional position of being both chief trader and the employee responsible for settling (ensuring payment for) all his trades, a situation that allowed him to engage in rogue (unauthorized) trades that went undetected. As if to condone Leeson’s actions, his managers at Barings had given him access to funds that could cover margin calls (purchases made with borrowed money) for his clients. Although Leeson was losing huge sums of money for the bank, his dual responsibilities allowed him to conceal his losses and to continue trading. When the collapse of the Japanese stock market led to a $1 billion loss for Barings, Leeson’s actions were finally discovered. Barings never recovered from the loss, however, and it was acquired by Dutch insurance company ING Groep NV in 1995 (sold again in 2004). Interestingly, in this case internal auditors did warn management about the risk at the Singapore office months before the collapse, but the warnings went unheeded by top executives, and the audit report was ignored.

In 2001 the scandal surrounding the Barings collapse was dwarfed by discoveries of corruption in large American corporations. Enron Corp.—an energy trading firm that had hidden losses in off-the-books partnerships and engaged in predatory pricing schemes—declared bankruptcy in December 2001. Soon after Enron became the subject of a Securities and Exchange Commission (SEC) inquiry, Enron’s auditing firm, Arthur Andersen LLP, was also named in an SEC investigation; Arthur Andersen ultimately went out of business in 2002. In roughly the same period, the telecommunications firm WorldCom Inc. used misleading accounting techniques to hide expenses and overstate profits by $11 billion. Instances of accounting fraud uncovered in Europe in the early 21st century included Dutch grocery chain Royal Ahold NV, which in 2003 was found to have overstated profits by roughly $500 million.

In the United States, auditing standards require the auditor to state whether the financial reports are presented in accordance with generally accepted accounting principles (GAAP). Many other countries have adopted the standards supported by the International Accounting Standards Board (IASB) in London. The IASB standards, often less lenient than GAAP, have been increasingly seen as more-effective deterrents to large-scale auditing failures such as those that took place at Enron and WorldCom.

No auditing technique can be foolproof, and misstatements can exist even when auditors apply the appropriate techniques. The auditor’s opinion is, after all, based on samples of data. A management team that engages in organized fraud by concealing and falsifying documents may be able to mislead auditors and other users and go undetected. The best any auditor can provide, even under the most-favourable circumstances, is a reasonable assurance of the accuracy of the financial reports.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2343 2024-10-15 00:01:58

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,283

Re: Miscellany

2343) Flood

Gist

Flooding is a temporary overflow of water onto land that is normally dry. Floods can result from rain, snow, coastal storms, storm surges, overflows of rivers, and dam failure. Floods can be dangerous.

Floods are part of the natural variability in the flow rates and water levels of rivers. Flood management therefore plays an important role in protecting people, and incorporating flood risks into the management of water resources provides a rationale for shifting towards an integrated flood management approach.

In recent decades, flood damages have grown exponentially. This is a consequence of the increasing frequency of heavy precipitation, changes in upstream land use, and a continuously increasing concentration of population and assets in flood-prone areas, often exacerbated by inadequate flood planning and management practices.

Sea level rise has increased vulnerability to storm surge and related coastal flooding, as well.

Summary

Flood, high-water stage in which water overflows its natural or artificial banks onto normally dry land, such as a river inundating its floodplain. The effects of floods on human well-being range from unqualified blessings to catastrophes. The regular seasonal spring floods of the Nile River prior to construction of the Aswān High Dam, for example, were depended upon to provide moisture and soil enrichment for the fertile floodplains of its delta. The uncontrolled floods of the Yangtze River (Chang Jiang) and the Huang He in China, however, have repeatedly wrought disaster when these rivers habitually rechart their courses. Uncontrollable floods likely to cause considerable damage commonly result from excessive rainfall over brief periods of time, as, for example, the floods of Paris (1658 and 1910), of Warsaw (1861 and 1964), of Frankfurt am Main (1854 and 1930), and of Rome (1530 and 1557). Potentially disastrous floods may, however, also result from ice jams during the spring rise, as with the Danube River (1342, 1402, 1501, and 1830) and the Neva River (in Russia, 1824); from storm surges such as those of 1099 and 1953 that flooded the coasts of England, Belgium, and the Netherlands; and from tsunamis, the mountainous sea waves caused by earthquakes, as in Lisbon (1755) and Hawaii (Hilo, 1946).

Floods can be measured for height, peak discharge, area inundated, and volume of flow. These factors are important to judicious land use, construction of bridges and dams, and prediction and control of floods. Common measures of flood control include the improvement of channels, the construction of protective levees and storage reservoirs, and, indirectly, the implementation of programs of soil and forest conservation to retard and absorb runoff from storms.

The discharge volume of an individual stream is often highly variable from month to month and year to year. A particularly striking example of this variability is the flash flood, a sudden, unexpected torrent of muddy and turbulent water rushing down a canyon or gulch. It is uncommon, of relatively brief duration, and generally the result of summer thunderstorms or the rapid melting of snow and ice in mountains. A flash flood can take place in a single tributary while the rest of the drainage basin remains dry. The suddenness of its occurrence causes a flash flood to be extremely dangerous.

A flood of such magnitude that it might be expected to occur only once in 100 years is called a 100-year flood. The magnitudes of 100-, 500-, and 1,000-year floods are calculated by extrapolating existing records of stream flow, and the results are used in the design engineering of many water resources projects, including dams and reservoirs, and other structures that may be affected by catastrophic floods.
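
The "once in 100 years" phrasing describes long-run frequency, not a guarantee. Under the standard assumption (mine here, not stated in the article) that years are independent and a T-year flood is exceeded in any given year with probability 1/T, the chance of seeing at least one such flood over a planning horizon of n years follows directly; a short Python sketch of the arithmetic:

def prob_at_least_one_flood(T, n):
    """Probability of at least one T-year flood in n independent years."""
    annual_exceedance = 1.0 / T
    return 1.0 - (1.0 - annual_exceedance) ** n

# A 100-year flood has about a 26% chance of occurring at least once
# during a 30-year period, and about a 63% chance over 100 years.
print(f"{prob_at_least_one_flood(100, 30):.2f}")    # 0.26
print(f"{prob_at_least_one_flood(100, 100):.2f}")   # 0.63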

Details

There are few places on Earth where flooding is not a concern. Any area where rain falls is vulnerable to floods, though rain is not the only cause.

How floods form

A flood occurs when water inundates land that's normally dry, which can happen in a multitude of ways.

Excessive rain, a ruptured dam or levee, rapid melting of snow or ice, or even an unfortunately placed beaver dam can overwhelm a river, spreading over the adjacent land, called a floodplain. Coastal flooding occurs when a large storm or tsunami causes the sea to surge inland.

Most floods take hours or even days to develop, giving residents time to prepare or evacuate. Others generate quickly and with little warning. So-called flash floods can be extremely dangerous, instantly turning a babbling brook or even a dry wash into rushing rapids that sweep everything in their path downstream.

Climate change is increasing the risk of floods worldwide, particularly in coastal and low-lying areas, because of its role in extreme weather events and rising seas. The increase in temperatures that accompanies global warming can contribute to hurricanes that move more slowly and drop more rain, funneling moisture into atmospheric rivers like the ones that led to heavy rains and flooding in California in early 2019.

Meanwhile, melting glaciers and other factors are contributing to a rise in sea levels that has created long-term, chronic flooding risks for places ranging from Venice, Italy to the Marshall Islands. More than 670 U.S. communities will face repeated flooding by the end of this century, according to a 2017 analysis; it's happening in more than 90 coastal communities already.

Impacts of flooding

Floods cause more than $40 billion in damage worldwide annually, according to the Organization for Economic Cooperation and Development. In the U.S., losses average close to $8 billion a year. Death tolls have increased in recent decades to more than 100 people a year. In China's Yellow River Valley some of the world's worst floods have killed millions of people.

When floodwaters recede, affected areas are often blanketed in silt and mud. The water and landscape can be contaminated with hazardous materials such as sharp debris, pesticides, fuel, and untreated sewage. Potentially dangerous mold blooms can quickly overwhelm water-soaked structures.

Residents of flooded areas can be left without power and clean drinking water, leading to outbreaks of deadly waterborne diseases like typhoid, hepatitis A, and cholera.

Flood prevention

Flooding, particularly in river floodplains, is as natural as rain and has been occurring for millions of years. Famously fertile floodplains such as the Mississippi Valley, the Nile River Valley in Egypt, and the Tigris-Euphrates in the Middle East have supported agriculture for millennia because annual flooding has left tons of nutrient-rich silt deposits behind. Humans have increased the risk of death and damage by increasingly building homes, businesses, and infrastructure in vulnerable floodplains.

To try to mitigate the risk, many governments mandate that residents of flood-prone areas purchase flood insurance and set construction requirements aimed at making buildings more flood resistant—with varying degrees of success.

Massive efforts to mitigate and redirect inevitable floods have resulted in some of the most ambitious engineering efforts ever seen, including New Orleans's extensive levee system and massive dikes and dams in the Netherlands. Such efforts continue today as climate change continues to put pressure on vulnerable areas. Some flood-prone cities in the U.S. are even going beyond federal estimates and setting higher local standards for protection.

Additional Information

A flood is an overflow of water (or rarely other fluids) that submerges land that is usually dry. In the sense of "flowing water", the word may also be applied to the inflow of the tide. Floods are of significant concern in agriculture, civil engineering and public health. Human changes to the environment often increase the intensity and frequency of flooding; examples include land-use changes such as deforestation and the removal of wetlands, changes in waterway course, and flood controls such as levees. Global environmental issues also influence the causes of floods, notably climate change, which intensifies the water cycle and raises sea levels. Climate change, for example, makes extreme weather events more frequent and stronger, leading to more intense floods and increased flood risk.

Natural types of floods include river flooding, groundwater flooding, coastal flooding, and urban flooding, the last sometimes known as flash flooding. Tidal flooding may include elements of both river and coastal flooding processes in estuary areas. There is also the intentional flooding of land that would otherwise remain dry. This may take place for agricultural, military, or river-management purposes. For example, agricultural flooding may occur in preparing paddy fields for the growing of semi-aquatic rice in many countries.

Flooding may occur as an overflow of water from water bodies, such as a river, lake, sea or ocean. In these cases, the water overtops or breaks levees, resulting in some of that water escaping its usual boundaries. Flooding may also occur due to an accumulation of rainwater on saturated ground. This is called an areal flood. The size of a lake or other body of water naturally varies with seasonal changes in precipitation and snow melt. Those changes in size are however not considered a flood unless they flood property or drown domestic animals.

Floods can also occur in rivers when the flow rate exceeds the capacity of the river channel, particularly at bends or meanders in the waterway. Floods often cause damage to homes and businesses if these buildings are in the natural flood plains of rivers. People could avoid riverine flood damage by moving away from rivers. However, people in many countries have traditionally lived and worked by rivers because the land is usually flat and fertile. Also, the rivers provide easy travel and access to commerce and industry.

Flooding can damage property and also lead to secondary impacts. These include, in the short term, an increased spread of waterborne and vector-borne diseases, for example those transmitted by mosquitoes. Flooding can also lead to long-term displacement of residents. Floods are an area of study of hydrology and hydraulic engineering.

A large amount of the world's population lives in close proximity to major coastlines, while many major cities and agricultural areas are located near floodplains. There is significant risk for increased coastal and fluvial flooding due to changing climatic conditions.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2344 2024-10-16 00:11:52

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,283

Re: Miscellany

2344) Silica

Gist

Silica is a substance composed of silicon dioxide, SiO2, and is also known as silicic acid anhydride. Silica is the main component of quartz and silica sand, but it is also found in large amounts in biominerals that make the skeletal structure of ferns, grasses, and diatoms.

Summary

Silica is a substance composed of silicon dioxide, SiO2, and is also known as silicic acid anhydride. Silica is the main component of quartz and silica sand, but it is also found in large amounts in biominerals that make up the skeletal structure of ferns, grasses, and diatoms. However, most of the silica used in cosmetics is produced with wet methods such as dehydrating sodium silicate, hydrolysis of alkoxysilane, or acidic reaction of calcium silicate, or with dry methods such as high-temperature hydrolysis of silicon halide or electric-arc methods with quartz. Siloxane groups (≡Si–O–Si≡) and four types of silanol groups (≡Si–OH) are found on the surface of silica, and the physical properties of the silica surface change depending on the content of these groups. Additionally, surface-modified silica is made by reacting the silanol groups on the silica surface with alkoxysilane, silazane, and/or siloxane, and is widely used to control the surface's hydrophilic (or hydrophobic) character. Since they make cosmetics smoother and more transparent, silicas are used as extender pigments for makeup cosmetics. They also absorb or thicken water and/or oils and are used in skin care cosmetics as well.

Silica is used in many commercial products, such as bricks, glass and ceramics, plaster, granite, concrete, cleansers, skin care products, and talcum powder. Some forms of amorphous silica are used as food additives, food wrappings, toothpaste and cosmetics.

Details

Silicon dioxide, also known as silica, is an oxide of silicon with the chemical formula SiO2, commonly found in nature as quartz. In many parts of the world, silica is the major constituent of sand. Silica is one of the most complex and abundant families of materials, existing as a compound of several minerals and as a synthetic product. Examples include fused quartz, fumed silica, opal, and aerogels. It is used in structural materials, microelectronics, and as components in the food and pharmaceutical industries. All forms are white or colorless, although impure samples can be colored.

Silicon dioxide is a common fundamental constituent of glass.

Structure

In the majority of silicon dioxides, the silicon atom shows tetrahedral coordination, with four oxygen atoms surrounding a central Si atom. Thus, SiO2 forms three-dimensional network solids in which each silicon atom is covalently bonded in a tetrahedral manner to four oxygen atoms. In contrast, CO2 is a linear molecule. The starkly different structures of the dioxides of carbon and silicon are a manifestation of the double bond rule.

Based on the crystal structural differences, silicon dioxide can be divided into two categories: crystalline and non-crystalline (amorphous). In crystalline form, this substance can be found naturally occurring as quartz, tridymite (high-temperature form), cristobalite (high-temperature form), stishovite (high-pressure form), and coesite (high-pressure form). On the other hand, amorphous silica can be found in nature as opal and diatomaceous earth. Quartz glass is a form of intermediate state between these structures.

All of these distinct crystalline forms always have the same local structure around Si and O. In α-quartz the Si–O bond length is 161 pm, whereas in α-tridymite it is in the range 154–171 pm. The Si–O–Si angle also varies between a low value of 140° in α-tridymite, up to 180° in β-tridymite. In α-quartz, the Si–O–Si angle is 144°.

Polymorphism

Alpha quartz is the most stable form of solid SiO2 at room temperature. The high-temperature minerals, cristobalite and tridymite, have both lower densities and indices of refraction than quartz. The transformation from α-quartz to β-quartz takes place abruptly at 573 °C. Since the transformation is accompanied by a significant change in volume, it can easily induce fracturing of ceramics or rocks passing through this temperature limit. The high-pressure minerals, seifertite, stishovite, and coesite, though, have higher densities and indices of refraction than quartz. Stishovite has a rutile-like structure where silicon is 6-coordinate. The density of stishovite is 4.287 g/cm³, which compares to α-quartz, the densest of the low-pressure forms, which has a density of 2.648 g/cm³. The difference in density can be ascribed to the increase in coordination, as the six shortest Si–O bond lengths in stishovite (four Si–O bond lengths of 176 pm and two others of 181 pm) are greater than the Si–O bond length (161 pm) in α-quartz. The change in the coordination increases the ionicity of the Si–O bond.
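
A one-line check in Python of the density figures quoted above, comparing the 6-coordinate high-pressure form (stishovite) with the densest low-pressure form (α-quartz); the percentage is simply derived from the article's numbers:

rho_stishovite = 4.287     # g/cm^3, from the figures above
rho_alpha_quartz = 2.648   # g/cm^3, from the figures above

ratio = rho_stishovite / rho_alpha_quartz
print(f"stishovite is {ratio:.2f}x as dense ({(ratio - 1) * 100:.0f}% denser)")
# stishovite is 1.62x as dense (62% denser)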

Faujasite silica, another polymorph, is obtained by the dealumination of a low-sodium, ultra-stable Y zeolite with combined acid and thermal treatment. The resulting product contains over 99% silica and has high crystallinity and a high specific surface area (over 800 m²/g). Faujasite silica has very high thermal and acid stability. For example, it maintains a high degree of long-range molecular order, or crystallinity, even after boiling in concentrated hydrochloric acid.

Molten SiO2

Molten silica exhibits several peculiar physical characteristics that are similar to those observed in liquid water: negative thermal expansion, a density maximum at temperatures near 5000 °C, and a heat capacity minimum. Its density decreases from 2.08 g/{cm}^3 at 1950 °C to 2.03 g/{cm}^3 at 2200 °C.

Molecular SiO2

Molecular SiO2 has a linear structure like CO2. It has been produced by combining silicon monoxide (SiO) with oxygen in an argon matrix. Dimeric silicon dioxide, (SiO2)2, has been obtained by reacting O2 with matrix-isolated dimeric silicon monoxide, (Si2O2). In dimeric silicon dioxide there are two oxygen atoms bridging between the silicon atoms, with an Si–O–Si angle of 94° and a bond length of 164.6 pm; the terminal Si–O bond length is 150.2 pm. In the monomeric molecule the Si–O bond length is 148.3 pm, which compares with the length of 161 pm in α-quartz. The bond energy is estimated at 621.7 kJ/mol.

Additional Information

Silica, compound of the two most abundant elements in Earth’s crust, silicon and oxygen, SiO2. The mass of Earth’s crust is 59 percent silica, the main constituent of more than 95 percent of the known rocks. Silica has three main crystalline varieties: quartz (by far the most abundant), tridymite, and cristobalite. Other varieties include coesite, keatite, and lechatelierite. Silica sand is used in buildings and roads in the form of portland cement, concrete, and mortar, as well as sandstone. Silica also is used in grinding and polishing glass and stone; in foundry molds; in the manufacture of glass, ceramics, silicon carbide, ferrosilicon, and silicones; as a refractory material; and as gemstones. Silica gel is often used as a desiccant to remove moisture.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2345 2024-10-17 00:06:19

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,283

Re: Miscellany

2345) Advertising Agency

Gist

An advertising agency is an enterprise that helps businesses reach their target audience, sell offerings and increase sales revenue. They do this by creating, implementing and managing advertising campaigns that promote the business's products and services.

Advertising agencies offer services such as developing advertising strategies, creating creative content, planning and buying media, managing social media, optimizing search engines, and analyzing performance.

Advertising agencies create and manage advertisements for clients on various digital channels and they have several departments that support that work. They usually have an accounts team that does sales and customer service and an advertising or creative team that does the work for the customer.

Details

An advertising agency, often referred to as a creative agency or an ad agency, is a business dedicated to creating, planning, and handling advertising and sometimes other forms of promotion and marketing for its clients. An ad agency is generally independent of the client; it may be an internal department or agency that provides an outside point of view to the effort of selling the client's products or services, or an outside firm. An agency can also handle overall marketing and branding strategies and promotions for its clients, which may include sales as well.

Typical ad agency clients include businesses and corporations, non-profit organizations and private agencies. Agencies may be hired to produce television advertisements, radio advertisements, online advertising, out-of-home advertising, mobile marketing, and AR advertising, as part of an advertising campaign.

History

The first acknowledged advertising agency was William Taylor in 1786. Another early agency, started by James 'Jem' White in 1800 at Fleet Street, London, eventually evolved into White Bull Holmes, a recruitment advertising agency that went out of business in the late 1980s. In 1812 George Reynell, an officer at the London Gazette, set up another of the early advertising agencies, also in London. This remained a family business until 1993, as 'Reynell & Son', and is now part of the TMP Worldwide agency (UK and Ireland) under the brand TMP Reynell. Another early agency that traded until recently was founded by Charles Barker; the firm he established traded as 'Barkers' until 2009, when it went into administration.

Volney B. Palmer opened the first American advertising agency, in Philadelphia in 1850. This agency placed ads produced by its clients in various newspapers.

In 1856, Mathew Brady created the first modern advertisement when he placed an ad in the New York Herald paper offering to produce "photographs, ambrotypes, and daguerreotypes." His ads were the first whose typeface and fonts were distinct from the text of the publication and from those of other advertisements. At that time all newspaper ads were set in agate and only agate. His use of larger, distinctive fonts caused a sensation. Later that same year Robert E. Bonner ran the first full-page ad in a newspaper.

In 1864, William James Carlton began selling advertising space in religious magazines. In 1869, Francis Ayer, at the age of 20, created the first full-service advertising agency in Philadelphia, called N.W. Ayer & Son. It was the oldest advertising agency in America until it dissolved in 2002. James Walter Thompson joined Carlton's firm in 1868. Thompson rapidly became their best salesman, purchasing the company in 1877 and renaming it the James Walter Thompson Company. Realizing that he could sell more space if the company provided the service of developing content for advertisers, Thompson hired writers and artists to form the first known Creative Department in an advertising agency. He is credited as the "father of modern magazine advertising" in the US. Advertising Age commemorated the first 100 years of the agency in 1964, noting that its "history and expansion overseas seems peculiarly to match the whole history of modern advertising."

Global advertising agency

Globalization of advertising began in the early days of the twentieth century. American advertising agencies started opening overseas offices before the two World Wars and accelerated their globalization throughout the latter part of the twentieth century.

McCann, an agency established in New York City in 1902, opened its first European offices by 1927. Offices in South America followed in 1935, and in Australia in 1959.

Companies such as J. Walter Thompson adopted a strategy to expand in order to provide advertising services wherever clients operated.

In the 1960s and 1970s, English agencies began to realize the overseas opportunities associated with globalization; expanding overseas opens up wider markets.

In the early 21st century, management consulting firms such as PwC Digital and Deloitte Digital began competing with ad agencies by emphasizing data analytics. As of 2017, Accenture Interactive was the world's sixth-largest ad agency, behind WPP, Omnicom, Publicis, Interpublic, and Dentsu. In 2019, it purchased David Droga's Droga5 agency, the first major consultant acquisition of an ad agency.

Client relationships

Studies show that successful advertising agencies tend to have a shared sense of purpose with their clients through collaboration. This includes a common set of client objectives where agencies feel a shared sense of ownership of the strategic process. Successful advertisements start with clients building a good relationship with their agencies and working together to figure out what the objectives are. Clients must trust the agencies to do their jobs correctly with the resources they have provided. Breakdowns in relationships were more likely to occur when agencies felt undermined or subjugated, or felt they did not have equal status. Traditionally, advertising agencies tend to take the lead on projects, but results are best when the relationship is more collaborative.

Stronger collaboration happens where personal chemistry has been established between the parties: finding shared likes and dislikes, points of view, and even hobbies and passions. Personal chemistry builds with the length of the client relationship, the frequency of meetings, and the degree of mutual respect between the parties. This was one trait that advertising agencies were perceived not always to have; it was suggested that on occasion media planners and researchers were more closely involved in a project because of their personal relationships with their clients. Strategic planning works best when both parties are involved, because of the bond built by understanding each other's views and mindset.

Involved advertising account planners are seen to contribute towards successful agency-client collaboration. Planners of advertising agencies tend to be capable of creating a very powerful, trusting relationship with their clients because they are seen as having intellectual prowess and seniority, and as bringing empathy to the creative process.

Agencies

Advertising agencies are so called because they originally acted as agents for their principals, the media. They were then, and are now, paid by the media to sell advertising space to clients. Originally, in the 18th century and the first half of the 19th, advertising agencies made all of their income from commissions paid by the media for selling space to the client.

Although it is still the case that the majority of their income comes from the media, in the middle of the 19th century, agencies began to offer additional services which they sold directly to the client. Such services can include writing the text of the advertisement.

Creativity

The use of creativity by agencies is "unexpected" because so much advertising today is expected. Unexpected creativity captures the attention of audiences, making the message more likely to get through. Many advertisements have surprised audiences because it was not normal for them to see that kind of content in an advertisement of that nature. The best use of creativity is when the agencies make consumers think about the product or brand; this kind of creativity is distinctive communication that breaks through the clutter.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2346 2024-10-18 00:03:39

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,283

Re: Miscellany

2346) Waxing and Waning Phase

Gist

The eight lunar phases are, in order: new moon, waxing crescent, first quarter, waxing gibbous, full moon, waning gibbous, third quarter and waning crescent. The cycle repeats once a month (every 29.5 days).

A waxing moon is any phase of the moon during the lunar cycle between the new moon and the full moon. A moon that is waxing is one that is getting larger each night. The lunar cycle is a period of about 29 days as the moon's shape seems to change from our vantage point on Earth.

As the moon starts to move around the earth, the illuminated side becomes more and more visible. From our vantage point it seems to grow and get bigger. This period of getting bigger is called waxing. As the moon waxes from a new moon to the first quarter (when half of the moon is visible), it has a crescent shape.

Summary

If you have looked into the night sky, you may have noticed the Moon appears to change shape each night. Some nights, the Moon might look like a narrow crescent. Other nights, the Moon might look like a bright circle. And on other nights, you might not be able to see the Moon at all. The different shapes of the Moon that we see at different times of the month are called the Moon's phases.

Why does this happen? The shape of the Moon isn’t changing throughout the month. However, our view of the Moon does change.

The Moon does not produce its own light. There is only one source of light in our solar system, and that is the Sun. Without the Sun, our Moon would be completely dark. What you may have heard referred to as “moonlight” is actually just sunlight reflecting off of the Moon’s surface.

The Sun’s light comes from one direction, and it always illuminates, or lights up, one half of the Moon – the side of the Moon that is facing the Sun. The other side of the Moon is dark.

On Earth, our view of the illuminated part of the Moon changes each night, depending on where the Moon is in its orbit, or path, around Earth. When we have a full view of the completely illuminated side of the Moon, that phase is known as a full moon.

But following the night of each full moon, as the Moon orbits around Earth, we start to see less of the Moon lit by the Sun. Eventually, the Moon reaches a point in its orbit when we don’t see any of the Moon illuminated. At that point, the far side of the Moon is facing the Sun. This phase is called a new moon. During the new moon, the side facing Earth is dark.

The eight Moon phases:

* New: We cannot see the Moon when it is a new moon.

* Waxing Crescent: In the Northern Hemisphere, we see the waxing crescent phase as a thin crescent of light on the right.

* First Quarter: We see the first quarter phase as a half moon.

* Waxing Gibbous: The waxing gibbous phase is between a half moon and full moon. Waxing means it is getting bigger.

* Full: We can see the Moon completely illuminated during full moons.

* Waning Gibbous: The waning gibbous phase is between a full moon and a half moon. Waning means it is getting smaller.

* Third Quarter: We see the third quarter moon as a half moon, too. It is the opposite half as illuminated in the first quarter moon.

* Waning Crescent: In the Northern Hemisphere, we see the waning crescent phase as a thin crescent of light on the left.

The Moon displays these eight phases one after the other as it moves through its cycle each month. It takes about 27.3 days for the Moon to orbit Earth. However, because of how sunlight hits the Moon, it takes about 29.5 days to go from one new moon to the next new moon.
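As a rough illustration of these numbers, here is a minimal Python sketch (not an ephemeris): it assumes the Sun–Moon elongation grows uniformly over a 29.53-day synodic month, so the lit fraction of the disc is (1 − cos θ)/2; the function names are ours, chosen for illustration.

import math

SYNODIC_MONTH = 29.53  # average days from one new moon to the next

def illuminated_fraction(days_since_new_moon):
    # Lit fraction of the disc on a uniform-motion model: (1 - cos(theta)) / 2.
    theta = 2 * math.pi * (days_since_new_moon % SYNODIC_MONTH) / SYNODIC_MONTH
    return (1 - math.cos(theta)) / 2

def phase_name(days_since_new_moon):
    # Divide the cycle into the eight conventional phases, new moon at 0 days.
    cycle = (days_since_new_moon % SYNODIC_MONTH) / SYNODIC_MONTH
    names = ["new moon", "waxing crescent", "first quarter", "waxing gibbous",
             "full moon", "waning gibbous", "third quarter", "waning crescent"]
    return names[int(cycle * 8 + 0.5) % 8]

# About a week after new moon the model gives a half-lit first-quarter moon.
print(phase_name(7.4), f"{illuminated_fraction(7.4):.0%}")  # first quarter 50%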

Details

A lunar phase or Moon phase is the apparent shape of the Moon's directly sunlit portion as viewed from the Earth (because the Moon is tidally locked with the Earth, the same hemisphere is always facing the Earth). In common usage, the four major phases are the new moon, the first quarter, the full moon and the last quarter; the four minor phases are waxing crescent, waxing gibbous, waning gibbous, and waning crescent. A lunar month is the time between successive recurrences of the same phase: due to the eccentricity of the Moon's orbit, this duration is not perfectly constant but averages about 29.5 days.

The appearance of the Moon (its phase) gradually changes over a lunar month as the relative orbital positions of the Moon around Earth, and Earth around the Sun, shift. The visible side of the Moon is sunlit to varying extents, depending on the position of the Moon in its orbit, with the sunlit portion varying from 0% (at new moon) to nearly 100% (at full moon).

Phases of the Moon

The phases of the Moon, as viewed looking southward from the Northern Hemisphere; each phase would appear rotated 180° if seen looking northward from the Southern Hemisphere.

There are four principal (primary, or major) lunar phases: the new moon, first quarter, full moon, and last quarter (also known as third or final quarter), when the Moon's ecliptic longitude differs from the Sun's (as viewed from the center of the Earth) by 0°, 90°, 180°, and 270° respectively. Each of these phases appears at slightly different times at different locations on Earth, and tabulated times are therefore always geocentric (calculated for the Earth's center).

Between the principal phases are intermediate phases, during which the apparent shape of the illuminated Moon is either crescent or gibbous. On average, the intermediate phases last one-quarter of a synodic month, or 7.38 days.

The term waxing is used for an intermediate phase when the Moon's apparent shape is thickening, from new to full moon; and waning when the shape is thinning. The duration from full moon to new moon (or new moon to full moon) varies from approximately 13 days 22½ hours to about 15 days 14½ hours.

Due to lunar motion relative to the meridian and the ecliptic, in Earth's northern hemisphere:

* A new moon appears highest at the summer solstice and lowest at the winter solstice.
* A first-quarter moon appears highest at the spring equinox and lowest at the autumn equinox.
* A full moon appears highest at the winter solstice and lowest at the summer solstice.
* A last-quarter moon appears highest at the autumn equinox and lowest at the spring equinox.

Non-Western cultures may use a different number of lunar phases; for example, traditional Hawaiian culture has a total of 30 phases (one per day).

Lunar libration

As seen from Earth, the Moon's eccentric orbit makes it both slightly change its apparent size and be seen from slightly different angles. The effect is subtle to the naked eye from night to night, yet quite noticeable in time-lapse photography.

Lunar libration causes part of the back side of the Moon to be visible to a terrestrial observer some of the time. Because of this, around 59% of the Moon's surface has been imaged from the ground.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2347 2024-10-19 00:02:14

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,283

Re: Miscellany

2347) Visual Magnitude

Gist

Visual Magnitude is the brightness of a celestial body determined by eye estimation with or without optical aid or by other instrumentation equivalent to the eye in spectral sensitivity.

A magnitude is a unit of measurement that is used to specify the size or intensity of an event: the amount by which one quantity exceeds another.

Under a relatively clear sky, the limiting visibility is about 6th magnitude. The limiting visibility reaches 7th magnitude, however, for faint stars visible from dark rural areas located 200 km (120 mi) from major cities. There is even variation within metropolitan areas.

For example, the Sun has an apparent magnitude of -26.74, as measured in the visual filter, but if we were able to move the Sun to a location 10 parsecs from Earth, then we would see it as a star with an apparent magnitude of 4.83. So the absolute magnitude of the Sun is 4.83.

Apparent magnitude is the brightness of an object as it appears to an observer on Earth. The Sun's apparent magnitude is -26.7, that of the full Moon is about -11, and that of the bright star Sirius, -1.5. The faintest objects visible through the Hubble Space Telescope are of (approximately) apparent magnitude 30.

Summary

magnitude, in astronomy, measure of the brightness of a star or other celestial body. The brighter the object, the lower the number assigned as a magnitude. In ancient times, stars were ranked in six magnitude classes, the first magnitude class containing the brightest stars. In 1856 the English astronomer Norman Robert Pogson proposed the system presently in use. One magnitude is defined as a ratio of brightness of 2.512 times; e.g., a star of magnitude 5.0 is 2.512 times as bright as one of magnitude 6.0. Thus, a difference of five magnitudes corresponds to a brightness ratio of 100 to 1. After standardization and assignment of the zero point, the brightest class was found to contain too great a range of luminosities, and negative magnitudes were introduced to spread the range.
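Pogson's definition turns a magnitude difference into a brightness ratio with a single formula, ratio = 100^(Δm/5). A minimal Python sketch (the function name is ours, for illustration):

def brightness_ratio(mag_a, mag_b):
    # How many times brighter a mag_a object is than a mag_b object.
    # Each magnitude step is a factor of 100 ** (1/5), about 2.512.
    return 100 ** ((mag_b - mag_a) / 5)

print(brightness_ratio(5.0, 6.0))  # about 2.512
print(brightness_ratio(1.0, 6.0))  # 100.0 -- five magnitudes is a factor of 100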

Apparent magnitude is the brightness of an object as it appears to an observer on Earth. The Sun’s apparent magnitude is −26.7, that of the full Moon is about −11, and that of the bright star Sirius, −1.5. The faintest objects visible through the Hubble Space Telescope are of (approximately) apparent magnitude 30. Absolute magnitude is the brightness an object would exhibit if viewed from a distance of 10 parsecs (32.6 light-years). The Sun’s absolute magnitude is 4.8.

Bolometric magnitude is that measured by including a star’s entire radiation, not just the portion visible as light. Monochromatic magnitude is that measured only in some very narrow segment of the spectrum. Narrow-band magnitudes are based on slightly wider segments of the spectrum and broad-band magnitudes on areas wider still. Visual magnitude may be called yellow magnitude because the eye is most sensitive to light of that colour.

Details

Apparent magnitude (m) is a measure of the brightness of a star, astronomical object or other celestial objects like artificial satellites. Its value depends on its intrinsic luminosity, its distance, and any extinction of the object's light caused by interstellar dust along the line of sight to the observer.

Unless stated otherwise, the word magnitude in astronomy usually refers to a celestial object's apparent magnitude. The magnitude scale likely dates to before the ancient Roman astronomer Claudius Ptolemy, whose star catalog popularized the system by listing stars from 1st magnitude (brightest) to 6th magnitude (dimmest). The modern scale was mathematically defined to closely match this historical system by Norman Pogson in 1856.

The scale is reverse logarithmic: the brighter an object is, the lower its magnitude number. A difference of 1.0 in magnitude corresponds to a brightness ratio of about 2.512. For example, a magnitude 2.0 star is 2.512 times as bright as a magnitude 3.0 star, 6.31 times as bright as a magnitude 4.0 star, and 100 times as bright as a magnitude 7.0 star.

The brightest astronomical objects have negative apparent magnitudes: for example, Venus at -4.2 or Sirius at -1.46. The faintest stars visible with the naked eye on the darkest night have apparent magnitudes of about +6.5, though this varies depending on a person's eyesight and with altitude and atmospheric conditions. The apparent magnitudes of known objects range from the Sun at -26.832 to objects in deep Hubble Space Telescope images of magnitude +31.5.

The measurement of apparent magnitude is called photometry. Photometric measurements are made in the ultraviolet, visible, or infrared wavelength bands using standard passband filters belonging to photometric systems such as the UBV system or the Strömgren uvbyβ system. Measurement in the V-band may be referred to as the apparent visual magnitude.

Absolute magnitude is a related quantity which measures the luminosity that a celestial object emits, rather than its apparent brightness when observed, and is expressed on the same reverse logarithmic scale. Absolute magnitude is defined as the apparent magnitude that a star or object would have if it were observed from a distance of 10 parsecs (33 light-years; 3.1×{10}^{14} kilometres; 1.9×{10}^{14} miles). Therefore, it is of greater use in stellar astrophysics since it refers to a property of a star regardless of how close it is to Earth. But in observational astronomy and popular stargazing, references to "magnitude" are understood to mean apparent magnitude.
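The definition above is equivalent to the distance-modulus relation M = m - 5 log10(d / 10 pc). A minimal Python sketch using the Sun's visual magnitude quoted earlier in this entry (the names are ours, for illustration):

import math

PARSEC_PER_AU = 1 / 206265  # one astronomical unit expressed in parsecs

def absolute_magnitude(apparent_mag, distance_pc):
    # Magnitude the object would have if moved to the standard 10-parsec distance.
    return apparent_mag - 5 * math.log10(distance_pc / 10)

# The Sun: apparent visual magnitude -26.74 at a distance of 1 AU.
print(round(absolute_magnitude(-26.74, PARSEC_PER_AU), 2))  # about 4.83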

Amateur astronomers commonly express the darkness of the sky in terms of limiting magnitude, i.e. the apparent magnitude of the faintest star they can see with the naked eye. This can be useful as a way of monitoring the spread of light pollution.

Apparent magnitude is technically a measure of illuminance, which can also be measured in photometric units such as lux.

Additional Information

Magnitude, when it comes to astronomy, refers to the brightness of an object. However, unlike earthquakes, for example, where a higher number means bigger, in astronomy the LOWER the number, the brighter the object. The brightest objects in the sky actually have negative numerical values!

Long before modern astronomy took over, the ancient Greeks noticed the different levels of brightness in the stars, and measured them by their size as seen with the naked eye.

Later on, when astronomers figured out that the stars' brightness had to do with their distances, they classified them in an order of "classes." The brightest stars would be the "first class," the next level would be the "second class," and so on until the dimmest stars seen were labeled "sixth class."

With the help of mathematics using logarithmic scales, they were able to work out that first magnitude stars are 100 times brighter than sixth magnitude stars. This scale also made it possible to see that some stars are brighter than first magnitude, such as Sirius at magnitude -1.46 and Vega at magnitude 0.


There are two types of Magnitude

Apparent Magnitude – how bright the object appears from your location.

The Sun has an apparent magnitude of -27 as seen from Earth. That number increases (the brightness decreases) the further you get from the Sun. From the Kuiper Belt, the Sun's apparent magnitude is anywhere between -18 and -16. From Alpha Centauri, the Sun is as bright as a magnitude 0.5 star!

Absolute Magnitude – has to do with the level of luminosity or reflected light of an object. It is essentially how bright the object would be if it were 10 parsecs away (32.6 light years).

Pollux in Gemini is a star that’s ~10 pc away, so we can use that as an example. 

The Sun’s absolute magnitude would be 4.83. That means from Pollux at 33.72 light years away, our Sun appears as a dim star barely visible to the naked eye to an exo-astronomer living on an exoplanet orbiting that star – and would be washed out by light pollution from a city on that exoplanet.

As soon as one travels beyond 50 light years away, our Sun would not only need a telescope to bring it into view, but it would look just like any dim background star and thus be hard to identify under the best conditions!

Quasar 3C 273 is 2.6 billion light years away and appears as a magnitude 12.9 point of light. However, it is actually so luminous that if this quasar were the same distance away as Pollux, it would appear as bright as our Sun!

Even if this quasar replaced Sagittarius A* at the center of our galaxy, 23,000 light years away, it would STILL be so bright that it would feel like having a second but dimmer "sun" in the sky. Oh… and if that were to happen, life on Earth wouldn't be able to survive the X-ray and gamma radiation!



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2348 2024-10-20 00:11:33

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,283

Re: Miscellany

2348) Herbivore

Gist

Herbivores are animals whose primary food source is plant-based. Examples of herbivores include vertebrates like deer, koalas, and some bird species, as well as invertebrates such as crickets and caterpillars.

An herbivore is an organism that mostly feeds on plants. Herbivores range in size from tiny insects such as aphids to large, lumbering elephants. Herbivores are a major part of the food web, a description of which organisms eat other organisms in the wild.

Details

A herbivore is an animal anatomically and physiologically evolved to feed on plants, especially upon vascular tissues such as foliage, fruits or seeds, as the main component of its diet. The term also more broadly encompasses animals that eat non-vascular autotrophs such as mosses, algae and lichens, but does not include those feeding on decomposed plant matter (i.e. detritivores) or macrofungi (i.e. fungivores).

As a result of their plant-based diet, herbivorous animals typically have mouth structures (jaws or mouthparts) well adapted to mechanically break down plant materials, and their digestive systems have special enzymes (e.g. amylase and cellulase) to digest polysaccharides. Grazing herbivores such as horses and cattle have wide, flat-crowned teeth that are better adapted for grinding grass, tree bark and other tougher lignin-containing materials, and many of them evolved rumination or cecotropic behaviors to better extract nutrients from plants. A large percentage of herbivores also have mutualistic gut flora made up of bacteria and protozoans that help to degrade the cellulose in plants, whose heavily cross-linked polymer structure makes it far more difficult to digest than the protein- and fat-rich animal tissues that carnivores eat.

Etymology

Herbivore is the anglicized form of a modern Latin coinage, herbivora, cited in Charles Lyell's 1830 Principles of Geology. Richard Owen employed the anglicized term in an 1854 work on fossil teeth and skeletons. Herbivora is derived from Latin herba 'small plant, herb' and vora, from vorare 'to eat, devour'.

Definition and related terms

Herbivory is a form of consumption in which an organism principally eats autotrophs such as plants, algae and photosynthesizing bacteria. More generally, organisms that feed on autotrophs in general are known as primary consumers. Herbivory is usually limited to animals that eat plants. Insect herbivory can cause a variety of physical and metabolic alterations in the way the host plant interacts with itself and other surrounding biotic factors. Fungi, bacteria, and protists that feed on living plants are usually termed plant pathogens (plant diseases), while fungi and microbes that feed on dead plants are described as saprotrophs. Flowering plants that obtain nutrition from other living plants are usually termed parasitic plants. There is, however, no single exclusive and definitive ecological classification of consumption patterns; each textbook has its own variations on the theme.

Evolution of herbivory

The understanding of herbivory in geological time comes from three sources: fossilized plants, which may preserve evidence of defence (such as spines), or herbivory-related damage; the observation of plant debris in fossilised animal faeces; and the construction of herbivore mouthparts.

Although herbivory was long thought to be a Mesozoic phenomenon, fossils have shown that plants were being consumed by arthropods within less than 20 million years after the first land plants evolved. Insects fed on the spores of early Devonian plants, and the Rhynie chert also provides evidence that organisms fed on plants using a "pierce and drag" technique.

During the next 75 million years, plants evolved a range of more complex organs, such as roots and seeds. There is no evidence of any organism being fed upon until the middle-late Mississippian, 330.9 million years ago. There was a gap of 50 to 100 million years between the time each organ evolved and the time organisms evolved to feed upon them; this may be due to the low levels of oxygen during this period, which may have suppressed evolution. Beyond their arthropod status, the identity of these early herbivores is uncertain. Hole feeding and skeletonization are recorded in the early Permian, with surface fluid feeding evolving by the end of that period.

Herbivory among four-limbed terrestrial vertebrates, the tetrapods, developed in the Late Carboniferous (307–299 million years ago). Early tetrapods were large amphibious piscivores. While amphibians continued to feed on fish and insects, some reptiles began exploring two new food types, tetrapods (carnivory) and plants (herbivory). The entire dinosaur order Ornithischia was composed of herbivorous dinosaurs. Carnivory was a natural transition from insectivory for medium and large tetrapods, requiring minimal adaptation. In contrast, a complex set of adaptations was necessary for feeding on highly fibrous plant materials.

Arthropods evolved herbivory in four phases, changing their approach to it in response to changing plant communities. Tetrapod herbivores made their first appearance in the fossil record of their jaws near the Permio-Carboniferous boundary, approximately 300 million years ago. The earliest evidence of their herbivory is dental occlusion, the process in which teeth from the upper jaw come into contact with teeth in the lower jaw. The evolution of dental occlusion led to a drastic increase in plant food processing, and tooth wear patterns provide evidence about feeding strategies. Examination of phylogenetic frameworks of tooth and jaw morphologies has revealed that dental occlusion developed independently in several lineages of tetrapod herbivores, suggesting that it evolved and spread simultaneously within various lineages.

Food chain

Herbivores form an important link in the food chain because they consume plants to digest the carbohydrates photosynthetically produced by a plant. Carnivores in turn consume herbivores for the same reason, while omnivores can obtain their nutrients from either plants or animals. Due to a herbivore's ability to survive solely on tough and fibrous plant matter, they are termed the primary consumers in the food cycle (chain). Herbivory, carnivory, and omnivory can be regarded as special cases of consumer–resource interactions.

Feeding strategies

Two herbivore feeding strategies are grazing (e.g. cows) and browsing (e.g. moose). For a terrestrial mammal to be called a grazer, at least 90% of the forage has to be grass, and for a browser at least 90% tree leaves and twigs. An intermediate feeding strategy is called "mixed-feeding". In their daily need to take up energy from forage, herbivores of different body mass may be selective in choosing their food. "Selective" means that herbivores may choose their forage source depending on, e.g., season or food availability, but also that they may choose high quality (and consequently highly nutritious) forage before lower quality. The latter especially is determined by the body mass of the herbivore, with small herbivores selecting for high-quality forage, and with increasing body mass animals are less selective. Several theories attempt to explain and quantify the relationship between animals and their food, such as Kleiber's law, Holling's disk equation and the marginal value theorem.

Kleiber's law describes the relationship between an animal's size and its feeding strategy: larger animals need to eat less food per unit weight than smaller animals. Metabolic rate scales roughly as the 3/4 power of body mass, so an animal's mass increases at a faster rate than its metabolic rate.
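As a rough numerical illustration of Kleiber's law (whole-body metabolic rate scaling as mass^(3/4), so the per-kilogram rate falls as mass^(-1/4)), here is a minimal Python sketch; the masses are round illustrative figures and the units are arbitrary:

def relative_metabolic_rate(mass_kg):
    # Kleiber's 3/4-power law, with the proportionality constant set to 1.
    return mass_kg ** 0.75

for animal, mass in [("mouse", 0.02), ("sheep", 60.0), ("elephant", 4000.0)]:
    per_kg = relative_metabolic_rate(mass) / mass  # scales as mass ** -0.25
    print(f"{animal}: {per_kg:.3f} units per kg")

# The per-kilogram rate falls as mass grows: an elephant needs far less food
# per unit weight than a mouse, as described above.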

Herbivores employ numerous types of feeding strategies. Many herbivores do not fall into one specific feeding strategy, but employ several strategies and eat a variety of plant parts.

Additional Information

An herbivore is an organism that mostly feeds on plants. Herbivores range in size from tiny insects such as aphids to large, lumbering elephants.

Herbivores are a major part of the food web, a description of which organisms eat other organisms in the wild. Organisms in the food web are grouped into trophic, or nutritional, levels. There are three trophic levels. Autotrophs, organisms that produce their own food, are the first trophic level. These include plants and algae. Herbivores, which eat autotrophs, are the second trophic level. Carnivores, organisms that consume animals, and omnivores, organisms that consume both plants and animals, are the third trophic level.

Autotrophs are called producers, because they produce their own food. Herbivores, carnivores, and omnivores are consumers. Herbivores are primary consumers. Carnivores and omnivores are secondary consumers.

Herbivores often have physical features that help them eat tough, fibrous plant matter. Unlike herbivores and other consumers, autotrophs have tough cell walls throughout their physical structure. Cell walls can make plant material difficult to digest.

Many herbivorous mammals have wide molars. These big teeth help them grind up leaves and grasses. Carnivorous mammals, on the other hand, usually have long, sharp teeth that help them grab prey and rip it apart.

A group of herbivores called ruminants have specialized stomachs. For the digestion of plant matter, ruminant stomachs have more than one chamber. When a ruminant chews up and swallows grass, leaves, and other material, it goes into the first chamber of its stomach, where it sits and softens. There, specialized bacteria break down the food. When the material is soft enough, the animal regurgitates the food and chews it again. This helps break down the plant matter. This partially digested food is called cud. The animal then swallows the cud, and it goes into a second chamber of the stomach. Chemicals in the second chamber digest the plant material further, and it goes into the third chamber. Finally, the digested food goes to the fourth chamber, which is similar to a human stomach. Sheep, deer, giraffes, camels, and cattle are all ruminants.

Picky Eaters

Some herbivores eat any plant matter they can find. Elephants, for example, eat bark, leaves, small branches, roots, grasses, and fruit. Black rhinoceroses also eat a variety of fruits, branches, and leaves.

Other herbivores eat only one part of a plant. An animal that specializes in eating fruit is called a frugivore. Oilbirds, which live in northern South America, are frugivores. They eat nothing but the fruit of palms and laurels. The koala, which is native to Australia, eats little besides the leaves of eucalyptus trees. An animal that eats the leaves and shoots of trees is called a folivore. Pandas, which feed almost exclusively on bamboo, are folivores. Termites are insects that feed mostly on wood. Wood-eaters are called xylophages.

Many insects are herbivores. Some, such as grasshoppers, will eat every part of a plant. Others specialize in certain parts of the plant. Aphids drink sap, a sticky fluid that carries nutrients through the plant. Caterpillars eat leaves. The larvae, or young wormlike forms, of root weevils feed on roots. Asian long-horned beetles tunnel deep into the heart of a tree and eat the wood there. Honeybees feed on nectar and pollen from flowers.

Some herbivores consume only dead plant material. These organisms are called detritivores. Detritivores also consume other dead organic material, such as decaying animals, fungi, and algae. Detritivores such as earthworms, bacteria, and fungi are an important part of the food chain. They break down the dead organic material and recycle nutrients back into the ecosystem. Detritivores can survive in many places. Earthworms and mushrooms live in the soil. There are also detritivore bacteria at the bottom of the ocean.

Plants that are parasites can still be considered herbivores. A parasite is an organism that lives on or in another organism and gets its nutrients from it. Parasitic plants get their nutrients from other plants, called host plants. Dodder, native to tropical and temperate climates around the world, is a parasitic vine that wraps around a host plant. Dodder has rootlike parts called haustoria that attach to the host plant, so it can feed on its nutrients. Eventually, the parasitic dodder feeds on all the nutrients of the host plant, and the host plant dies. The dodder vines then move on to another plant.

Herbivores in the Food Chain

Many herbivores spend a large part of their life eating. Elephants need to eat about 130 kilograms (300 pounds) of food a day. It takes a long time to eat that much foliage and grass, so elephants sometimes eat for 18 hours a day.

Herbivores depend on plants for their survival. If the plant population declines, herbivores cannot get enough food. Beavers, for example, feed on trees and plants that live near water. If the trees are removed to build houses and roads, the beaver population cannot survive.

Similarly, many carnivores need herbivores to survive. Herbivorous zebras and gazelles once traveled in great herds across the savannas of Africa. But these herds have shrunk and are now mostly confined to parks and wildlife reserves. As the number of these herbivores declines, carnivores such as African wild dogs, which prey on them, also decline. Scientists estimate that only 3,000 to 5,500 African wild dogs remain in the wild.

In some places, the disappearance of large carnivores has led to an overpopulation of herbivores. Wolves and cougars are traditional predators, or hunters, of white-tailed deer, which are herbivores. Hunting and expanding human settlements have practically eliminated these predators from the northeastern United States. Without its natural predators, the population of white-tailed deer has skyrocketed. In some areas, there are so many deer that they cannot find enough food. They now frequently stray into towns and suburbs in search of food.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2349 2024-10-21 00:08:47

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,283

Re: Miscellany

2349) Century

Gist

A period of 100 years is called a century. The computer, the television, and video games were all invented in the twentieth century. People are now wondering what the twenty-first century holds for us.

The noun century comes from the Latin word centuria, which was a group of 100, particularly a group of 100 Roman soldiers (one of 16 such groups in a Roman legion). The word today still can refer to 100 of something. In sports, a century is a score of 100 in a game of cricket. A race of 100 yards or 100 miles is also sometimes called a century. In slang, century can also mean a 100 dollar bill.

Details

A century is a period of 100 years or 10 decades. Centuries are numbered ordinally in English and many other languages. The word century comes from the Latin centum, meaning one hundred. Century is sometimes abbreviated as c.

A centennial or centenary is a hundredth anniversary, or a celebration of this, typically the remembrance of an event which took place a hundred years earlier.

Start and end of centuries

Although a century can mean any arbitrary period of 100 years, there are two viewpoints on the nature of standard centuries. One is based on strict construction, while the other is based on popular perception.

According to the strict construction, the 1st century AD, which began with AD 1, ended with AD 100, and the 2nd century with AD 200; in this model, the n-th century starts with a year that follows a year with a multiple of 100 (except the first century as it began after the year 1 BC) and ends with the next coming year with a multiple of 100 (100n), i.e. the 20th century comprises the years 1901 to 2000, and the 21st century comprises the years 2001 to 2100 in strict usage.

In popular perception and practice, centuries are structured by grouping years based on sharing the 'hundreds' digit(s). In this model, the n-th century starts with the year that ends in "00" and ends with the year ending in "99"; for example, in popular culture, the years 1900 to 1999 constitute the 20th century, and the years 2000 to 2099 constitute the 21st century. (This is similar to the grouping of "0-to-9 decades" which share the 'tens' digit.)

To facilitate calendrical calculations by computer, the astronomical year numbering and ISO 8601 systems both contain a year zero, with the astronomical year 0 corresponding to the year 1 BC, the astronomical year -1 corresponding to 2 BC, and so on.
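Both century conventions, together with the ISO 8601 / astronomical year numbering, can be stated precisely in a few lines of code; a minimal Python sketch (function names are ours, for illustration):

def century_strict(year):
    # Strict construction: years 1-100 are the 1st century, 1901-2000 the 20th.
    return (year - 1) // 100 + 1

def century_popular(year):
    # Popular usage: years sharing the 'hundreds' digits, e.g. 1900-1999 is the 20th.
    return year // 100 + 1

def astronomical_year(year_bc):
    # ISO 8601 / astronomical numbering: 1 BC -> 0, 2 BC -> -1, and so on.
    return 1 - year_bc

print(century_strict(2000), century_popular(2000))  # 20 21
print(century_strict(2001), century_popular(1999))  # 21 20
print(astronomical_year(1), astronomical_year(2))   # 0 -1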

Alternative naming systems

Informally, years may be referred to in groups based on the hundreds part of the year. In this system, the years 1900–1999 are referred to as the nineteen hundreds (1900s). Aside from English usage, this system is used in Swedish, Danish, Norwegian, Icelandic, Finnish and Hungarian. The Swedish nittonhundratalet (or 1900-talet), Danish nittenhundredetallet, Norwegian nittenhundretallet (or 1900-tallet), Finnish tuhatyhdeksänsataaluku (or 1900-luku) and Hungarian ezerkilencszázas évek (or 1900-as évek) refer unambiguously to the years 1900–1999. In Swedish, a century is more rarely referred to as det n-te seklet/århundradet ("the n-th century") rather than n-hundratalet; i.e., the 17th century is occasionally referred to as 17:(d)e/sjuttonde århundradet/seklet rather than 1600-talet, mainly referring to the years 1601–1700 rather than 1600–1699. According to Svenska Akademiens ordbok, 16:(d)e/sextonde århundradet may refer to either the years 1501–1600 or 1500–1599.

Similar dating units in other calendar systems

While the century has been commonly used in the West, other cultures and calendars have utilized differently sized groups of years in a similar manner. The Hindu calendar, in particular, summarizes its years into groups of 60, while the Aztec calendar considers groups of 52.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2350 2024-10-22 00:13:45

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,283

Re: Miscellany

2350) Kindergarten

Gist

A kindergarten is a school for very young children, aged from about 3 to 5.

Summary

Kindergarten, literally meaning "children's garden", is the first stage in the move from home to formal education. Children learn through play. In most countries kindergarten is part of the pre-school system. In North America and some parts of Australia kindergarten is the first year of school. Kindergarten children are usually between four and six years of age. Kindergarten ages vary from country to country. In Malaysia, for example, kindergarten children are six and when they are seven years old they go to primary school.

The name kindergarten was coined by Friedrich Fröbel (1782–1852). His work influenced early-years education around the world.

Pre-Kindergarten

Sometimes, children go to pre-kindergarten for a year before they go to kindergarten. Pre-kindergarten is more common in the United States, Canada, and Turkey than in other countries. It normally begins at four years old.

Details

Kindergarten is a preschool educational approach based on playing, singing, practical activities such as drawing, and social interaction as part of the transition from home to school. Such institutions were originally created in the late 18th century in Germany, Bavaria and Alsace to serve children whose parents both worked outside the home. The term was coined by German pedagogue Friedrich Fröbel, whose approach globally influenced early-years education. Today, the term is used in many countries to describe a variety of educational institutions and learning spaces for children ranging from two to six years of age, based on a variety of teaching methods.

History:

Early years and development

In 1779, Johann Friedrich Oberlin and Louise Scheppler founded in Strasbourg an early establishment for caring for and educating preschool children whose parents were absent during the day. At about the same time, in 1780, similar infant establishments were created in Bavaria. In 1802, Princess Pauline zur Lippe established a preschool center in Detmold, the capital of the then principality of Lippe, Germany (now in the State of North Rhine-Westphalia).

In 1816, Robert Owen, a philosopher and pedagogue, opened the first British infant school, probably the first in the world, in New Lanark, Scotland. In conjunction with his venture for cooperative mills, Owen wanted the children to be given a good moral education so that they would be fit for work. His system was successful in producing obedient children with basic literacy and numeracy.

Samuel Wilderspin opened his first infant school in London in 1819, and went on to establish hundreds more. He published many works on the subject, and his work became the model for infant schools throughout England and further afield. Play was an important part of Wilderspin's system of education, and he is credited with inventing the playground. In 1823, Wilderspin published On the Importance of Educating the Infant Poor, based on the school. He began working for the Infant School Society the next year, informing others about his views. He also wrote The Infant System, for developing the physical, intellectual, and moral powers of all children from one to seven years of age.

Countess Theresa Brunszvik (1775–1861), who had known and been influenced by Johann Heinrich Pestalozzi, was influenced by this example to open an Angyalkert ('angel garden' in Hungarian) on May 27, 1828, in her residence in Buda, the first of eleven care centers that she founded for young children. In 1836 she established an institute for the foundation of preschool centers. The idea became popular among the nobility and the middle class and was copied throughout the Kingdom of Hungary.

Creation of the kindergarten

Friedrich Fröbel (1782–1852) opened a "play and activity" institute in 1837, in Bad Blankenburg, in the principality of Schwarzburg-Rudolstadt, as an experimental social experience for children entering school. He renamed his institute Kindergarten (meaning "garden of children") on June 28, 1840, reflecting his belief that children should be nurtured and nourished "like plants in a garden". Fröbel introduced a pedagogical environment where children could develop through their own self-expression and self-directed learning, facilitated by play, songs, stories, and various other activities; this was in contrast to earlier infant establishments, and Fröbel is therefore credited with the creation of the kindergarten. Around 1873, Caroline Wiseneder's method for teaching instrumental music to young children was adopted by the national kindergarten movement in Germany.

In 1840, the well-connected educator Emily Ronalds was the first British person to study Fröbel's approach and he urged her to transplant his kindergarten concepts in England. Later, women trained by Fröbel opened kindergartens throughout Europe and around the world. The first kindergarten in the US was founded in Watertown, Wisconsin, in 1856, and was conducted in German by Margaretha Meyer-Schurz.

Elizabeth Peabody founded the first English-language kindergarten in the US in 1860. The first free kindergarten in the US was founded in 1870 by Conrad Poppenhusen, a German industrialist and philanthropist, who also established the Poppenhusen Institute. The first publicly financed kindergarten in the US was established in St. Louis in 1873 by Susan Blow.

Canada's first private kindergarten was opened by the Wesleyan Methodist Church in Charlottetown, Prince Edward Island, in 1870. By the end of the decade, they were common in large Canadian towns and cities. In 1882, the country's first public-school kindergartens were established in Berlin, Ontario (modern Kitchener) at the Central School. In 1885, the Toronto Normal School (teacher training) opened a department for kindergarten teaching.

The Australian kindergarten movement emerged in the last decade of the nineteenth century as both a philanthropic and educational endeavour. The first free kindergarten in Australia was established in 1896 in Sydney, New South Wales, by the Kindergarten Union of NSW (now KU Children's Services) led by reformer Maybanke Anderson.

American educator Elizabeth Harrison wrote extensively on the theory of early childhood education and worked to enhance educational standards for kindergarten teachers by establishing what became the National College of Education in 1886.

Additional Information

Kindergarten, educational division, a supplement to elementary school intended to accommodate children between the ages of four and six years. Originating in the early 19th century, the kindergarten was an outgrowth of the ideas and practices of Robert Owen in Great Britain, J.H. Pestalozzi in Switzerland and his pupil Friedrich Froebel in Germany, who coined the term, and Maria Montessori in Italy. It stressed the emotional and spiritual nature of the child, encouraging self-understanding through play activities and greater freedom, rather than the imposition of adult ideas.

In Great Britain the circumstances of the Industrial Revolution tended to encourage the provision of infant schools for young children whose parents and older brothers and sisters were in the factories for long hours. One of the earliest of these schools was founded at New Lanark, Scot., in 1816 by Owen, a cotton-mill industrialist, for the children of his employees. It was based on Owen’s two ideals—pleasant, healthful conditions and a life of interesting activity. Later infant schools in England, unlike Owen’s, emphasized memory drill and moral training while restricting the children’s freedom of action. In 1836, however, the Home and Colonial School Society was founded to train teachers in the methods advanced by Pestalozzi.

In 1837 Froebel opened in Blankenburg, Prussia, “a school for the psychological training of little children by means of play.” In applying to it the name Kindergarten, he sought to convey the impression of an environment in which children grew freely like plants in a garden. During the 25 years after Froebel’s death, kindergartens proliferated throughout Europe, North America, Japan, and elsewhere. In the United States the kindergarten generally became accepted as the first unit of elementary school.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline
