Math Is Fun Forum

  Discussion about math, puzzles, games and fun.


#2276 2024-08-28 16:05:27

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,086

Re: Miscellany

2276) Pencil

Gist

A pencil is a writing or drawing implement with a solid pigment core in a protective casing that reduces the risk of core breakage and keeps it from marking the user's hand.

Pencils are available in diverse types, including traditional graphite pencils, versatile mechanical pencils, and vibrant colored pencils. They continually adapt to a wide array of artistic and functional needs.

Pencil leads are graded on a scale from hard (H) to soft and black (B), and the middle ground is referred to as HB. Softer lead gets a B grading, with a number indicating how soft the lead is: B on its own is just a little softer than HB, while 2B, 3B and 4B are increasingly soft. Further up the range, 9B is the softest lead available, but it is so soft and crumbly that it is rarely used.

The degree of hardness of a pencil is printed on the pencil.

H stands for "hard" and B for "black". HB stands for "hard black", which means "medium hard".

Summary

A pencil is a slender rod of a solid marking substance, such as graphite, enclosed in a cylinder of wood, metal, or plastic, used as an implement for writing, drawing, or marking. In 1565 the German-Swiss naturalist Conrad Gesner first described a writing instrument in which graphite, then thought to be a type of lead, was inserted into a wooden holder. Gesner was the first to describe graphite as a separate mineral, and in 1779 the Swedish chemist Carl Wilhelm Scheele showed it to be a form of carbon. The name graphite is from the Greek graphein, “to write.” The modern lead pencil became possible when an unusually pure deposit of graphite was discovered in 1564 in Borrowdale, Cumberland, England.

The hardness of writing pencils, which is related to the proportion of clay (used as a binder) to graphite in the lead, is usually designated by numbers from one, the softest, to four, the hardest. Artists’ drawing pencils range in a hardness designation generally given from 8B, the softest, to F, the hardest. The designation of the hardness of drafting pencils ranges from HB, the softest, to 10H, the hardest.
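
The ordering of these grades can be made concrete with a small sketch. The numeric spacing below is an illustrative convention of my own (B grades negative, H grades positive, HB at zero, F just above HB), not an industry standard:

def grade_value(grade):
    # Map a pencil grade such as "2B", "HB", "F" or "3H" onto a single
    # soft-to-hard axis so grades can be compared and sorted.
    if grade == "HB":
        return 0.0
    if grade == "F":
        return 0.5                  # F sits between HB and H
    n = int(grade[:-1]) if len(grade) > 1 else 1
    return n if grade[-1] == "H" else -n

grades = ["3H", "B", "HB", "2B", "F", "9B", "10H"]
print(sorted(grades, key=grade_value))
# ['9B', '2B', 'B', 'HB', 'F', '3H', '10H']  (softest to hardest)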

The darkness of a pencil mark depends on the number of small particles of graphite deposited by the pencil. The particles are equally black (though graphite is never truly black) regardless of the hardness of the lead; only the size and number of particles determine the apparent degree of blackness of the pencil mark. The degree of hardness of a lead is a measure of how much the lead resists abrasion by the fibres of the paper.

Details

A pencil is a writing or drawing implement with a solid pigment core in a protective casing that reduces the risk of core breakage and keeps it from marking the user's hand.

Pencils create marks by physical abrasion, leaving a trail of solid core material that adheres to a sheet of paper or other surface. They are distinct from pens, which dispense liquid or gel ink onto the marked surface.

Most pencil cores are made of graphite powder mixed with a clay binder. Graphite pencils (traditionally known as "lead pencils") produce grey or black marks that are easily erased, but otherwise resistant to moisture, most solvents, ultraviolet radiation and natural aging. Other types of pencil cores, such as those of charcoal, are mainly used for drawing and sketching. Coloured pencils are sometimes used by teachers or editors to correct submitted texts, but are typically regarded as art supplies, especially those with cores made from wax-based binders that tend to smear when erasers are applied to them. Grease pencils have a softer, oily core that can leave marks on smooth surfaces such as glass or porcelain.

The most common pencil casing is thin wood, usually hexagonal in section, but sometimes cylindrical or triangular, permanently bonded to the core. Casings may be of other materials, such as plastic or paper. To use the pencil, the casing must be carved or peeled off to expose the working end of the core as a sharp point. Mechanical pencils have more elaborate casings which are not bonded to the core; instead, they support separate, mobile pigment cores that can be extended or retracted (usually through the casing's tip) as needed. These casings can be reloaded with new cores (usually graphite) as the previous ones are exhausted.

Types:

By marking material:

Graphite

Graphite pencils are the most common type of pencil, and are encased in wood. They are made of a mixture of clay and graphite and their darkness varies from light grey to black. Their composition allows for the smoothest strokes.

Solid

Solid graphite pencils are solid sticks of graphite and clay composite (as found in a 'graphite pencil'), about the diameter of a common pencil, which have no casing other than a wrapper or label. They are often called "woodless" pencils. They are used primarily for art purposes as the lack of casing allows for covering larger spaces more easily, creating different effects, and providing greater economy as the entirety of the pencil is used. They are available in the same darkness range as wood-encased graphite pencils.

Liquid

Liquid graphite pencils are pencils that write like pens. The technology was first invented in 1955 by Scripto and Parker Pens. Scripto's liquid graphite formula came out about three months before Parker's liquid lead formula. To avoid a lengthy patent fight the two companies agreed to share their formulas.

Charcoal

Charcoal pencils are made of charcoal and provide fuller blacks than graphite pencils, but tend to smudge easily and are more abrasive than graphite. Sepia-toned and white pencils are also available for duotone techniques.

Carbon pencils

Carbon pencils are generally made of a mixture of clay and lamp black, but are sometimes blended with charcoal or graphite depending on the darkness and manufacturer. They produce a fuller black than graphite pencils, are smoother than charcoal, and have minimal dust and smudging. They also blend very well, much like charcoal.

Colored

Colored pencils, or pencil crayons, have wax-like cores with pigment and other fillers. Several colors are sometimes blended together.

Grease

Grease pencils can write on virtually any surface (including glass, plastic, metal and photographs). The most commonly found grease pencils are encased in paper (Berol and Sanford Peel-off), but they can also be encased in wood (Staedtler Omnichrom).

Watercolor

Watercolor pencils are designed for use with watercolor techniques. Their cores can be diluted by water. The pencils can be used by themselves for sharp, bold lines. Strokes made by the pencil can also be saturated with water and spread with brushes.

By use:

Carpentry

Carpenter's pencils are pencils that have two main properties: their shape prevents them from rolling, and their graphite is strong. The oldest surviving pencil is a German carpenter's pencil dating from the 17th century and now in the Faber-Castell collection.

Copying

Copying pencils (or indelible pencils) are graphite pencils with an added dye that creates an indelible mark. They were invented in the late 19th century for press copying and as a practical substitute for fountain pens. Their markings are often visually indistinguishable from those of standard graphite pencils, but when moistened their markings dissolve into a coloured ink, which is then pressed into another piece of paper. They were widely used until the mid-20th century, when ballpoint pens slowly replaced them. In Italy their use is still mandated by law for marking paper ballots in elections and referendums.

Eyeliner

Eyeliner pencils are used for make-up. Unlike traditional copying pencils, eyeliner pencils usually contain non-toxic dyes.

Erasable coloring

Unlike wax-based colored pencils, the erasable variants can be easily erased. Their main use is in sketching, where the objective is to create an outline using the same color that other media (such as wax pencils, or watercolor paints) would fill, or when the objective is to scan the color sketch. Some animators prefer erasable color pencils over graphite pencils because they do not smudge as easily, and the different colors allow for better separation of objects in the sketch. Copy-editors find them useful too, as their markings stand out more than those of graphite but can be erased.

Non-reproduction

Also known as non-photo blue pencils, the non-reproducing types make marks that are not reproducible by photocopiers (examples include "Copy-not" by Sanford and "Mars Non-photo" by Staedtler) or by whiteprint copiers (such as "Mars Non-Print" by Staedtler).

Stenography

Stenographer's pencils, also known as steno pencils, are expected to be very reliable, and their lead is break-proof. Nevertheless, steno pencils are sometimes sharpened at both ends to enhance reliability. They are round to avoid pressure pain during long writing sessions.

Golf

Golf pencils are usually short (a common length is 9 cm or 3.5 in) and very cheap. They are also known as library pencils, as many libraries offer them as disposable writing instruments.

By shape:

* Triangular (more accurately a Reuleaux triangle)
* Hexagonal
* Round
* Bendable (flexible plastic)

By size:

Typical

A standard, hexagonal "#2 pencil" is cut to a hexagonal height of 6 mm (1⁄4 in), but the outer diameter is slightly larger (about 7 mm or 9⁄32 in). A standard "#2" hexagonal pencil is 19 cm (7.5 in) long.

Biggest

On 3 September 2007, Ashrita Furman unveiled his giant US$20,000 pencil – 23 metres (76 ft) long, 8,200 kilograms (18,000 lb) with over 2,000 kilograms (4,500 lb) for the graphite centre – after three weeks of creation in August 2007 as a birthday gift for teacher Sri Chinmoy. It is longer than the 20-metre (65 ft) pencil outside the Malaysia HQ of stationers Faber-Castell.

By manufacture:

Mechanical:

Mechanical pencils use mechanical methods to push lead through a hole at the end. These can be divided into two groups: with propelling pencils an internal mechanism is employed to push the lead out from an internal compartment, while clutch pencils merely hold the lead in place (the lead is extended by releasing it and allowing some external force, usually gravity, to pull it out of the body). The erasers (sometimes replaced by a sharpener on pencils with larger lead sizes) are also removable (and thus replaceable), and usually cover a place to store replacement leads. Mechanical pencils are popular for their longevity and the fact that they may never need sharpening. Lead types are based on grade and size, with standard sizes being 2.00 mm (0.079 in), 1.40 mm (0.055 in), 1.00 mm (0.039 in), 0.70 mm (0.028 in), 0.50 mm (0.020 in), 0.35 mm (0.014 in), 0.25 mm (0.0098 in), 0.18 mm (0.0071 in), and 0.13 mm (0.0051 in) (ISO 9175-1); the 0.90 mm (0.035 in) size is available, but is not considered a standard ISO size.

Pop a Point

Pioneered by Taiwanese stationery manufacturer Bensia Pioneer Industrial Corporation in the early 1970s, Pop a Point Pencils are also known as Bensia Pencils, stackable pencils or non-sharpening pencils. They are a type of pencil in which many short pencil tips are housed in a cartridge-style plastic holder. A blunt tip is removed by pulling it from the writing end of the body and re-inserting it into the open-ended bottom of the body, thereby pushing a new tip to the top.

Plastic

Invented by Harold Grossman for the Empire Pencil Company in 1967, plastic pencils were subsequently improved upon by Arthur D. Little for Empire from 1969 through the early 1970s; the plastic pencil was commercialised by Empire as the "EPCON" Pencil. These pencils were co-extruded, extruding a plasticised graphite mix within a wood-composite core.

Other aspects

* By factory state: sharpened, unsharpened
* By casing material: wood, paper, plastic

The P&P Office Waste Paper Processor recycles paper into pencils.

Health

Residual graphite from a pencil is not poisonous; graphite is harmless if ingested.

Although lead has not been used for writing since antiquity (for example in Roman styli), lead poisoning from pencils was not uncommon. Until the middle of the 20th century the paint used for the outer coating could contain high concentrations of lead, and this could be ingested when the pencil was sucked or chewed.

Manufacture

The lead of the pencil is a mix of finely ground graphite and clay powders. Before the two substances are mixed, they are separately cleaned of foreign matter and dried in a manner that creates large square cakes. Once the cakes have fully dried, the graphite and the clay squares are mixed together using water. The amount of clay added to the graphite depends on the intended pencil hardness (lower proportions of clay make the core softer), and the amount of time spent grinding the mixture determines the quality of the lead. The mixture is then shaped into long spaghetti-like strings, straightened, dried, cut, and tempered in a kiln. The resulting strings are dipped in oil or molten wax, which seeps into the tiny holes of the material and gives the pencil its smooth writing ability.

A juniper or incense-cedar plank with several long parallel grooves is cut to fashion a "slat," and the graphite/clay strings are inserted into the grooves. Another grooved plank is glued on top, and the whole assembly is then cut into individual pencils, which are then varnished or painted.

Many pencils feature an eraser on the top, so the process is usually not complete at this point. A shoulder is cut on one end of each pencil to allow a metal ferrule to be secured onto the wood, and a rubber plug is then inserted into the ferrule to form a functioning eraser.


#2277 2024-08-29 16:16:45

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,086

Re: Miscellany

2277) Fuel Gas

Gist

Fuel gas means gas generated at a petroleum refinery or petrochemical plant that is combusted separately or in combination with any other type of gas.

Regular, Mid-Grade, Or Premium

* Regular gas has the lowest octane level, typically at 87.
* Mid-grade gas typically has an octane level of 89.
* Premium gas has the highest octane levels and can range from 91 to 94.

Summary

Substances containing ingredients that produce heat by reacting with oxygen are called fuels.

In order to obtain heat from a fuel, it is initially given a small amount of heat from some other source in the presence of oxygen. A small part of the fuel then reacts with the oxygen, and this reaction produces more heat than was initially supplied.

One part of the heat generated is radiated away, and the remaining part helps another portion of the fuel react with oxygen. In this way heat keeps being generated and the fuel is slowly consumed; this process is called the burning of the fuel.


Fuel + Oxygen → Products + Heat

For example, when methane burns: CH4 + 2 O2 → CO2 + 2 H2O + heat.

The amount of heat produced by burning one gram of a fuel is called the calorific value of that fuel. The higher the calorific value of a fuel, the better the fuel is considered to be. The calorific value of carbon is 7830 calories per gram.
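
As a quick arithmetic check on that definition, a minimal sketch multiplying mass burned by calorific value (the 7830 cal/g figure for carbon comes from the paragraph above):

# Heat released = mass burned x calorific value of the fuel.
CALORIFIC_VALUE_CARBON = 7830   # calories per gram, from the text above

def heat_released(mass_g, calorific_value_cal_per_g):
    # Returns the heat in calories from burning mass_g grams of fuel.
    return mass_g * calorific_value_cal_per_g

print(heat_released(100, CALORIFIC_VALUE_CARBON))   # 783000 cal for 100 g of carbon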

There are three types of fuel: solid, liquid, and gaseous. Wood and coal are the main solid fuels. Among the liquid fuels, kerosene, petrol and diesel are the main ones. Oil gas, petrol gas, producer gas, water gas and coal gas are the major gaseous fuels. Gaseous fuel is more useful than solid and liquid fuels.

Details

Fuel gas is one of a number of fuels that under ordinary conditions are gaseous. Most fuel gases are composed of hydrocarbons (such as methane and propane), hydrogen, carbon monoxide, or mixtures thereof. Such gases are sources of energy that can be readily transmitted and distributed through pipes.

Fuel gas is contrasted with liquid fuels and solid fuels, although some fuel gases are liquefied for storage or transport (for example, autogas and liquified petroleum gas). While their gaseous nature has advantages, avoiding the difficulty of transporting solid fuel and the dangers of spillage inherent in liquid fuels, it also has limitations. It is possible for a fuel gas to be undetected and cause a gas explosion. For this reason, odorizers are added to most fuel gases. The most common type of fuel gas in current use is natural gas.

Types of fuel gas

There are two broad classes of fuel gases, based not on their chemical composition, but their source and the way they are produced: those found naturally, and those manufactured from other materials.

Manufactured fuel gas

Manufactured fuel gases are those produced by chemical transformations of solids, liquids, or other gases. When obtained from solids, the conversion is referred to as gasification and the facility is known as a gasworks.

Manufactured fuel gases include:

* Coal gas, obtained from pyrolysis of coal
* Water gas, largely obsolete, obtained by passing steam over hot coke
* Producer gas, largely obsolete, obtained by passing steam and air over hot coke
* Syngas, major current technology, obtained mainly from natural gas
* Wood gas, obtained mainly from wood, once was popular and of relevance to biofuels
* Biogas, obtained from landfills
* Blast furnace gas
* Hydrogen, from electrolysis or steam reforming

The coal gas made by the pyrolysis of coal contains impurities such as tar, ammonia and hydrogen sulfide. These must be removed, and a substantial amount of plant may be required to do this.

Well or mine extracted fuel gases

In the 20th century, natural gas, composed primarily of methane, became the dominant source of fuel gas, as instead of having to be manufactured in various processes, it could be extracted from deposits in the earth. Natural gas may be combined with hydrogen to form a mixture known as HCNG.

Additional fuel gases obtained from natural gas or petroleum:

* Propane
* Butane
* Regasified liquefied petroleum gas

Natural gas is produced with water and gas condensate. These liquids have to be removed before the gas can be used as fuel. Even after treatment the gas will be saturated and liable to condense as liquid in the pipework. This can be reduced by superheating the fuel gas.

Uses of fuel gas

One of the earliest uses was gas lighting, which enabled the widespread adoption of streetlamps and the illumination of buildings in towns. Fuel gas was also used in gas burners, in particular the Bunsen burner used in laboratories. It may also be used in gas heaters and camping stoves, and even to power vehicles, as fuel gases have a high calorific value.

Fuel gas is widely used by industrial, commercial and domestic users. Industry uses fuel gas for heating furnaces, kilns, boilers and ovens and for space heating and drying. The electricity industry uses fuel gas to power gas turbines to generate electricity. The specification of fuel gas for gas turbines may be quite stringent. Fuel gas may also be used as a feedstock for chemical processes.

Fuel gas in the commercial sector is used for heating, cooking, baking and drying, and in the domestic sector for heating and cooking.

Currently, fuel gases, especially syngas, are used heavily for the production of ammonia for fertilizers and for the preparation of many detergents and specialty chemicals.

On an industrial plant fuel gas may be used to purge pipework and vessels to prevent the ingress of air. Any fuel gas surplus to needs may be disposed of by burning in the plant gas flare system.

For users that burn gas directly, fuel gas is supplied at a pressure of about 15 psi (1 barg). Gas turbines need a supply pressure of 250–350 psi (17–24 barg).


#2278 2024-08-30 16:20:12

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,086

Re: Miscellany

2278) Petroleum

Gist

Petroleum is a naturally occurring liquid found beneath the earth's surface that can be refined into fuel. Petroleum is used as fuel to power vehicles, heating units, and machines, and can be converted into plastics.

Summary

Petroleum is a complex mixture of hydrocarbons that occur in Earth in liquid, gaseous, or solid form. A natural resource, petroleum is most often conceived of in its liquid form, commonly called crude oil, but, as a technical term, petroleum also refers to natural gas and the viscous or solid form known as bitumen, which is found in tar sands. The liquid and gaseous phases of petroleum constitute the most important of the primary fossil fuels.

Liquid and gaseous hydrocarbons are so intimately associated in nature that it has become customary to shorten the expression “petroleum and natural gas” to “petroleum” when referring to both. The first use of the word petroleum (literally “rock oil” from the Latin petra, “rock” or “stone,” and oleum, “oil”) is often attributed to a treatise published in 1556 by the German mineralogist Georg Bauer, known as Georgius Agricola. However, there is evidence that it may have originated with Persian philosopher-scientist Avicenna some five centuries earlier.

The burning of all fossil fuels (coal and biomass included) releases large quantities of carbon dioxide (CO2) into the atmosphere. The CO2 molecules do not allow much of the long-wave solar radiation absorbed by Earth’s surface to reradiate from the surface and escape into space. The CO2 absorbs upward-propagating infrared radiation and reemits a portion of it downward, causing the lower atmosphere to remain warmer than it would otherwise be. This phenomenon has the effect of enhancing Earth’s natural greenhouse effect, producing what scientists refer to as anthropogenic (human-generated) global warming. There is substantial evidence that higher concentrations of CO2 and other greenhouse gases have contributed greatly to the increase of Earth’s near-surface mean temperature since 1950.

Details

Petroleum or crude oil, also referred to as simply oil, is a naturally occurring yellowish-black liquid mixture of mainly hydrocarbons, and is found in geological formations. The name petroleum covers both naturally occurring unprocessed crude oil and petroleum products that consist of refined crude oil.

Petroleum is primarily recovered by oil drilling. Drilling is carried out after studies of structural geology, sedimentary basin analysis, and reservoir characterization. Unconventional reserves such as oil sands and oil shale exist.

Once extracted, oil is refined and separated, most easily by distillation, into innumerable products for direct use or use in manufacturing. Products include fuels such as gasoline (petrol), diesel, kerosene and jet fuel; asphalt and lubricants; chemical reagents used to make plastics; solvents, textiles, refrigerants, paint, synthetic rubber, fertilizers, pesticides, pharmaceuticals, and thousands of others. Petroleum is used in manufacturing a vast variety of materials essential for modern life, and it is estimated that the world consumes about 100 million barrels (16 million cubic metres) each day. Petroleum production can be extremely profitable and was critical to global economic development in the 20th century. Some countries, known as petrostates, gained significant economic and international power over their control of oil production and trade.

Petroleum exploitation can be damaging to the environment and human health. Extraction, refining and burning of petroleum fuels all release large quantities of greenhouse gases, so petroleum is one of the major contributors to climate change. Other negative environmental effects include direct releases, such as oil spills, as well as air and water pollution at almost all stages of use. These environmental effects have direct and indirect health consequences for humans. Oil has also been a source of internal and inter-state conflict, leading to both state-led wars and other resource conflicts. Production of petroleum is estimated to reach peak oil before 2035 as global economies lower dependencies on petroleum as part of climate change mitigation and a transition towards renewable energy and electrification. Oil has played a key role in industrialization and economic development.

Etymology

The word petroleum comes from Medieval Latin petroleum (literally 'rock oil'), which comes from Latin petra 'rock' and oleum 'oil'.

The origin of the term stems from monasteries in southern Italy where it was in use by the end of the first millennium as an alternative for the older term "naphtha". After that, the term was used in numerous manuscripts and books, such as in the treatise De Natura Fossilium, published in 1546 by the German mineralogist Georg Bauer, also known as Georgius Agricola. After the advent of the oil industry, during the second half of the 19th century, the term became commonly known for the liquid form of hydrocarbons.

Use

In terms of volume, most petroleum is converted into fuels for combustion engines. In terms of value, petroleum underpins the petrochemical industry, which includes many high value products such as pharmaceuticals and plastics.

Fuels and lubricants

Petroleum is used mostly, by volume, for refining into fuel oil and gasoline, both important primary energy sources. 84% by volume of the hydrocarbons present in petroleum is converted into fuels, including gasoline, diesel, jet, heating, and other fuel oils, and liquefied petroleum gas.

Due to its high energy density, easy transportability and relative abundance, oil has become the world's most important source of energy since the mid-1950s. Petroleum is also the raw material for many chemical products, including pharmaceuticals, solvents, fertilizers, pesticides, and plastics; the 16 percent not used for energy production is converted into these other materials. Petroleum is found in porous rock formations in the upper strata of some areas of the Earth's crust. There is also petroleum in oil sands (tar sands). Known oil reserves are typically estimated at 190 km³ (1.2 trillion (short scale) barrels) without oil sands, or 595 km³ (3.74 trillion barrels) with oil sands. Consumption is currently around 84 million barrels (13.4×10⁶ m³) per day, or 4.9 km³ per year, yielding a remaining oil supply of only about 120 years, if current demand remains static. More recent studies, however, put the number at around 50 years.
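
The "about 120 years" figure can be reproduced from the numbers just quoted; a minimal sketch, assuming demand stays static as the text does:

# Remaining supply in years = reserves / annual consumption.
consumption_per_day = 84e6                    # barrels per day, as quoted above
annual_consumption = consumption_per_day * 365

reserves_with_oil_sands = 3.74e12             # barrels
reserves_without_oil_sands = 1.2e12           # barrels

print(round(reserves_with_oil_sands / annual_consumption))      # about 122 years
print(round(reserves_without_oil_sands / annual_consumption))   # about 39 years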

Closely related to fuels for combustion engines are lubricants, greases, and viscosity stabilizers, all of which are derived from petroleum.

Chemicals

Many pharmaceuticals are derived from petroleum, albeit via multistep processes. Modern medicine depends on petroleum as a source of building blocks, reagents, and solvents. Similarly, virtually all pesticides (insecticides, herbicides, etc.) are derived from petroleum. Pesticides have profoundly affected life expectancies by controlling disease vectors and by increasing yields of crops. Like pharmaceuticals, pesticides are in essence petrochemicals. Virtually all plastics and synthetic polymers are derived from petroleum, which is the source of monomers. Alkenes (olefins) are one important class of these precursor molecules.

Other derivatives

* Paraffin wax, derived from petroleum, used in the packaging of frozen foods, among other applications
* Sulfur and its derivative sulfuric acid: hydrogen sulfide is a by-product of sulfur removal from petroleum fractions; it is oxidized to elemental sulfur and then to sulfuric acid
* Bulk tar and asphalt
* Petroleum coke, used in speciality carbon products or as solid fuel

Additional Information:

What Is Petroleum?

Petroleum, also called crude oil, is a naturally occurring liquid found beneath the earth’s surface that can be refined into fuel. A fossil fuel, petroleum is created by the decomposition of organic matter over time and used as fuel to power vehicles, heating units, and machines, and can be converted into plastics.

Because the majority of the world relies on petroleum for many goods and services, the petroleum industry is a major influence on world politics and the global economy.

Key Takeaways

* Petroleum is a naturally occurring liquid found beneath the earth’s surface that can be refined into fuel.
* Petroleum is used as fuel to power vehicles, heating units, and machines, and can be converted into plastics.
* The extraction and processing of petroleum and its availability are drivers of the world's economy and global politics.
* Petroleum is a non-renewable energy source, and other energy sources, such as solar and wind power, are becoming prominent.

Understanding Petroleum

The extraction and processing of petroleum and its availability is a driver of the world's economy and geopolitics. Many of the largest companies in the world are involved in the extraction and processing of petroleum or create petroleum-based products, including plastics, fertilizers, automobiles, and airplanes.

Petroleum is recovered by oil drilling and then refined and separated into different types of fuels. Petroleum contains hydrocarbons of different molecular weights; the denser the petroleum, the more difficult it is to process and the less valuable it is.

Investing in petroleum means investing in oil directly, through the purchase of oil futures or options, or indirectly, through exchange-traded funds (ETFs) that invest in companies in the energy sector.

Petroleum companies are divided into upstream, midstream, and downstream, depending on the oil and gas company's position in the supply chain. Upstream oil and gas companies identify, extract, or produce raw materials.

Downstream oil companies engage in the post-production of crude oil and natural gas. Midstream oil and gas companies connect downstream and upstream companies by storing and transporting oil and other refined products.

Pros and Cons of Petroleum

Petroleum provides transportation, heat, light, and plastics to global consumers. It is easy to extract but is a non-renewable, limited supply source of energy. Petroleum has a high power ratio and is easy to transport.

However, the extraction process and the byproducts of the use of petroleum are toxic to the environment. Underwater drilling may cause leaks and fracking can affect the water table. Carbon released into the atmosphere by using petroleum increases temperatures and is associated with global warming.

Pros:

* Stable energy source
* Easily extracted
* Variety of uses
* High power ratio
* Easily transportable

Cons:

* Carbon emissions are toxic to the environment.
* Transportation can damage the environment.
* Extraction process is harmful to the environment.

The Petroleum Industry:

Classification

Oil is classified by three criteria: the geographic location where it was drilled, its sulfur content, and its API gravity (a measure of its density).
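
API gravity is computed from specific gravity at 60 °F by a standard petroleum-industry formula; a minimal sketch (the sample specific gravity is illustrative):

def api_gravity(specific_gravity):
    # Standard formula: API = 141.5 / SG - 131.5, with SG at 60 °F.
    # Higher API gravity means lighter (less dense) oil.
    return 141.5 / specific_gravity - 131.5

print(round(api_gravity(0.876), 1))   # 30.0 for a medium crude (illustrative SG)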

Reservoirs

Geologists, chemists, and engineers research geographical structures that hold petroleum using "seismic reflection." The portion of a reservoir's oil-in-place that can be extracted and refined constitutes that reservoir's oil reserves.

As of 2022, the top-ranking countries for total oil reserves include Venezuela with 303.8 billion barrels, Saudi Arabia with 297.5 billion, and Canada with 168.1 billion.

Extracting

Drilling for oil includes developmental drilling, where oil reserves have already been found; exploratory drilling, conducted to search for new reserves; and directional drilling, which is drilling at an angle rather than straight down to reach a known source of oil.

Investing In Petroleum

The energy sector attracts investors who speculate on the demand for oil and fossil fuels, and many oil and energy fund offerings consist of companies related to energy.

Mutual funds like Vanguard Energy Fund Investor Shares (VGENX) with holdings in ConocoPhillips, Shell, and Marathon Petroleum Corporation, and the Fidelity Select Natural Gas Fund (FSNGX), holding Enbridge and Hess, are two funds that invest in the energy sector and pay dividends.

Oil and gas exchange-traded funds (ETFs) offer investors more direct and easier access to the often-volatile energy market than many other alternatives. Three of the top-rated oil and gas ETFs for 2022 include the Invesco Dynamic Energy Exploration & Production ETF (PXE), First Trust Natural Gas ETF (FCG), and iShares U.S. Oil & Gas Exploration & Production ETF (IEO).

How Is Petroleum Formed?

Petroleum is a fossil fuel that was formed over millions of years through the transformation of dead organisms, such as algae, plants, and bacteria, that experienced high heat and pressure when trapped inside rock formations.

Is Petroleum Renewable?

Petroleum is not a renewable energy source. It is a fossil fuel with a finite amount of petroleum available.

What Are Alternatives to Petroleum?

Alternatives include wind, solar, and biofuels. Wind power uses wind turbines to harness the power of the wind to create energy. Solar power uses the sun as an energy source, and biofuels use vegetable oils and animal fat as a power source.

What Are Classifications of Petroleum?

Unrefined petroleum classes include asphalt, bitumen, crude oil, and natural gas.

The Bottom Line

Petroleum is a fossil fuel that is used widely in the daily lives of global consumers. In its refined state, petroleum is used to create gasoline, kerosene, plastics, and other byproducts. Petroleum is a finite material and non-renewable energy source. Because of its potential to be harmful to the environment, alternative energy sources are being explored and implemented, such as solar and wind energy.


#2279 2024-08-31 00:23:33

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,086

Re: Miscellany

2279) Electric Vehicles

Gist

An EV's electric motor doesn't have to pressurize and ignite gasoline to move the car's wheels. Instead, it uses electromagnets inside the motor, powered by the battery, to generate rotational force. Inside the motor are two sets of magnets: one on the spinning rotor and one on the stationary stator.

Here's a basic rundown of how electric cars work: EVs receive energy from a charging station and store it in their battery. The battery gives power to the motor, which moves the wheels. Many electrical parts work together in the background to make this motion happen.

Summary

EVs are like automatic cars: they have a forward and a reverse mode. When you place the vehicle in gear and press on the accelerator pedal, these things happen (a small sketch after this list ties the steps together):

* Power is converted from the DC battery to AC for the electric motor
* The accelerator pedal sends a signal to the controller which adjusts the vehicle's speed by changing the frequency of the AC power from the inverter to the motor
* The motor connects and turns the wheels through a cog
* When the brakes are pressed or the car is decelerating, the motor becomes an alternator and produces power, which is sent back to the battery
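
A minimal sketch of that drive/regeneration sequence, assuming a deliberately simplified controller model (the function name, the 200 Hz ceiling and the returned labels are illustrative, not taken from any real vehicle):

def drive_step(pedal_fraction, braking, max_frequency_hz=200):
    # pedal_fraction: accelerator position from 0.0 to 1.0.
    # The controller scales the inverter's AC frequency with pedal
    # position; under braking the motor acts as a generator and
    # energy flows back into the battery (regeneration).
    if braking:
        return {"inverter_hz": 0, "battery_flow": "charging (regeneration)"}
    return {"inverter_hz": pedal_fraction * max_frequency_hz,
            "battery_flow": "discharging (DC converted to AC)"}

print(drive_step(0.5, braking=False))   # half pedal -> 100 Hz sent to the motor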

AC/DC and electric cars

AC stands for Alternating Current. In AC, the current changes direction at a determined frequency, like the pendulum on a clock.

DC stands for Direct Current. In DC, the current flows in one direction only, from positive to negative.

Battery Electric Vehicles

The key components of a Battery Electric Vehicle are:

* Electric motor
* Inverter
* Battery
* Battery charger
* Controller
* Charging cable

Electric motor

You will find electric motors in everything from juicers and toothbrushes, washing machines and dryers, to robots. They are familiar, reliable and very durable. Electric vehicle motors use AC power.

Inverter

An inverter is a device that converts DC power to the AC power used in an electric vehicle motor. The inverter can change the speed at which the motor rotates by adjusting the frequency of the alternating current. It can also increase or decrease the power or torque of the motor by adjusting the amplitude of the signal.
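
For AC machines there is a standard relation between drive frequency and synchronous motor speed, which is what the inverter exploits; a minimal sketch (the 50 Hz / 4-pole example is illustrative):

def synchronous_rpm(frequency_hz, poles):
    # Standard relation for AC machines: rpm = 120 * f / poles.
    # Raising the inverter's output frequency raises the motor speed.
    return 120 * frequency_hz / poles

print(synchronous_rpm(50, 4))   # 1500.0 rpm at 50 Hz with a 4-pole motor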

Battery

An electric vehicle uses a battery to store electrical energy that is ready to use. A battery pack is made up of a number of cells that are grouped into modules. Once the battery has sufficient energy stored, the vehicle is ready to use.

Battery technology has improved hugely in recent years. Current EV batteries are lithium-based and have a very low rate of self-discharge. This means an EV should not lose charge if it isn't driven for a few days, or even weeks.

Battery charger

The battery charger converts the AC power available on our electricity network to DC power stored in a battery. It controls the voltage level of the battery cells by adjusting the rate of charge. It will also monitor the cell temperatures and control the charge to help keep the battery healthy.
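
Lithium cells are commonly charged with a constant-current then constant-voltage (CC/CV) profile, which matches the "adjusting the rate of charge" behaviour described above; a minimal sketch, with all thresholds illustrative rather than taken from any real charger:

def charge_rate(cell_voltage, cell_temp_c,
                max_voltage=4.2, max_current=2.0, max_temp_c=45):
    # CC/CV outline: full current while the cell is well below its
    # voltage limit, a tapered current near the limit, and no charge
    # at all if the cell is too hot. Illustrative values only.
    if cell_temp_c > max_temp_c:
        return 0.0                 # pause charging to protect the cell
    if cell_voltage < max_voltage - 0.1:
        return max_current         # constant-current phase
    return max_current * 0.25      # tapered (constant-voltage) phase

print(charge_rate(3.7, 25))   # 2.0 A: cell well below its limit, full current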

Controller

The controller is like the brain of a vehicle, managing all of its parameters. It controls the rate of charge using information from the battery. It also translates pressure on the accelerator pedal to adjust speed in the motor inverter.

Charging cable

A charging cable for standard charging is supplied with and stored in the vehicle. It's used for charging at home or at standard public charge points. A fast charge point will have its own cable.

Details

An electric vehicle (EV) is a vehicle that uses one or more electric motors for propulsion. The vehicle can be powered by a collector system, with electricity from extravehicular sources, or can be powered autonomously by a battery or by converting fuel to electricity using a generator or fuel cells. EVs include road and rail vehicles, electric boats and underwater vessels, electric aircraft and electric spacecraft.

Early electric vehicles first came into existence in the late 19th century, when the Second Industrial Revolution brought forth electrification. Using electricity was among the preferred methods for motor vehicle propulsion as it provides a level of quietness, comfort and ease of operation that could not be achieved by the gasoline engine cars of the time, but range anxiety due to the limited energy storage offered by contemporary battery technologies hindered any mass adoption of private electric vehicles throughout the 20th century. Internal combustion engines (both gasoline and diesel engines) were the dominant propulsion mechanisms for cars and trucks for about 100 years, but electricity-powered locomotion remained commonplace in other vehicle types, such as overhead line-powered mass transit vehicles like electric trains, trams, monorails and trolley buses, as well as various small, low-speed, short-range battery-powered personal vehicles such as mobility scooters.

Hybrid electric vehicles, where electric motors are used as a supplementary propulsion to internal combustion engines, became more widespread in the late 1990s. Plug-in hybrid electric vehicles, where electric motors can be used as the predominant propulsion rather than a supplement, did not see any mass production until the late 2000s, and battery electric cars did not become practical options for the consumer market until the 2010s.

Progress in batteries, electric motors and power electronics has made electric cars more feasible than they were during the 20th century. As a means of reducing tailpipe emissions of carbon dioxide and other pollutants, and to reduce use of fossil fuels, government incentives are available in many areas to promote the adoption of electric cars and trucks.

History

Electric motive power started in 1827 when Hungarian priest Ányos Jedlik built the first crude but viable electric motor; the next year he used it to power a small model car. In 1835, Professor Sibrandus Stratingh of the University of Groningen, in the Netherlands, built a small-scale electric car, and sometime between 1832 and 1839, Robert Anderson of Scotland invented the first crude electric carriage, powered by non-rechargeable primary cells. American blacksmith and inventor Thomas Davenport built a toy electric locomotive, powered by a primitive electric motor, in 1835. In 1838, a Scotsman named Robert Davidson built an electric locomotive that attained a speed of four miles per hour (6 km/h). In England, a patent was granted in 1840 for the use of rails as conductors of electric current, and similar American patents were issued to Lilley and Colten in 1847.

The first mass-produced electric vehicles appeared in America in the early 1900s. In 1902, the Studebaker Automobile Company entered the automotive business with electric vehicles, though it also entered the gasoline vehicles market in 1904. However, with the advent of cheap assembly line cars by Ford Motor Company, the popularity of electric cars declined significantly.

Due to lack of electricity grids and the limitations of storage batteries at that time, electric cars did not gain much popularity; however, electric trains gained immense popularity due to their economies and achievable speeds. By the 20th century, electric rail transport became commonplace due to advances in the development of electric locomotives. Over time their general-purpose commercial use reduced to specialist roles as platform trucks, forklift trucks, ambulances, tow tractors, and urban delivery vehicles, such as the iconic British milk float. For most of the 20th century, the UK was the world's largest user of electric road vehicles.

Electrified trains were used for coal transport, as the motors did not use the valuable oxygen in the mines. Switzerland's lack of natural fossil resources forced the rapid electrification of their rail network. One of the earliest rechargeable batteries – the nickel-iron battery – was favored by Edison for use in electric cars.

EVs were among the earliest automobiles, and before the preeminence of light, powerful internal combustion engines (ICEs), electric automobiles held many vehicle land speed and distance records in the early 1900s. They were produced by Baker Electric, Columbia Electric, Detroit Electric, and others, and at one point in history outsold gasoline-powered vehicles. In 1900, 28 percent of the cars on the road in the US were electric. EVs were so popular that even President Woodrow Wilson and his secret service agents toured Washington, D.C., in their Milburn Electrics, which covered 60–70 miles (100–110 km) per charge.

Most producers of passenger cars opted for gasoline cars in the first decade of the 20th century, but electric trucks were an established niche well into the 1920s. A number of developments contributed to a decline in the popularity of electric cars. Improved road infrastructure required a greater range than that offered by electric cars, and the discovery of large reserves of petroleum in Texas, Oklahoma, and California led to the wide availability of affordable gasoline/petrol, making internal combustion powered cars cheaper to operate over long distances. Electric vehicles were often marketed as a women's luxury car, which may have been a stigma among male consumers. Also, internal combustion powered cars became ever-easier to operate thanks to the invention of the electric starter by Charles Kettering in 1912, which eliminated the need for a hand crank for starting a gasoline engine, and the noise emitted by ICE cars became more bearable thanks to the use of the muffler, which Hiram Percy Maxim had invented in 1897. As roads were improved outside urban areas, electric vehicle range could not compete with the ICE. Finally, the initiation of mass production of gasoline-powered vehicles by Henry Ford in 1913 significantly reduced the cost of gasoline cars as compared to electric cars.

In the 1930s, National City Lines, which was a partnership of General Motors, Firestone, and Standard Oil of California purchased many electric tram networks across the country to dismantle them and replace them with GM buses. The partnership was convicted of conspiring to monopolize the sale of equipment and supplies to their subsidiary companies, but was acquitted of conspiring to monopolize the provision of transportation services.

The Copenhagen Summit was held in 2009, in the midst of severe observable climate change brought on by human-made greenhouse gas emissions. During the summit, more than 70 countries developed plans to eventually reach net zero. For many countries, adopting more EVs will help reduce the use of gasoline.

Experimentation

In January 1990, the president of General Motors introduced the company's EV concept two-seater, the "Impact", at the Los Angeles Auto Show. That September, the California Air Resources Board mandated major-automaker sales of EVs, in phases starting in 1998. From 1996 to 1998 GM produced 1117 EV1s, 800 of which were made available through three-year leases.

Chrysler, Ford, GM, Honda, and Toyota also produced limited numbers of EVs for California drivers during this time period. In 2003, upon the expiration of GM's EV1 leases, GM discontinued them. The discontinuation has variously been attributed to:

* the auto industry's successful federal court challenge to California's zero-emissions vehicle mandate,
* a federal regulation requiring GM to produce and maintain spare parts for the few thousand EV1s and
* the success of the oil and auto industries' media campaign to reduce public acceptance of EVs.

A movie made on the subject in 2005–2006 was titled Who Killed the Electric Car? and released theatrically by Sony Pictures Classics in 2006. The film explores the roles of automobile manufacturers, oil industry, the U.S. government, batteries, hydrogen vehicles, and the general public, and each of their roles in limiting the deployment and adoption of this technology.

Ford released a number of their Ford Ecostar delivery vans into the market. Honda, Nissan and Toyota also repossessed and crushed most of their EVs, which, like the GM EV1s, had been available only by closed-end lease. After public protests, Toyota sold 200 of its RAV4 EVs; they later sold at over their original forty-thousand-dollar price. Later, BMW of Canada sold off a number of Mini EVs when their Canadian testing ended.

The production of the Citroën Berlingo Electrique stopped in September 2005. Zenn started production in 2006 but ended by 2009.

Reintroduction

During the late 20th and early 21st century, the environmental impact of the petroleum-based transportation infrastructure, along with the fear of peak oil, led to renewed interest in electric transportation infrastructure. EVs differ from fossil fuel-powered vehicles in that the electricity they consume can be generated from a wide range of sources, including fossil fuels, nuclear power, and renewables such as solar power and wind power, or any combination of those. Recent advancements in battery technology and charging infrastructure have addressed many of the earlier barriers to EV adoption, making electric vehicles a more viable option for a wider range of consumers.

The carbon footprint and other emissions of electric vehicles vary depending on the fuel and technology used for electricity generation. The electricity may be stored in the vehicle using a battery, flywheel, or supercapacitors. Vehicles using internal combustion engines usually only derive their energy from a single or a few sources, usually non-renewable fossil fuels. A key advantage of electric vehicles is regenerative braking, which recovers kinetic energy, typically lost during friction braking as heat, as electricity restored to the on-board battery.
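
The energy recoverable by regenerative braking follows from the familiar kinetic-energy formula E = 1/2 m v^2; a minimal sketch (the 1800 kg mass and the 60% recovery efficiency are assumed round numbers, not measured figures):

def regen_energy_kwh(mass_kg, speed_m_s, efficiency=0.6):
    # Kinetic energy = 0.5 * m * v^2 (joules); only a fraction of it
    # survives conversion losses on the way back to the battery.
    kinetic_j = 0.5 * mass_kg * speed_m_s ** 2
    return kinetic_j * efficiency / 3.6e6   # joules -> kilowatt-hours

print(round(regen_energy_kwh(1800, 27.8), 3))   # about 0.116 kWh from a ~100 km/h stop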


#2280 2024-08-31 22:07:14

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,086

Re: Miscellany

2280) Slipper / Flip-flops

Gist

A slipper is a kind of indoor shoe that slips easily on and off your foot. You may prefer to walk around barefoot unless it's really cold, in which case you wear slippers.

Slippers are cozy, and they're often warm too. A more old-fashioned kind of slipper was a dress shoe that slipped onto the foot, rather than being buckled or buttoned—like Cinderella's glass slipper. The word comes from the fact that you can slip a slipper on or off easily. It's related to the Old English slypescoh, literally "slip-shoe."

Summary

Slippers are a type of shoe, falling under the broader category of light footwear, that is easy to put on and take off and is intended to be worn indoors, particularly at home. They provide comfort and protection for the feet when walking indoors.

History

The recorded history of slippers can be traced back to the 12th century. In the West, the record can only be traced to 1478. The English word "slippers" (sclyppers) occurs from about 1478. English-speakers formerly also used the related term "pantofles" (from French pantoufle).

Slippers in China date from 4700 BCE; they were made of cotton or woven rush, had leather linings, and featured symbols of power, such as dragons.

Native American moccasins were also highly decorative. Such moccasins depicted nature scenes and were embellished with beadwork and fringing; their soft sure-footedness made them suitable for indoor use. Inuit and Aleut people made shoes from smoked hare-hide to protect their feet against the frozen ground inside their homes.

Fashionable Orientalism saw the introduction into the West of designs like the baboosh.

Victorian people needed such shoes to keep dust and gravel outside their homes. For Victorian ladies, slippers gave an opportunity to show off their needlepoint skills and to use embroidery as decoration.

Types

Types of slippers include:

* Open-heel slippers – usually made with a fabric upper layer that encloses the top of the foot and the toes, but leaves the heel open. These are often distributed in expensive hotels, included with the cost of the room.
* Closed slippers – slippers with a heel guard that prevents the foot from sliding out.
* Slipper boots – slippers meant to look like boots. Often favored by women, they are typically furry boots with a fleece or soft lining, and a soft rubber sole. Modeled after sheepskin boots, they may be worn outside.
* Sandal slippers – cushioned sandals with soft rubber or fabric soles, similar to Birkenstock's cushioned sandals.
* Evening slipper, also known as the "Prince Albert" slipper in reference to Albert, Prince Consort. It is made of velvet with leather soles and features a grosgrain bow or the wearer’s initials embroidered in gold.

Novelty animal-feet slippers

Some slippers are made to resemble something other than a slipper and are sold as a novelty item. The slippers are usually made from soft and colorful materials and may come in the shapes of animals, animal paws, vehicles, cartoon characters, etc.

Not all shoes with a soft fluffy interior are slippers. Any shoe with a rubber sole and laces is a normal outdoor shoe. In India, rubber chappals (flip-flops) are worn as indoor shoes.

In popular culture

The fictional character Cinderella is said to have worn glass slippers; in modern parlance, they would probably be called glass high heels. This motif was introduced in Charles Perrault's 1697 version of the fairy tale, "Cendrillon ou la petite pantoufle de verre" "Cinderella, or The Little Glass Slipper". For some years it was debated that this detail was a mistranslation and the slippers in the story were instead made of fur (French: vair), but this interpretation has since been discredited by folklorists.

A pair of ruby slippers worn by Judy Garland in The Wizard of Oz sold at Christie's in June 1988 for $165,000. The same pair was resold on May 24, 2000, for $666,000. On both occasions, they were the most expensive shoes from a film to be sold at auction.

In Hawaii and many islands of the Caribbean, "slippers" (or "slippahs") is the term used for flip-flops.

The term "house shoes" (elided into how-shuze) is common in the American South.

Details

Flip-flops are a type of light sandal-like shoe, typically worn as a form of casual footwear. They consist of a flat sole held loosely on the foot by a Y-shaped strap known as a toe thong that passes between the first and second toes and around both sides of the foot. This style of footwear has been worn by people of many cultures throughout the world, originating as early as the ancient Egyptians in 1500 BC. In the United States the modern flip-flop may have had its design taken from the traditional Japanese zōri after World War II, as soldiers brought them back from Japan.

Flip-flops became a prominent unisex summer footwear starting in the 1960s.

"Flip-flop" etymology and other names

The term flip-flop has been used in American and British English since the 1960s to describe inexpensive footwear consisting of a flat base, typically rubber, and a strap with three anchor points: between the big and second toes, then bifurcating to anchor on both sides of the foot. "Flip-flop" may be an onomatopoeia of the sound made by the sandals when walking in them.

Flip-flops are also called thongs (sometimes pluggers, single- or double- depending on construction) in Australia, jandals (originally a trademarked name derived from "Japanese sandals") in New Zealand, and slops or plakkies in South Africa and Zimbabwe.

In the Philippines, they are called tsinelas.

In India, they are called chappals (a term that traditionally referred to leather slippers). This is hypothesized to have come from the Telugu word ceppu, from Proto-Dravidian *keruppu, meaning "sandal".

In some parts of Latin America, flip-flops are called chanclas. Throughout the world, they are also known by a variety of other names, including slippers in the Bahamas, Hawai‘i, Jamaica and Trinidad and Tobago.

History

Thong sandals have been worn for thousands of years, as shown in images of them in ancient Egyptian murals from 4,000 BC. A pair found in Europe was made of papyrus leaves and dated to be approximately 1,500 years old. These early versions of flip-flops were made from a wide variety of materials. Ancient Egyptian sandals were made from papyrus and palm leaves. The Maasai people of Africa made them out of rawhide. In India, they were made from wood. In China and Japan, rice straw was used. The leaves of the sisal plant were used to make twine for sandals in South America, while the natives of Mexico used the yucca plant.

The Ancient Greeks and Romans wore versions of flip-flops as well. In Greek sandals, the toe strap was worn between the first and second toes, while Roman sandals had the strap between the second and third toes. These differ from the sandals worn by the Mesopotamians, with the strap between the third and fourth toes. In India, a related "toe knob" sandal was common, with no straps but instead a small knob located between the first and second toes. They are known as Padukas.

The modern flip-flop became popular in the United States as soldiers returning from World War II brought Japanese zōri with them. It caught on in the 1950s during the postwar boom and after the end of hostilities of the Korean War. As they became adopted into American popular culture, the sandals were redesigned and changed into the bright colors that dominated 1950s design. They quickly became popular due to their convenience and comfort, and were popular in beach-themed stores and as summer shoes. During the 1960s, flip-flops became firmly associated with the beach lifestyle of California. As such, they were promoted as primarily a casual accessory, typically worn with shorts, bathing suits, or summer dresses. As they became more popular, some people started wearing them for dressier or more formal occasions.

In 1962, Alpargatas S.A. marketed a version of flip-flops known as Havaianas in Brazil. By 2010, more than 150 million pairs of Havaianas were produced each year, and by 2019 production topped 200 million pairs per year. Prices range from under $5 for basics to more than $50 for high-end fashion models.

Flip-flops quickly became popular as casual footwear of young adults. Girls would often decorate their flip-flops with metallic finishes, charms, chains, beads, rhinestones, or other jewelry. Modern flip-flops are available in leather, suede, cloth or synthetic materials such as plastic. Platform and high-heeled variants of the sandals began to appear in the 1990s, and in the late 2010s, kitten heeled "kit-flops".

In the U.S., flip-flops with college colors and logos became common for fans to wear to intercollegiate games. In 2011, while vacationing in his native Hawaii, Barack Obama became the first President of the United States to be photographed wearing a pair of flip-flops. The Dalai Lama of Tibet is also a frequent wearer of flip-flops and has met with several U.S. presidents, including George W. Bush and Barack Obama, while wearing the sandals.

While exact sales figures for flip-flops are difficult to obtain due to the large number of stores and manufacturers involved, the Atlanta-based company Flip Flop Shops claimed that the shoes were responsible for a $20 billion industry in 2009. Furthermore, sales of flip-flops exceeded those of sneakers for the first time in 2006. If these figures are accurate, it is remarkable considering the low cost of most flip-flops.

Design and custom

The modern flip-flop has a straightforward design, consisting of a thin sole with two straps running in a Y shape from the sides of the foot to the gap between the big toe and the one beside it. Flip-flops are made from a wide variety of materials, as were the ancient thong sandals. The modern sandals are made of more modern materials, such as rubber, foam, plastic, leather, suede, and even fabric. Flip-flops made of polyurethane have caused some environmental concerns; because polyurethane is a number 7 resin, they cannot easily be recycled, and they persist in landfills for a very long time. In response to these concerns, some companies have begun selling flip-flops made from recycled rubber, such as that from used bicycle tires, or even hemp, and some offer a recycling program for used flip-flops.

Because of the strap between the toes, flip-flops are typically not worn with socks. In colder weather, however, some people wear flip-flops with toe socks or merely pull standard socks forward and bunch them up between the toes. The Japanese commonly wear tabi, a type of sock with a single slot for the thong, with their zōri.

Flip-flop health issues

Flip-flops provide the wearer with some mild protection from hazards on the ground, such as sharp rocks, splintery wooden surfaces, hot sand at the beach, broken glass, or even fungi and wart-causing viruses in locker rooms or community pool surfaces. However, walking for long periods in flip-flops can result in pain in the feet, ankles and lower legs or tendonitis.

The flip-flop straps may cause frictional problems, such as rubbing during walking that results in blisters, and the open-toed design may result in stubbed or even broken toes. In particular, individuals with flat feet or other foot conditions are advised to wear a shoe or sandal with better support.

The American Podiatric Medical Association strongly recommends that people not play sports or do any type of yard work, with or without power tools, including cutting the grass, while wearing flip-flops. There are reports of people who ran or jumped in flip-flops and suffered sprained ankles, fractures, and severe ligament injuries that required surgery.

Because flip-flops provide almost no protection from the sun on a part of the body that is heavily exposed and from which sunscreen is easily washed off, sunburn can be a risk for wearers.

Flip-flops in popular culture

For many Latin Americans, la chancla (the flip-flop), held or thrown, is known to be used as a tool of corporal punishment by mothers, similar to the use of slippers for the same purpose in Europe. Poor conduct in public may be punished with a smack on the head from a flip-flop. The flip-flop may also be thrown at a misbehaving child. For many children, even the threat of the mother reaching down to take off a flip-flop and hold it in her raised hand is considered enough to correct their behaviour. The notoriety of the practice has become an Internet meme among Latin Americans and Hispanic and Latino immigrants to the United States. In recent years, the practice has been increasingly condemned as physically abusive. One essay, "The Meaning of Chancla: Flip Flops and Discipline", seeks to end "chancla culture" in disciplining children.

In India, a chappal is traditionally a leather slipper, but the term has also come to include flip-flops. Throwing a chappal became a video trope known as the "flying chappal", and "flying chappal received" became an expression used by adults to acknowledge that they had been verbally chastised by their parents or other elders.

Flip-flops are "tsinelas" in the Philippines, derived from the Spanish "chinela" (for slipper), and are used to discipline children, but with no mention of throwing. And children play Tumbang preso, which involves trying to knock over a can with thrown flip-flops.

When the Los Angeles–based Angel City FC and San Diego Wave FC joined the National Women's Soccer League in 2022, a leader in an Angel City supporters' group called the new regional rivalry La Chanclásico as a nod to the region's Hispanic heritage. The rivalry name combines chancla with clásico ("classic"), used in Spanish to describe many sports rivalries. The Chanclásico name quickly caught on with both fanbases, and before the first game between the teams, the aforementioned Angel City supporter created a rivalry trophy consisting of a flip-flop mounted on a trophy base and covered with gold spray paint. The rivalry name was effectively codified via a tweet from Wave and US national team star Alex Morgan.

As part of the Q150 celebrations in 2009, marking the first 150 years of Queensland, Australia, the Queensland Government published a list of 150 cultural icons of Queensland, representing the people, places, events, and things that were significant to Queensland's first 150 years. Thongs (as they are known in Queensland) were included among the Q150 Icons.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2281 2024-09-01 16:05:55

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,086

Re: Miscellany

2281) Pediatrics

Gist

Pediatrics is the branch of medicine dealing with the health and medical care of infants, children, and adolescents from birth up to the age of 18. The word "paediatrics" means "healer of children"; it is derived from two Greek words: pais (child) and iatros (doctor or healer).

Summary

Pediatrics is the branch of medicine dealing with the health and medical care of infants, children, and adolescents from birth up to the age of 18. The word "paediatrics" means "healer of children"; it is derived from two Greek words: pais (child) and iatros (doctor or healer). Paediatrics is a relatively new medical specialty, developing only in the mid-19th century. Abraham Jacobi (1830–1919) is known as the father of paediatrics.

What does a pediatrician do?

A paediatrician is a child's physician who provides not only medical care for children who are acutely or chronically ill but also preventive health services for healthy children. A paediatrician manages the physical, mental, and emotional well-being of the children under their care at every stage of development, in both sickness and health.

Aims of pediatrics

The aims of paediatrics are to reduce infant and child death rates, control the spread of infectious disease, promote healthy lifestyles for a long disease-free life, and help ease the problems of children and adolescents with chronic conditions.

Paediatricians diagnose and treat several conditions among children, including:

* injuries
* infections
* genetic and congenital conditions
* cancers
* organ diseases and dysfunctions

Paediatrics is concerned not only with the immediate management of the ill child but also with the long-term effects on quality of life, disability, and survival. Paediatricians are involved with the prevention, early detection, and management of problems including:

* developmental delays and disorders
* behavioral problems
* functional disabilities
* social stresses
* mental disorders including depression and anxiety disorders

Collaboration with other specialists

Paediatrics is a collaborative specialty. Paediatricians need to work closely with other medical specialists and healthcare professionals and subspecialists of paediatrics to help children with problems.

Details

Pediatrics (American English) also spelled paediatrics or pædiatrics (British English), is the branch of medicine that involves the medical care of infants, children, adolescents, and young adults. In the United Kingdom, paediatrics covers patients until the age of 18. The American Academy of Pediatrics recommends people seek pediatric care through the age of 21, but some pediatric subspecialists continue to care for adults up to 25. Worldwide, the age limits of pediatrics have been trending upward year after year. A medical doctor who specializes in this area is known as a pediatrician, or paediatrician. The word pediatrics and its cognates mean "healer of children", derived from the two Greek words: παῖς (pais "child") and ἰατρός (iatros "doctor, healer"). Pediatricians work in clinics, research centers, universities, general hospitals and children's hospitals, including those who practice pediatric subspecialties (e.g. neonatology requires resources available in a NICU).

History

The earliest mentions of child-specific medical problems appear in the Hippocratic Corpus, published in the fifth century B.C., including the famous treatise On the Sacred Disease. These publications discussed topics such as childhood epilepsy and premature births. From the first to fourth centuries A.D., the Greek philosophers and physicians Celsus, Soranus of Ephesus, Aretaeus, Galen, and Oribasius also discussed specific illnesses affecting children in their works, such as rashes, epilepsy, and meningitis. Hippocrates, Aristotle, Celsus, Soranus, and Galen already understood the differences in growing and maturing organisms that necessitated different treatment: Ex toto non sic pueri ut viri curari debent ("In general, boys should not be treated in the same way as men"). Some of the oldest traces of pediatrics can be found in ancient India, where children's doctors were called kumara bhrtya.

Even though some pediatric works existed during this time, they were scarce and rarely published due to a lack of knowledge in pediatric medicine. The Sushruta Samhita, an ayurvedic text composed during the sixth century BCE, contains a text about pediatrics. Another ayurvedic text from this period is the Kashyapa Samhita. A second-century AD manuscript by the Greek physician and gynecologist Soranus of Ephesus dealt with neonatal pediatrics. The Byzantine physicians Oribasius, Aëtius of Amida, Alexander Trallianus, and Paulus Aegineta contributed to the field, and the Byzantines also built brephotrophia (crèches). Islamic Golden Age writers served as a bridge for Greco-Roman and Byzantine medicine and added ideas of their own, especially Haly Abbas, Yahya Serapion, Abulcasis, Avicenna, and Averroes. The Persian philosopher and physician al-Razi (865–925), sometimes called the father of pediatrics, published a monograph on pediatrics titled Diseases in Children. Also among the first books about pediatrics was Libellus [Opusculum] de aegritudinibus et remediis infantium (1472; "Little Book on Children's Diseases and Treatment") by the Italian pediatrician Paolo Bagellardo. In sequence came Bartholomäus Metlinger's Ein Regiment der Jungerkinder (1473), Cornelius Roelans's (1450–1525) untitled Büchlein, or Latin compendium (1483), and Heinrich von Louffenburg's (1391–1460) Versehung des Leibs, written in 1429 and published in 1491; together these form the Pediatric Incunabula, four great medical treatises on children's physiology and pathology.

While more information about childhood diseases became available, there was little evidence that children received the same kind of medical care that adults did. It was during the seventeenth and eighteenth centuries that medical experts started offering specialized care for children. The Swedish physician Nils Rosén von Rosenstein (1706–1773) is considered to be the founder of modern pediatrics as a medical specialty, while his work The diseases of children, and their remedies (1764) is considered to be "the first modern textbook on the subject". However, it was not until the nineteenth century that medical professionals acknowledged pediatrics as a separate field of medicine. The first pediatric-specific publications appeared between the 1790s and the 1920s.

Etymology

The term pediatrics was first introduced in English in 1859 by Abraham Jacobi. In 1860, he became "the first dedicated professor of pediatrics in the world." Jacobi is known as the father of American pediatrics because of his many contributions to the field. He received his medical training in Germany and later practiced in New York City.

The first generally accepted pediatric hospital is the Hôpital des Enfants Malades (French: Hospital for Sick Children), which opened in Paris in June 1802 on the site of a previous orphanage. From its beginning, this famous hospital accepted patients up to the age of fifteen years, and it continues to this day as the pediatric division of the Necker-Enfants Malades Hospital, created in 1920 by merging with the nearby Necker Hospital, founded in 1778.

In other European countries, the Charité (a hospital founded in 1710) in Berlin established a separate Pediatric Pavilion in 1830, followed by similar institutions at Saint Petersburg in 1834, and at Vienna and Breslau (now Wrocław), both in 1837. In 1852 Britain's first pediatric hospital, the Hospital for Sick Children, Great Ormond Street, was founded by Charles West. The first children's hospital in Scotland opened in 1860 in Edinburgh. In the US, the first similar institutions were the Children's Hospital of Philadelphia, which opened in 1855, and then Boston Children's Hospital (1869). Subspecialties in pediatrics were created at the Harriet Lane Home at Johns Hopkins by Edwards A. Park.

Differences between adult and pediatric medicine

The body size differences are paralleled by maturation changes. The smaller body of an infant or neonate is substantially different physiologically from that of an adult. Congenital defects, genetic variance, and developmental issues are of greater concern to pediatricians than they often are to adult physicians. A common adage is that children are not simply "little adults". The clinician must take into account the immature physiology of the infant or child when considering symptoms, prescribing medications, and diagnosing illnesses.

Pediatric physiology directly impacts the pharmacokinetic properties of drugs that enter the body. The absorption, distribution, metabolism, and elimination of medications differ between developing children and grown adults. Despite completed studies and reviews, continual research is needed to better understand how these factors should affect the decisions of healthcare providers when prescribing and administering medications to the pediatric population.

Absorption

Many drug absorption differences between pediatric and adult populations revolve around the stomach. Neonates and young infants have increased stomach pH due to decreased acid secretion, thereby creating a more basic environment for drugs that are taken by mouth. Acid is essential to degrading certain oral drugs before systemic absorption. Therefore, the absorption of these drugs in children is greater than in adults due to decreased breakdown and increased preservation in a less acidic gastric space.

Children also have a slower rate of gastric emptying, which slows the rate of drug absorption.

Drug absorption also depends on specific enzymes that come in contact with the oral drug as it travels through the body. The supply of these enzymes increases as children continue to develop their gastrointestinal tract. Pediatric patients have underdeveloped drug-metabolizing proteins, which leads to decreased metabolism and increased serum concentrations of specific drugs. However, prodrugs experience the opposite effect, because enzymes are necessary for allowing their active form to enter systemic circulation.

Distribution

Percentage of total body water and extracellular fluid volume both decrease as children grow and develop with time. Pediatric patients thus have a larger volume of distribution than adults, which directly affects the dosing of hydrophilic drugs such as beta-lactam antibiotics like ampicillin. Thus, these drugs are administered at greater weight-based doses or with adjusted dosing intervals in children to account for this key difference in body composition.

Infants and neonates also have fewer plasma proteins. Highly protein-bound drugs therefore have fewer opportunities for protein binding, leaving more free drug in circulation and increasing distribution.
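
To make the dosing consequence concrete, here is a minimal Python sketch of the standard loading-dose relationship (loading dose = target plasma concentration × volume of distribution × body weight). All numbers are hypothetical placeholders, not clinical values; the sketch only illustrates why a larger per-kilogram volume of distribution translates into a larger per-kilogram dose.

# Illustrative pharmacokinetic sketch, not clinical guidance.
# Loading dose = target concentration (mg/L) x Vd (L/kg) x weight (kg).
# All numbers below are hypothetical placeholders.

def loading_dose_mg(target_mg_per_l: float, vd_l_per_kg: float,
                    weight_kg: float) -> float:
    """Dose needed to reach the target concentration immediately."""
    return target_mg_per_l * vd_l_per_kg * weight_kg

# A hydrophilic drug distributes into body water, so the per-kg Vd is
# assumed larger in the neonate (higher body-water fraction).
adult   = loading_dose_mg(target_mg_per_l=10, vd_l_per_kg=0.3, weight_kg=70)
neonate = loading_dose_mg(target_mg_per_l=10, vd_l_per_kg=0.5, weight_kg=3.5)

print(f"adult: {adult} mg total ({adult/70:.1f} mg/kg)")        # 210 mg, 3.0 mg/kg
print(f"neonate: {neonate} mg total ({neonate/3.5:.1f} mg/kg)") # 17.5 mg, 5.0 mg/kg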

Metabolism

Drug metabolism primarily occurs via enzymes in the liver and can vary according to which specific enzymes are affected in a specific stage of development. Phase I and Phase II enzymes have different rates of maturation and development, depending on their specific mechanism of action (i.e. oxidation, hydrolysis, acetylation, methylation, etc.). Enzyme capacity, clearance, and half-life are all factors that contribute to metabolism differences between children and adults. Drug metabolism can even differ within the pediatric population, separating neonates and infants from young children.

Elimination

Drug elimination is primarily facilitated via the liver and kidneys. In infants and young children, the larger relative size of their kidneys leads to increased renal clearance of medications that are eliminated through urine. In preterm neonates and infants, their kidneys are slower to mature and thus are unable to clear as much drug as fully developed kidneys. This can cause unwanted drug build-up, which is why it is important to consider lower doses and greater dosing intervals for this population. Diseases that negatively affect kidney function can also have the same effect and thus warrant similar considerations.
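
As a rough illustration of why slower clearance argues for longer dosing intervals, the following sketch uses the standard one-compartment relationship t½ = ln(2) × Vd / CL. The numbers are hypothetical, chosen only to show the direction of the effect.

import math

# One-compartment pharmacokinetics (illustrative values, not clinical
# data): half-life lengthens as clearance falls, so a drug lingers
# longer in a preterm neonate whose kidneys clear it more slowly.

def half_life_hours(vd_liters: float, clearance_l_per_h: float) -> float:
    """t1/2 = ln(2) * Vd / CL."""
    return math.log(2) * vd_liters / clearance_l_per_h

mature_kidneys  = half_life_hours(vd_liters=1.5, clearance_l_per_h=0.6)
preterm_kidneys = half_life_hours(vd_liters=1.5, clearance_l_per_h=0.2)

print(f"mature clearance:  t1/2 ~ {mature_kidneys:.1f} h")   # ~1.7 h
print(f"preterm clearance: t1/2 ~ {preterm_kidneys:.1f} h")  # ~5.2 h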

Pediatric autonomy in healthcare

A major difference between the practice of pediatric and adult medicine is that children, in most jurisdictions and with certain exceptions, cannot make decisions for themselves. The issues of guardianship, privacy, legal responsibility, and informed consent must always be considered in every pediatric procedure. Pediatricians often have to treat the parents and sometimes, the family, rather than just the child. Adolescents are in their own legal class, having rights to their own health care decisions in certain circumstances. The concept of legal consent combined with the non-legal consent (assent) of the child when considering treatment options, especially in the face of conditions with poor prognosis or complicated and painful procedures/surgeries, means the pediatrician must take into account the desires of many people, in addition to those of the patient.

History of pediatric autonomy

The term autonomy is traceable to ethical theory and law, where it denotes the capacity of individuals to make decisions based on their own reasoning. Hippocrates was the first to apply the concept in a medical setting. He created a code of ethics for doctors, the Hippocratic Oath, that highlighted the importance of putting patients' interests first, making autonomy for patients a top priority in health care.

In ancient times, society did not view pediatric medicine as essential or scientific. Experts considered professional medicine unsuitable for treating children. Children also had no rights. Fathers regarded their children as property, so their children's health decisions were entrusted to them. As a result, mothers, midwives, "wise women", and general practitioners treated the children instead of doctors. Since mothers could not rely on professional medicine to take care of their children, they developed their own methods, such as using alkaline soda ash to remove the vernix at birth and treating teething pain with opium or wine. The absence of proper pediatric care, rights, and laws in health care to prioritize children's health led to many of their deaths. Ancient Greeks and Romans sometimes even killed healthy female babies and infants with deformities since they had no adequate medical treatment and no laws prohibiting infanticide.

In the twentieth century, medical experts began to put more emphasis on children's rights. In 1989, in the United Nations Convention on the Rights of the Child, medical experts developed the Best Interest Standard of the Child to prioritize children's rights and best interests. This event marked the onset of pediatric autonomy. In 1995, the American Academy of Pediatrics (AAP) finally acknowledged the Best Interest Standard of the Child as an ethical principle for pediatric decision-making, and it is still used today.

Parental authority and current medical issues

The majority of the time, parents have the authority to decide what happens to their child. Philosopher John Locke argued that it is the responsibility of parents to raise their children and that God gave them this authority. In modern society, Jeffrey Blustein, modern philosopher and author of the book Parents and Children: The Ethics of Family, argues that parental authority is granted because the child requires parents to satisfy their needs. He believes that parental autonomy is more about parents providing good care for their children and treating them with respect than parents having rights. The researcher Kyriakos Martakis, MD, MSc, explains that research shows parental influence negatively affects children's ability to form autonomy. However, involving children in the decision-making process allows children to develop their cognitive skills and create their own opinions and, thus, decisions about their health. Parental authority affects the degree of autonomy the child patient has. As a result, in Argentina, the new National Civil and Commercial Code has enacted various changes to the healthcare system to encourage children and adolescents to develop autonomy. It has become more crucial to let children take accountability for their own health decisions.

In most cases, the pediatrician, parent, and child work as a team to make the best possible medical decision. The pediatrician has the right to intervene for the child's welfare and seek advice from an ethics committee. However, in recent studies, authors have denied that complete autonomy is present in pediatric healthcare. The same moral standards should apply to children as they do to adults. In support of this idea is the concept of paternalism, which negates autonomy when it is in the patient's interests. This concept aims to keep the child's best interests in mind regarding autonomy. Pediatricians can interact with patients and help them make decisions that will benefit them, thus enhancing their autonomy. However, radical theories that question a child's moral worth continue to be debated today. Authors often question whether the treatment and equality of a child and an adult should be the same. Author Tamar Schapiro notes that children need nurturing and cannot exercise the same level of authority as adults. Hence, the discussion of whether children are capable of making important health decisions continues to this day.

Modern advancements

According to the Subcommittee of Clinical Ethics of the Argentinean Pediatric Society (SAP), children can understand moral feelings at all ages and can make reasonable decisions based on those feelings. Therefore, children and teens are deemed capable of making their own health decisions when they reach the age of 13. Recent studies of children's decision-making have challenged that threshold, suggesting the age of 12 instead.

Technology has made several modern advancements that contribute to the future development of child autonomy, for example, unsolicited findings (U.F.s) of pediatric exome sequencing: findings that explain in greater detail a child's intellectual disability and predict to what extent it will affect the child in the future. Genetic and intellectual disorders in children can make them incapable of making moral decisions, so this kind of testing is viewed critically, because the child's future autonomy is at risk. It is still in question whether parents should request these types of testing for their children. Medical experts argue that it could endanger the autonomous rights the child will possess in the future. However, parents contend that genetic testing would benefit the welfare of their children, since it would allow them to make better health care decisions. Exome sequencing for children, and the decision to grant parents the right to request it, is a medical-ethics issue that many still debate today.

Additional Information

Pediatrics is a medical specialty dealing with the development and care of children and with the diagnosis and treatment of childhood diseases. The first important review of childhood illness, an anonymous European work called The Children’s Practice, dates from the 12th century. The specialized focus of pediatrics did not begin to emerge in Europe until the 18th century. The first specialized children’s hospitals, such as the London Foundling Hospital, established in 1745, were opened at this time. These hospitals later became major centres for training in pediatrics, which began to be taught as a separate discipline in medical schools by the middle of the 19th century.

The major focus of early pediatrics was the treatment of infectious diseases that affected children. Thomas Sydenham in Britain had led the way with the first accurate descriptions of measles, scarlet fever, and other diseases in the 17th century. Clinical studies of childhood diseases proliferated throughout the 18th and 19th centuries, culminating in one of the first modern textbooks of pediatrics, published by Frédéric Rilliet and Antoine Barthez in France in 1838–43, but there was little that could be done to cure these diseases until the end of the 19th century. As childhood diseases came under control through the combined efforts of pediatricians, immunologists, and public-health workers, the focus of pediatrics began to change, and early in the 20th century the first well-child clinics were established to monitor and study the normal growth and development of children. By the mid-20th century, the use of antibiotics and vaccines had all but eliminated most serious infectious diseases of childhood in the developed world, and infant and child mortality had fallen to the lowest levels ever. In the last half of the century, pediatrics again expanded to incorporate the study of behavioral and social as well as specifically medical aspects of child health.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2282 2024-09-02 18:27:58

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,086

Re: Miscellany

2282) Blood Bank

Gist

A blood bank is a place where blood is collected and stored before it is used for transfusions. Blood banking takes place in the lab. This is to make sure that donated blood and blood products are safe before they are used.

There are three kinds of blood banks: government, private, and those run by NGOs.

Safe blood transfusion is vital for treatment and emergency intervention. It can help an individual suffering from dangerous ailments and also supports various surgical procedures.

Charles R. Drew was an African American surgeon and researcher who pioneered in the field of blood transfusions and blood storage.

Summary

Blood banking is the process that takes place in the lab to make sure that donated blood, or blood products, are safe before they are used in blood transfusions and other medical procedures. Blood banking includes typing the blood for transfusion and testing for infectious diseases.

Facts about blood banking

According to the American Association of Blood Banks as of 2013:

* About 36,000 units of blood are needed every day.

* The number of blood units donated is about 13.6 million a year.

* About 6.8 million volunteers are blood donors each year.

Each unit of blood is broken down into components, such as red blood cells, plasma, cryoprecipitated AHF, and platelets. One unit of whole blood, once it's separated, may be transfused to several patients, each with different needs.

Annually, more than 21 million blood components are transfused.

Who are the blood donors?

Most blood donors are volunteers. However, sometimes a patient may want to donate blood a couple of weeks before undergoing surgery, so that his or her blood is available in case a blood transfusion is needed. Donating blood for yourself is called an autologous donation. Volunteer blood donors must meet certain criteria, including the following:

* Must be at least 16 years of age, or in accordance with state law   

* Must be in good health

* Must weigh at least 110 pounds

* Must pass the physical and health history exam given before donation

Some states permit people younger than 16 or 17 years to donate blood, with parental consent.

What tests are done in blood banking?

A standard set of tests is done in the lab once blood is donated, including, but not limited to, the following:

* Typing: ABO group (blood type)

* Rh typing (positive or negative antigen)

* Screening for any unexpected red blood cell antibodies that may cause problems in the recipient

Screening for current or past infections, including:

* Hepatitis viruses B and C

* Human immunodeficiency virus (HIV)

* Human T-lymphotropic viruses (HTLV) I and II

* Syphilis

* West Nile virus

* Chagas disease

Irradiation of blood cells is performed to disable any T-lymphocytes present in the donated blood. (Transfused T-lymphocytes can cause a reaction and, with repeated exposure to foreign cells, can also cause graft-versus-host problems.)

Leukocyte-reduced blood has been filtered to remove the white blood cells that contain antibodies that can cause fevers in the recipient of the transfusion. (These antibodies, with repeated transfusions, may also increase a recipient's risk of reactions to subsequent transfusions.)

What are the blood types?

According to the American Association of Blood Banks, distribution of blood types in the U.S. includes the following:

* O Rh-positive - 39%

* A Rh-positive - 31%

* B Rh-positive - 9%

* O Rh-negative - 9%

* A Rh-negative - 6%

* AB Rh-positive - 3%

* B Rh-negative - 2%

* AB Rh-negative - 1%

What are the components of blood?

While blood, or one of its components, may be transfused, each component serves many functions, including the following:

* Red blood cells. These cells carry oxygen to the tissues in the body and are commonly used in the treatment of anemia.

* Platelets. They help the blood to clot and are used in the treatment of leukemia and other forms of cancer.

* White blood cells. These cells help to fight infection, and aid in the immune process.

* Plasma. The watery, liquid part of the blood in which the red blood cells, white blood cells, and platelets are suspended. Plasma is needed to carry the many parts of the blood through the bloodstream. Plasma serves many functions, including the following:

** Helps to maintain blood pressure

** Provides proteins for blood clotting

** Balances the levels of sodium and potassium

* Cryoprecipitate AHF. The portion of the plasma that contains clotting factors that help to control bleeding.

Albumin, immune globulins, and clotting factor concentrates may also be separated and processed for transfusions.

Details

A blood bank is a center where blood gathered as a result of blood donation is stored and preserved for later use in blood transfusion. The term "blood bank" typically refers to a department of a hospital usually within a clinical pathology laboratory where the storage of blood product occurs and where pre-transfusion and blood compatibility testing is performed. However, it sometimes refers to a collection center, and some hospitals also perform collection. Blood banking includes tasks related to blood collection, processing, testing, separation, and storage.

For blood donation agencies in various countries, see list of blood donation agencies and list of blood donation agencies in the United States.

Types of blood transfused

Several types of blood transfusion exist:

* Whole blood, which is blood transfused without separation.
* Red blood cells or packed cells are transfused to patients with anemia or iron deficiency. They also help to improve the oxygen saturation of the blood, and can be stored at 2.0–6.0 °C for 35–45 days.
* Platelet transfusion is given to those with a low platelet count. Platelets can be stored at room temperature for up to 5–7 days. Single-donor platelets have a higher platelet count, but are somewhat more expensive than regular pooled platelets.
* Plasma transfusion is indicated for patients with liver failure, severe infections, or serious burns. Fresh frozen plasma can be stored at a very low temperature of −30 °C for up to 12 months. The separation of plasma from a donor's blood is called plasmapheresis.

History

While the first blood transfusions were made directly from donor to receiver before coagulation, it was discovered that by adding anticoagulant and refrigerating the blood it was possible to store it for some days, thus opening the way for the development of blood banks. John Braxton Hicks was the first to experiment with chemical methods to prevent the coagulation of blood at St Mary's Hospital, London, in the late 19th century. His attempts, using phosphate of soda, however, were unsuccessful.

The first non-direct transfusion was performed on March 27, 1914, by the Belgian doctor Albert Hustin, though this was a diluted solution of blood. The Argentine doctor Luis Agote used a much less diluted solution in November of the same year. Both used sodium citrate as an anticoagulant.

First World War

The First World War acted as a catalyst for the rapid development of blood banks and transfusion techniques. Inspired by the need to give blood to wounded soldiers in the absence of a donor, Francis Peyton Rous at the Rockefeller University (then The Rockefeller Institute for Medical Research) wanted to solve the problems of blood transfusion. With a colleague, Joseph R. Turner, he made two critical discoveries: blood typing was necessary to avoid blood clumping (agglutination), and blood samples could be preserved using chemical treatment. Their report of March 1915 on the search for a possible blood preservative described a failure: experiments with gelatine, agar, blood serum extracts, starch and beef albumin proved useless.

In June 1915, they made the first important report, in the Journal of the American Medical Association, that agglutination could be avoided if the blood samples of the donor and recipient were tested against each other beforehand. They developed a rapid and simple method for testing blood compatibility in which agglutination, and thus the suitability of the blood for transfusion, could be easily determined. They used sodium citrate to dilute the blood samples, and after mixing the recipient's and donor's blood in 9:1 and 1:1 parts, the blood would either clump or remain watery after 15 minutes. Their conclusion, with its medical advice, was clear:

[If] clumping is present in the 9:1 mixture and to a less degree or not at all in the 1:1 mixture, it is certain that the blood of the patient agglutinates that of the donor and may perhaps hemolyze it. Transfusion in such cases is dangerous. Clumping in the 1:1 mixture with little or none in the 9:1 indicates that the plasma of the prospective donor agglutinates the cells of the prospective recipient. The risk from transfusing is much less under such circumstances, but it may be doubted whether the blood is as useful as one which does not and is not agglutinated. A blood of the latter kind should always be chosen if possible.
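
Read as program logic, Rous and Turner's rule is essentially a two-input decision table. The sketch below is a loose paraphrase of the quoted advice, with the clumping observations simplified to booleans; it is a historical illustration, not a modern crossmatching procedure, and the final branch (clumping in both mixtures) is an assumption the quote does not spell out.

# Paraphrase of the 1915 Rous-Turner bedside compatibility rule.
# clump_9to1 / clump_1to1: was clumping visible after 15 minutes in the
# 9:1 and 1:1 donor-recipient mixtures? Historical illustration only.

def transfusion_advice(clump_9to1: bool, clump_1to1: bool) -> str:
    if clump_9to1 and not clump_1to1:
        # Patient's blood agglutinates the donor's: dangerous.
        return "dangerous: recipient agglutinates donor blood"
    if clump_1to1 and not clump_9to1:
        # Donor plasma agglutinates recipient cells: lesser risk, but a
        # non-agglutinating donor should be chosen if possible.
        return "lesser risk: prefer a non-agglutinating donor"
    if not clump_9to1 and not clump_1to1:
        return "no agglutination: donor acceptable"
    # Not spelled out in the quote; assumed to be rejected outright.
    return "clumping in both mixtures: reject donor"

print(transfusion_advice(clump_9to1=True, clump_1to1=False))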

Rous was well aware that the Austrian physician Karl Landsteiner had discovered blood types a decade earlier, but practical usage was not yet developed, as he described: "The fate of Landsteiner's effort to call attention to the practical bearing of the group differences in human bloods provides an exquisite instance of knowledge marking time on technique. Transfusion was still not done because (until at least 1915), the risk of clotting was too great." In February 1916, they reported in the Journal of Experimental Medicine the key method for blood preservation. They replaced the additive gelatine with a mixture of sodium citrate and glucose (dextrose) solutions and found: "in a mixture of 3 parts of human blood, 2 parts of isotonic citrate solution (3.8 per cent sodium citrate in water), and 5 parts of isotonic dextrose solution (5.4 per cent dextrose in water), the cells remain intact for about 4 weeks." A separate report indicates that the use of citrate-saccharose (sucrose) could maintain blood cells for two weeks. They noticed that the preserved blood was just like fresh blood and that it would "function excellently when reintroduced into the body." The use of sodium citrate with sugar, sometimes known as the Rous-Turner solution, was the main discovery that paved the way for the development of various blood preservation methods and of blood banks.

Canadian Lieutenant Lawrence Bruce Robertson was instrumental in persuading the Royal Army Medical Corps (RAMC) to adopt the use of blood transfusion at the Casualty Clearing Stations for the wounded. In October 1915, Robertson performed his first wartime transfusion with a syringe to a patient who had multiple shrapnel wounds. He followed this up with four subsequent transfusions in the following months, and his success was reported to Sir Walter Morley Fletcher, director of the Medical Research Committee.

Robertson published his findings in the British Medical Journal in 1916, and, with the help of a few like-minded individuals (including the eminent physician Edward William Archibald), was able to persuade the British authorities of the merits of blood transfusion. Robertson went on to establish the first blood transfusion apparatus at a Casualty Clearing Station on the Western Front in the spring of 1917.

Oswald Hope Robertson, a medical researcher and U.S. Army officer, worked with Rous at the Rockefeller between 1915 and 1917, and learned the blood matching and preservation methods. He was attached to the RAMC in 1917, where he was instrumental in establishing the first blood banks, with soldiers as donors, in preparation for the anticipated Third Battle of Ypres. He used sodium citrate as the anticoagulant, and the blood was extracted from punctures in the vein, and was stored in bottles at British and American Casualty Clearing Stations along the Front. He also experimented with preserving separated red blood cells in iced bottles. Geoffrey Keynes, a British surgeon, developed a portable machine that could store blood to enable transfusions to be carried out more easily.

Expansion

In 1925, Alexander Bogdanov established a scientific institute in Moscow to research the effects of blood transfusion.

The world's first blood donor service was established in 1921 by the secretary of the British Red Cross, Percy Lane Oliver. Volunteers were subjected to a series of physical tests to establish their blood group. The London Blood Transfusion Service was free of charge and expanded rapidly. By 1925, it was providing services for almost 500 patients and it was incorporated into the structure of the British Red Cross in 1926. Similar systems were established in other cities including Sheffield, Manchester and Norwich, and the service's work began to attract international attention. Similar services were established in France, Germany, Austria, Belgium, Australia and Japan.

Vladimir Shamov and Sergei Yudin in the Soviet Union pioneered the transfusion of cadaveric blood from recently deceased donors. Yudin performed such a transfusion successfully for the first time on March 23, 1930, and reported his first seven clinical transfusions with cadaveric blood at the Fourth Congress of Ukrainian Surgeons at Kharkiv in September. Also in 1930, Yudin organized the world's first blood bank at the Nikolay Sklifosovsky Institute, which set an example for the establishment of further blood banks in different regions of the Soviet Union and in other countries. By the mid-1930s the Soviet Union had set up a system of at least 65 large blood centers and more than 500 subsidiary ones, all storing "canned" blood and shipping it to all corners of the country.

One of the earliest blood banks was established by Frederic Durán-Jordà during the Spanish Civil War in 1936. Duran joined the Transfusion Service at the Barcelona Hospital at the start of the conflict, but the hospital was soon overwhelmed by the demand for blood and the paucity of available donors. With support from the Department of Health of the Spanish Republican Army, Duran established a blood bank for the use of wounded soldiers and civilians. The 300–400 ml of extracted blood was mixed with 10% citrate solution in a modified Duran Erlenmeyer flask. The blood was stored in sterile glass containers under pressure at 2 °C. During 30 months of work, the Transfusion Service of Barcelona registered almost 30,000 donors, and processed 9,000 liters of blood.

In 1937 Bernard Fantus, director of therapeutics at the Cook County Hospital in Chicago, established one of the first hospital blood banks in the United States. In creating a hospital laboratory that preserved, refrigerated and stored donor blood, Fantus originated the term "blood bank". Within a few years, hospital and community blood banks were established across the United States.

Frederic Durán-Jordà fled to Britain in 1938, and worked with Janet Vaughan at the Royal Postgraduate Medical School at Hammersmith Hospital to create a system of national blood banks in London. With the outbreak of war looking imminent in 1938, the War Office created the Army Blood Supply Depot (ABSD) in Bristol headed by Lionel Whitby and in control of four large blood depots around the country. British policy through the war was to supply military personnel with blood from centralized depots, in contrast to the approach taken by the Americans and Germans where troops at the front were bled to provide required blood. The British method proved to be more successful at adequately meeting all requirements and over 700,000 donors were bled over the course of the war. This system evolved into the National Blood Transfusion Service established in 1946, the first national service to be implemented.

Medical advances

A blood collection program was initiated in the US in 1940 and Edwin Cohn pioneered the process of blood fractionation. He worked out the techniques for isolating the serum albumin fraction of blood plasma, which is essential for maintaining the osmotic pressure in the blood vessels, preventing their collapse.

The use of blood plasma as a substitute for whole blood and for transfusion purposes was proposed as early as 1918, in the correspondence columns of the British Medical Journal, by Gordon R. Ward. At the onset of World War II, liquid plasma was used in Britain. A large project, known as "Blood for Britain", began in August 1940 to collect blood in New York City hospitals for the export of plasma to Britain. A dried plasma package was developed, which reduced breakage and made the transportation, packaging, and storage much simpler.

The resulting dried plasma package came in two tin cans containing 400 cc bottles. One bottle contained enough distilled water to reconstitute the dried plasma contained within the other bottle. In about three minutes, the plasma would be ready to use and could stay fresh for around four hours. Charles R. Drew was appointed medical supervisor, and he was able to transform the test tube methods into the first successful mass production technique.

Another important breakthrough came in 1939–40 when Karl Landsteiner, Alex Wiener, Philip Levine, and R.E. Stetson discovered the Rh blood group system, which was found to be the cause of the majority of transfusion reactions up to that time. Three years later, the introduction by J.F. Loutit and Patrick L. Mollison of acid-citrate-dextrose (ACD) solution, which reduced the volume of anticoagulant, permitted transfusions of greater volumes of blood and allowed longer-term storage.

Carl Walter and W.P. Murphy Jr. introduced the plastic bag for blood collection in 1950. Replacing breakable glass bottles with durable plastic bags allowed for the evolution of a collection system capable of safe and easy preparation of multiple blood components from a single unit of whole blood.

The introduction in 1979 of an anticoagulant preservative, CPDA-1, extended the shelf life of stored blood up to 42 days, which increased the blood supply and facilitated resource-sharing among blood banks.

Collection and processing

In the U.S., certain standards are set for the collection and processing of each blood product. "Whole blood" (WB) is the proper name for one defined product, specifically unseparated venous blood with an approved preservative added. Most blood for transfusion is collected as whole blood. Autologous donations are sometimes transfused without further modification; however, whole blood is typically separated (via centrifugation) into its components, with red blood cells (RBC) in solution being the most commonly used product. Units of WB and RBC are both kept refrigerated at 33.8 to 42.8 °F (1.0 to 6.0 °C), with maximum permitted storage periods (shelf lives) of 35 and 42 days respectively. RBC units can also be frozen when buffered with glycerol, but this is an expensive and time-consuming process, and is rarely done. Frozen red cells are given an expiration date of up to ten years and are stored at −85 °F (−65 °C).

The less-dense blood plasma is made into a variety of frozen components, and is labeled differently based on when it was frozen and what the intended use of the product is. If the plasma is frozen promptly and is intended for transfusion, it is typically labeled as fresh frozen plasma. If it is intended to be made into other products, it is typically labeled as recovered plasma or plasma for fractionation. Cryoprecipitate can be made from other plasma components. These components must be stored at 0 °F (−18 °C) or colder, but are typically stored at −22 °F (−30 °C). The layer between the red cells and the plasma is referred to as the buffy coat and is sometimes removed to make platelets for transfusion. Platelets are typically pooled before transfusion and have a shelf life of 5 to 7 days, or 3 days once the facility that collected them has completed their tests. Platelets are stored at room temperature (72 °F or 22 °C) and must be rocked/agitated. Since they are stored at room temperature in nutritive solutions, they are at relatively high risk for growing bacteria.
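
The storage rules above lend themselves to a simple lookup table. The sketch below restates the temperatures and shelf lives quoted in this section (using the 12-month figure given earlier for fresh frozen plasma and the 5-day floor for platelets); the table structure and function names are illustrative, not a regulatory standard.

from datetime import date, timedelta

# Shelf lives and temperatures as quoted in the text; the table itself
# only illustrates how a blood bank might encode the rules.
STORAGE_RULES = {
    # component:           (storage condition,          shelf life in days)
    "whole blood":         ("1.0-6.0 C, refrigerated",  35),
    "red blood cells":     ("1.0-6.0 C, refrigerated",  42),
    "frozen red cells":    ("-65 C, glycerol-buffered", 365 * 10),  # ~10 years
    "fresh frozen plasma": ("-18 C or colder",          365),
    "platelets":           ("22 C, agitated",           5),
}

def expiration_date(component: str, collected: date) -> date:
    """Expiry under the quoted shelf life, counted from collection."""
    _, shelf_life_days = STORAGE_RULES[component]
    return collected + timedelta(days=shelf_life_days)

print(expiration_date("red blood cells", date(2024, 9, 1)))  # 2024-10-13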

Some blood banks also collect products by apheresis. The most common component collected is plasma via plasmapheresis, but red blood cells and platelets can be collected by similar methods. These products generally have the same shelf life and storage conditions as their conventionally-produced counterparts.

Donors are sometimes paid; in the U.S. and Europe, most blood for transfusion is collected from volunteers while plasma for other purposes may be from paid donors.

Most collection facilities as well as hospital blood banks also perform testing to determine the blood type of patients and to identify compatible blood products, along with a battery of tests (e.g. disease screening) and treatments (e.g. leukocyte filtration) to ensure or enhance quality. The increasingly recognized problem of inadequate transfusion efficacy is also raising the profile of RBC viability and quality. Notably, U.S. hospitals spend more on dealing with the consequences of transfusion-related complications than on the combined costs of buying, testing/treating, and transfusing their blood.

Storage and management

Routine blood storage is 42 days or 6 weeks for stored packed red blood cells (also called "StRBC" or "pRBC"), by far the most commonly transfused blood product, and involves refrigeration but usually not freezing. There has been increasing controversy about whether a given product unit's age is a factor in transfusion efficacy, specifically whether "older" blood directly or indirectly increases the risk of complications. Studies have not been consistent in answering this question, with some showing that older blood is indeed less effective and others showing no such difference; nevertheless, as storage time remains the only available way to estimate quality status or loss, a first-in-first-out inventory management approach is standard at present. It is also important to consider that storage results vary widely between donors, which, combined with the limited quality testing available, poses challenges to clinicians and regulators seeking reliable indicators of quality for blood products and storage systems.

Transfusions of platelets are comparatively far less numerous, but they present unique storage/management issues. Platelets may only be stored for 7 days, due largely to their greater potential for contamination, which is in turn due largely to a higher storage temperature.

RBC storage lesion

Insufficient transfusion efficacy can result from red blood cell (RBC) blood product units damaged by so-called storage lesion—a set of biochemical and biomechanical changes which occur during storage. With red cells, this can decrease viability and ability for tissue oxygenation. Although some of the biochemical changes are reversible after the blood is transfused, the biomechanical changes are less so, and rejuvenation products are not yet able to adequately reverse this phenomenon.

Current regulatory measures are in place to minimize RBC storage lesion—including a maximum shelf life (currently 42 days), a maximum auto-hemolysis threshold (currently 1% in the US), and a minimum level of post-transfusion RBC survival in vivo (currently 75% after 24 hours). However, all of these criteria are applied in a universal manner that does not account for differences among units of product; for example, testing for the post-transfusion RBC survival in vivo is done on a sample of healthy volunteers, and then compliance is presumed for all RBC units based on universal (GMP) processing standards. RBC survival does not guarantee efficacy, but it is a necessary prerequisite for cell function, and hence serves as a regulatory proxy. Opinions vary as to the best way to determine transfusion efficacy in a patient in vivo. In general, there are not yet any in vitro tests to assess quality deterioration or preservation for specific units of RBC blood product prior to their transfusion, though there is exploration of potentially relevant tests based on RBC membrane properties such as erythrocyte deformability and erythrocyte fragility (mechanical).

Many physicians have adopted a so-called "restrictive protocol"—whereby transfusion is held to a minimum—due in part to the noted uncertainties surrounding storage lesion, in addition to the very high direct and indirect costs of transfusions, along with the increasing view that many transfusions are inappropriate or use too many RBC units.

Platelet storage lesion

Platelet storage lesion is a very different phenomenon from RBC storage lesion, due largely to the different functions of the products and purposes of the respective transfusions, along with different processing issues and inventory management considerations.

Alternative inventory and release practices

Although, as noted, the primary inventory-management approach is first in, first out (FIFO), to minimize product expiration, there are some deviations from this policy, both in current practice and under research. For example, exchange transfusion of RBC in neonates calls for use of blood product that is five days old or less, to "ensure" optimal cell function. Also, some hospital blood banks will attempt to accommodate physicians' requests to provide low-aged RBC product for certain kinds of patients (e.g. cardiac surgery).

More recently, novel approaches are being explored to complement or replace FIFO. One is to balance the desire to reduce average product age (at transfusion) with the need to maintain sufficient availability of non-outdated product, leading to a strategic blend of FIFO with last in, first out (LIFO).
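
A toy version of such a blend might issue the freshest unit while stock is comfortably above a safety threshold and fall back to oldest-first as stock tightens, so that older units are still used before they expire. The threshold and the policy below are illustrative assumptions, not an actual hospital rule.

from datetime import date

# Toy FIFO/LIFO blend: freshest-first while stock exceeds a safety
# threshold, oldest-first otherwise. Illustrative assumption only.

def pick_unit(collection_dates: list[date], safety_threshold: int) -> date:
    if len(collection_dates) > safety_threshold:
        unit = max(collection_dates)   # LIFO: freshest unit
    else:
        unit = min(collection_dates)   # FIFO: oldest unit
    collection_dates.remove(unit)
    return unit

stock = [date(2024, 8, 1), date(2024, 8, 15), date(2024, 8, 28)]
print(pick_unit(stock, safety_threshold=2))  # ample stock -> 2024-08-28
print(pick_unit(stock, safety_threshold=2))  # stock tight -> 2024-08-01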

Long-term storage

"Long-term" storage for all blood products is relatively uncommon, compared to routine/short-term storage. Cryopreservation of red blood cells is done to store rare units for up to ten years. The cells are incubated in a glycerol solution which acts as a cryoprotectant ("antifreeze") within the cells. The units are then placed in special sterile containers in a freezer at very low temperatures. The exact temperature depends on the glycerol concentration.

Additional Information


Blood bank is an organization that collects, stores, processes, and transfuses blood. During World War I it was demonstrated that stored blood could safely be used, allowing for the development of the first blood bank in 1932. Before the first blood banks came into operation, a physician determined the blood types of the patient’s relatives and friends until the proper type was found, performed the crossmatch, bled the donor, and gave the transfusion to the patient. In the 1940s the discovery of many blood types and of several crossmatching techniques led to the rapid development of blood banking as a specialized field and to a gradual shift of responsibility for the technical aspects of transfusion from practicing physicians to technicians and clinical pathologists. The practicality of storing fresh blood and blood components for future needs made possible such innovations as artificial kidneys, heart-lung pumps for open-heart surgery, and exchange transfusions for infants with erythroblastosis fetalis.

Whole blood is donated and stored in units of about 450 ml (slightly less than one pint). Whole blood can be stored only for a limited time, but various components (e.g., red blood cells and plasma) can be frozen and stored for a year or longer. Therefore, most blood donations are separated and stored as components by the blood bank. These components include platelets to control bleeding; concentrated red blood cells to correct anemia; and plasma fractions, such as fibrinogen to aid clotting, immune globulins to prevent and treat a number of infectious diseases, and serum albumin to augment the blood volume in cases of shock. Thus, it is possible to serve the varying needs of five or more patients with a single blood donation.

Many blood banks face continual problems in obtaining sufficient donations. The chronic shortage of donors has been alleviated somewhat by the development of apheresis, a technique by which only a desired blood component is taken from the donor's blood, with the remaining fluid and blood cells immediately transfused back into the donor. This technique allows the collection of large amounts of a particular component, such as plasma or platelets, from a single donor.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2283 2024-09-03 00:08:58

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,086

Re: Miscellany

2283) Fireworks

Gist

Today, fireworks mark celebrations all over the world. From ancient China to the New World, fireworks have evolved considerably. The very first fireworks — gunpowder firecrackers — came from humble beginnings and didn't do much more than pop, but modern versions can create shapes, multiple colors and various sounds.

How fireworks work

Before diving into the history of fireworks, it is important to understand how they work. Each modern firework consists of an aerial shell. This is a tube that contains gunpowder and dozens of small pods. Each of the pods is called a "star." These stars measure about 1 to 1.5 inches (3 to 4 centimeters) in diameter, according to the American Chemical Society (ACS), and hold:

* Fuel
* An oxidizing agent
* A binder
* Metal salts or metal oxides for color

A firework also has a fuse that is lit to ignite the gunpowder. Each star makes one dot in the fireworks explosion. When the colorants are heated, their atoms absorb energy and then produce light as they lose excess energy. Different chemicals produce different amounts of energy, creating different colors.
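
As a rough worked illustration (standard physics, not from the text above): the energy E of an emitted photon is tied to its wavelength λ by E = hc/λ, where h is Planck's constant and c is the speed of light. A red strontium emission near 650 nm carries

E = (6.626 × 10^-34 J·s × 3.00 × 10^8 m/s) / (650 × 10^-9 m) ≈ 3.1 × 10^-19 J ≈ 1.9 eV,

while a green barium emission near 520 nm carries about 2.4 eV, so higher-energy transitions give shorter, bluer wavelengths.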

Summary

A firework is an explosive or combustible device used for display. Of ancient Chinese origin, fireworks evidently developed out of military rockets and explosive missiles, and they were (and still are) used in elaborate combinations for celebrations. During the Middle Ages, fireworks accompanied the spread of military explosives westward, and in Europe the military fireworks expert was pressed into service to conduct pyrotechnic celebrations of victory and peace. In the 19th century the introduction of new ingredients such as magnesium and aluminum greatly heightened the brilliance of such displays.

There are two main classes of fireworks: force-and-spark and flame. In force-and-spark compositions, potassium nitrate, sulfur, and finely ground charcoal are used, with additional ingredients that produce various types of sparks. In flame compositions, such as the stars that are shot out of rockets, potassium nitrate, salts of antimony, and sulfur may be used. For coloured fire, potassium chlorate or potassium perchlorate is combined with a metal salt that determines the colour.

The most popular form of firework, the rocket, is lifted into the sky by recoil from the jet of fire thrown out by its burning composition; its case is so designed as to produce maximum combustion and, thus, maximum thrust in its earliest stage.
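
In idealized terms (a standard rocket relation, not stated in the source): the thrust F equals the rate at which mass is expelled, dm/dt, times the exhaust velocity v_e, i.e. F = (dm/dt)·v_e. A case designed for maximum combustion early in the burn maximizes dm/dt, and hence thrust, precisely when the rocket is heaviest and needs it most.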

Details

Fireworks are low explosive pyrotechnic devices used for aesthetic and entertainment purposes. They are most commonly used in fireworks displays (also called a fireworks show or pyrotechnics), combining a large number of devices in an outdoor setting. Such displays are the focal point of many cultural and religious celebrations, though mismanagement could lead to fireworks accidents.

Fireworks take many forms to produce four primary effects: noise, light, smoke, and floating materials (confetti most notably). They may be designed to burn with colored flames and sparks including red, orange, yellow, green, blue, purple and silver. They are generally classified by where they perform, either 'ground' or 'aerial'. Aerial fireworks may have their own propulsion (skyrocket) or be shot into the air by a mortar (aerial shell).

Most fireworks consist of a paper or pasteboard tube or casing filled with the combustible material, often pyrotechnic stars. A number of these tubes or cases may be combined so as to make, when kindled, a great variety of sparkling shapes, often variously colored. A skyrocket is a common form of firework, although the first skyrockets were used in warfare. The aerial shell, however, is the backbone of today's commercial aerial display, and a smaller version for consumer use is known as the festival ball in the United States.

Fireworks were originally invented in China. China remains the largest manufacturer and exporter of fireworks in the world.

'Silent' fireworks displays are becoming popular due to concerns that the noise traumatizes pets, wildlife, and some humans. However, these are not a new type of firework and they are not completely silent. "Silent firework displays" refers to displays which simply exclude large, spectacular, noisy fireworks and make greater use of smaller, quieter devices.

History

The earliest fireworks came from China during the Song dynasty (960–1279). Fireworks were used to accompany many festivities. In China, pyrotechnicians were respected for their knowledge of complex techniques in creating fireworks and mounting firework displays.

During the Han dynasty (202 BC – 220 AD), people threw bamboo stems into a fire to produce an explosion with a loud sound. In later times, gunpowder packed into small containers was used to mimic the sounds of burning bamboo. Exploding bamboo stems and gunpowder firecrackers were interchangeably known as baozhu or baogan.  During the Song dynasty, people manufactured the first firecrackers comprising tubes made from rolled sheets of paper containing gunpowder and a fuse. They also strung these firecrackers together into large clusters, known as bian (lit. "whip") or bianpao (lit. "whip cannon"), so the firecrackers could be set off one by one in close sequence. By the 12th and possibly the 11th century, the term baozhang was used to specifically refer to gunpowder firecrackers. The first usage of the term was in the Dreams of the Glories of the Eastern Capital by Meng Yuanlao.

During the Song dynasty, common folk could purchase fireworks such as firecrackers from market vendors. Grand displays of fireworks were also known to be held. In 1110, according to the Dreams of the Glories of the Eastern Capital, a large fireworks display mounted by the military was held to entertain Emperor Huizong of Song (r. 1100–1125). The Qidong Yeyu states that a rocket-propelled firework called a dilaoshu went off near the Empress Dowager Gong Sheng and startled her during a feast held in her honor by her son Emperor Lizong of Song (r. 1224–1264). This type of firework was one of the earliest examples of rocket propulsion. Around 1280, a Syrian named Hasan al-Rammah wrote of rockets, fireworks, and other incendiaries, using terms that suggested he derived his knowledge from Chinese sources, such as his references to fireworks as "Chinese flowers".

Colored fireworks were developed from earlier (possibly Han dynasty or soon thereafter) Chinese application of chemical substances to create colored smoke and fire. Such application appears in the Huolongjing (14th century) and Wubeizhi (preface of 1621, printed 1628), which describe recipes, several of which used low-nitrate gunpowder, to create military signal smokes with various colors. In the Wubei Huolongjing, two formulas appear for firework-like signals, the sanzhangju and baizhanglian, that produce silver sparkles in the smoke. In the Huoxilüe by Zhao Xuemin, there are several recipes with low-nitrate gunpowder and other chemical substances to tint flames and smoke. These included, for instance, arsenical sulphide for yellow, copper acetate (verdigris) for green, lead carbonate for lilac-white, and mercurous chloride (calomel) for white. The Chinese pyrotechnics were described by the French author Antoine Caillot (1818): "It is certain that the variety of colours which the Chinese have the secret of giving to flame is the greatest mystery of their fireworks." Similarly, the English geographer Sir John Barrow (ca. 1797) wrote "The diversity of colours indeed with which the Chinese have the secret of cloathing fire seems to be the chief merit of their pyrotechny."

Fireworks were produced in Europe by the 14th century, becoming popular by the 17th century. Lev Izmailov, ambassador of Peter the Great, once reported from China: "They make such fireworks that no one in Europe has ever seen." In 1758, the Jesuit missionary Pierre Nicolas le Chéron d'Incarville, living in Beijing, wrote about the methods and composition of Chinese fireworks to the Paris Academy of Sciences, which published the account five years later. Amédée-François Frézier published his revised work Traité des feux d'artifice pour le spectacle (Treatise on Fireworks) in 1747 (originally 1706), covering the recreational and ceremonial uses of fireworks, rather than their military uses. Music for the Royal Fireworks was composed by George Frideric Handel in 1749 to celebrate the Peace treaty of Aix-la-Chapelle, which had been declared the previous year.

"Prior to the nineteenth century and the advent of modern chemistry they [fireworks] must have been relatively dull and unexciting." Bertholet in 1786 discovered that oxidations with potassium chlorate resulted in a violet emission. Subsequent developments revealed that oxidations with the chlorates of barium, strontium, copper, and sodium result in intense emission of bright colors. The isolation of metallic magnesium and aluminium marked another breakthrough as these metals burn with an intense silvery light.

Safety and environmental impact

Improper use of fireworks is dangerous, both to the person operating them (risks of burns and wounds) and to bystanders; in addition, they may start fires on landing. To prevent fireworks accidents, the use of fireworks is legally restricted in many countries. In such countries, display fireworks are restricted for use by professionals; smaller consumer versions may or may not be available to the public.

Effects on animals

Birds and animals, both domestic and wild, can be frightened by the noise of fireworks, causing them to run away, often into danger, or to hurt themselves on fences or in other ways in an attempt to escape the perceived threat.

The majority of dogs experience distress, fear and anxiety during fireworks. In 2016, following a petition signed by more than 100,000 Britons, the House of Commons of the United Kingdom debated a motion to restrict firework use.

Fireworks also affect birds, especially larger birds like geese and eagles. According to a study by the Max Planck Institute and the Netherlands Institute of Ecology, many birds abruptly leave their sleeping sites on New Year's Eve, and some fly up to 500 km non-stop to get away from human settlements. On average, about 1,000 times more birds are in flight on New Year's Eve than on other nights. Frightened birds may also abandon nests and not return to complete rearing their young. A scientific study from 2022 indicates that fireworks may have a lasting effect on birds, with many spending more time finding food in the weeks after New Year's Eve fireworks.

Pollution

Fireworks produce smoke and dust that may contain residues of heavy metals, sulfur-coal compounds and some low-concentration toxic chemicals. These by-products of fireworks combustion vary depending on the mix of ingredients of a particular firework. (The color green, for instance, may be produced by adding various compounds and salts of barium, some of which are toxic and some of which are not.) Some fishers have noticed, and reported to environmental authorities, that firework residues can hurt fish and other aquatic life because some residues may contain toxic compounds such as antimony sulfide or arsenic. This is a subject of much debate, because large-scale pollution from other sources makes it difficult to measure how much pollution comes specifically from fireworks. The possible toxicity of any fallout may also be affected by the amount of black powder used, the type of oxidizer, the colors produced and the launch method.

Perchlorate salts, when in solid form, dissolve and move rapidly in groundwater and surface water. Even in low concentrations in drinking water supplies, perchlorate ions are known to inhibit the uptake of iodine by the thyroid gland. As of 2010, there are no federal drinking water standards for perchlorates in the United States, but the US Environmental Protection Agency has studied the impacts of perchlorates on the environment as well as drinking water.

Several U.S. states have enacted drinking water standards for perchlorates, including Massachusetts in 2006. California's legislature enacted AB 826, the Perchlorate Contamination Prevention Act of 2003, requiring California's Department of Toxic Substances Control (DTSC) to adopt regulations specifying best management practices for perchlorate-containing substances. The Perchlorate Best Management Practices were adopted on 31 December 2005 and became operative on 1 July 2006. California issued drinking water standards in 2007. Several other states, including Arizona, Maryland, Nevada, New Mexico, New York, and Texas, have established non-enforceable advisory levels for perchlorates.

The courts have also taken action with regard to perchlorate contamination. For example, in 2003, a federal district court in California found that the Comprehensive Environmental Response, Compensation and Liability Act (CERCLA) applied because perchlorate is ignitable and therefore a "characteristic" hazardous waste.

Pollutants from fireworks raise concerns because of potential health risks associated with the products of combustion during the liquid phase and the solid phase after they have cooled as well as the gases produced, particularly the carbon monoxide and carbon dioxide. For persons with asthma or other respiratory conditions, the smoke from fireworks may aggravate existing health problems.

Pollution is also a concern because fireworks often contain heavy metals as a source of color. However, gunpowder smoke and the solid residues are basic, and as such the cumulative effect of fireworks on acid rain is uncertain. What is not disputed is that most consumer fireworks leave behind a considerable amount of solid debris, including both readily biodegradable components and nondegradable plastic items. Concerns over pollution, consumer safety, and debris have restricted the sale and use of consumer fireworks in many countries. Professional displays, on the other hand, remain popular around the world.

Others argue that alleged concern over pollution from fireworks constitutes a red herring, since the amount of contamination from fireworks is minuscule in comparison to emissions from sources such as the burning of fossil fuels. In the US, some states and local governments restrict the use of fireworks in accordance with the Clean Air Act which allows laws relating to the prevention and control of outdoor air pollution to be enacted.

Some companies within the U.S. fireworks industry claim they are working with Chinese manufacturers to reduce, and ultimately hope to eliminate, the pollutant perchlorate.

Competitions

Pyrotechnical competitions are held in many countries. Among them are the Montreal Fireworks Festival, an annual competition held in Montreal, Quebec, Canada; Le Festival d'Art Pyrotechnique, held in the summer annually at the Bay of Cannes in Côte d'Azur, France; and the Philippine International Pyromusical Competition, held in Manila, Philippines amongst the top fireworks companies in the world.

Clubs and Organizations

Enthusiasts in the United States have formed clubs which unite hobbyists and professionals. The groups provide safety instruction and organize meetings and private "shoots" at remote premises, where members shoot commercial fireworks as well as fire pieces of their own manufacture. Clubs secure permission to fire items otherwise banned by state or local ordinances. Competitions among members and between clubs, demonstrating everything from single shells to elaborate displays choreographed to music, are held. One of the oldest clubs is Crackerjacks, Inc., organized in 1976 in the Eastern Seaboard region.

Founded in 1969 and based in the US, the Pyrotechnics Guild International, Inc. (PGI) has an international membership and hosts an annual convention at which some of the world's biggest fireworks displays occur. Aside from the nightly firework shows, one of the most popular events of the convention is a unique competition in which individual classes of hand-built fireworks are judged, ranging from simple fireworks rockets to extremely large and complex aerial shells. Some of the biggest, most intricate fireworks displays in the United States take place during the convention week.

Additional Information:

How do fireworks function?

The explosive, colourful displays made by fireworks are the result of several different chemical reactions. There are many different types of fireworks. One of the most common types of commercial firework, often used in public fireworks displays, functions much like a rocket.

To set off a firework, the user lights a fuse. The heat travels along the fuse until it reaches the bottom of the main part of the firework, sometimes called the shell. This ignites the lift charge, which is made from black powder—a type of gunpowder—located at the bottom of the shell.

When ignited, the black powder reacts to create hot gases and lots of energy. These forces launch the shell out of the tube it is sitting in, also known as the mortar. The shell is filled with small pellets, known as stars. Once the firework reaches a certain height, a second fuse, sometimes called the timed fuse, ignites and activates the burst charge. This sets off the stars within the firework, which explode into a dazzling display of colours, sounds and other effects.

The appearance of each firework depends on what type of stars it contains, and on the size and amount of these pellets. Some stars contain metal salts, which produce brilliant colours, while others contain different chemical compounds that cause dazzling light effects, like strobing, sparkling and more.

Some stars even include chemicals that cause special sound effects. Potassium chlorate results in a louder sound, while the use of bismuth creates a crackling or popping effect. Other compounds can be packed tightly into a tube to create a slow burn. The result is a slow release of gas that creates a whistling sound within the tube.

Colourful chemistry

The stars inside fireworks are made of metal salts, which are powdered combinations of metal and other chemical components.

When the stars ignite, the metal particles (in the metal salts) absorb a huge amount of energy. Once they begin to cool, the particles emit this extra energy in the form of light. The colour of this light depends on the type of metal (a toy lookup table of these pairings is sketched after the list below):

Strontium: Red
Calcium: Orange
Sodium: Yellow
Barium: Green
Copper: Blue
Strontium + Copper: Purple
Magnesium, Aluminum + Titanium: White
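
The metal-to-colour pairings above amount to a simple lookup table. Below is a minimal Python sketch, purely illustrative (the table and function names are invented for this example, not taken from any real pyrotechnics software):

# Flame colours produced by common star colorants, per the list above.
FLAME_COLOURS = {
    "strontium": "red",
    "calcium": "orange",
    "sodium": "yellow",
    "barium": "green",
    "copper": "blue",
    "magnesium": "white",
    "aluminium": "white",
    "titanium": "white",
}

def star_colour(metals):
    """Return the flame colour for a list of metals; mixtures combine effects."""
    if set(metals) == {"strontium", "copper"}:
        return "purple"  # the mixed case listed above
    colours = {FLAME_COLOURS.get(m, "unknown") for m in metals}
    return " + ".join(sorted(colours))

print(star_colour(["barium"]))               # green
print(star_colour(["strontium", "copper"]))  # purple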

Did you know?

The first fireworks were made from gunpowder packed into long, narrow bamboo tubes. The resulting boom was thought to scare away evil spirits!

Anatomy of a Firework

Mortar: This tall cylinder holds the shell until it is launched.

Shell: This often comes in the form of a paper sphere packed as two halves. It is filled with stars designed to produce a specific effect in the sky. The bottom of the shell contains a lift charge that launches the shell out of the mortar.

Fuse: This fuse carries heat to activate the black powder.

Lift charge: Located in the bottom of the shell, this charge is made from black powder. When ignited, it launches the shell out of the mortar and into the sky.

Black powder: Invented in China over 1000 years ago, this is a type of gunpowder made from 75% potassium nitrate (saltpeter), 15% charcoal and 10% sulfur (a quick worked example of these proportions follows this list).

Timed fuse: This fuse activates the burst charge within the firework.

Burst charge: Located at the centre of the shell, this ignites the stars within the firework.

Stars: These small pellets are made of chemicals, such as powdered metal salts. When ignited, they create the firework’s special sound and light effects.
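
As a quick worked example of the black powder proportions above (the batch size is mine, for illustration): a 1 kg batch would contain 750 g of potassium nitrate, 150 g of charcoal and 100 g of sulfur, since 75% + 15% + 10% = 100% of the mass.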



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Online

#2284 2024-09-03 21:15:03

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,086

Re: Miscellany

2284) Balcony

Gist

A balcony is a porch or platform that extends from an upper floor of a building. Your apartment might have a balcony with a view of a city park.

Most balconies have railings around them to protect people from tumbling off, and many balconies provide an interesting view. You might linger on the balcony of your hotel room in Hawaii, enjoying the warm air and the distant glimpse of the ocean it gives you. Balcony comes from the Italian balcone, which in turn comes from balco, or "scaffold." The root is most likely Germanic, possibly related to the Old English balca, "beam or ridge."

Summary

A balcony is an external extension of an upper floor of a building, enclosed up to a height of about three feet (one metre) by a solid or pierced screen, by balusters (see also balustrade), or by railings. In the medieval and Renaissance periods, balconies were supported by corbels made out of successive courses of stonework, or by large wooden or stone brackets. Since the 19th century, supports of cast iron, reinforced concrete, and other materials have become common.

The balcony serves to enlarge the living space and range of activities possible in a dwelling without a garden or lawn. In many apartment houses the balcony is partly recessed to provide for both sunshine and shelter or shade. (In Classical architecture a balcony that is fully recessed or covered by its own roof is described as a loggia.) In hot countries a balcony allows a greater movement of air inside the building, as the doors opening onto it are usually louvered.

From classical Rome to the Victorian period, balconies on public buildings were places from which speeches could be made or crowds exhorted. In Italy, where there are innumerable balconies and loggias, the best known is that at St. Peter’s in Rome from which the pope gives his blessing.

In Islāmic countries the faithful are called to prayer from the top balcony of a minaret. In Japanese architecture, based on wooden structures, a balcony is provided around each, or part of each, story.

Internal balconies, also called galleries, were constructed in Gothic churches to accommodate singers. In larger halls during the Middle Ages they were provided for minstrels. With the Renaissance development of the theatre, balconies with sloped floors, allowing more and more spectators to have a clear view of the stage, were built in the auditorium.

Details

A balcony (from Italian: balcone, "scaffold") is a platform projecting from the wall of a building, supported by columns or console brackets, and enclosed with a balustrade, usually above the ground floor. They are commonly found on multi-level houses, apartments and cruise ships.

Types

The traditional Maltese balcony is a wooden, closed balcony projecting from a wall.

In contrast, a Juliet balcony does not protrude out of the building. It is usually part of an upper floor, with a balustrade only at the front, resembling a small loggia. A modern Juliet balcony often involves a metal barrier placed in front of a high window that can be opened. In the UK, the technical name for one of these was officially changed in August 2020 to a Juliet guarding.

Juliet balconies are named after William Shakespeare's Juliet who, in traditional staging of the play Romeo and Juliet, is courted by Romeo while she is on her balcony—although the play itself, as written, makes no mention of a balcony, but only of a window at which Juliet appears. Various types of balcony have been used in depicting this famous scene; however the 'balcony of Juliet' at Villa Capuleti in Verona is not a 'Juliet balcony', as it does indeed protrude from the wall of the villa.

Functions

A unit with a regular balcony will have doors that open onto a small patio with railings, a small patio garden or skyrise greenery. A French balcony is a false balcony, with doors that open to a railing with a view of the courtyard or the surrounding scenery below.

Sometimes balconies are adapted for ceremonial purposes, e.g. that of St. Peter's Basilica at Rome, when the newly elected pope gives his blessing urbi et orbi after the conclave. Inside churches, balconies are sometimes provided for the singers, and in banqueting halls and the like for the musicians.

In theatres, the balcony was formerly a stage box, but the name is now usually confined to the part of the auditorium above the dress circle and below the gallery.

Balconies are part of the sculptural shape of the building allowing for irregular facades without the cost of irregular internal structures.

In addition to functioning as an outdoor space for a dwelling unit, balconies can also play a secondary role in building sustainability and indoor environmental quality (IEQ). Balconies have been shown to provide an overhang effect that helps prevent interior overheating by reducing solar gain, and may also have benefits in terms of blocking noise and improving natural ventilation within units.

Materials

Balconies can be made of various materials; historically, stone was the most commonly used. With the rise of modern technology, balconies can now be built of other materials, including glass and stainless steel, giving a building a durable, modern look.

Notable balconies

One of the most famous uses of a balcony is in traditional staging of the scene that has come to be known as the "balcony scene" in Shakespeare's tragedy Romeo and Juliet (though the scene makes no mention of a balcony, only of a window at which Juliet appears).



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Online

#2285 2024-09-04 00:27:15

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,086

Re: Miscellany

2285) Sphalerite

Gist

Sphalerite is an ore—a mineral of economic value—that was once mined in southeastern Kansas for its zinc content. It is also called zinc blende, blende, blackjack, and mock lead. Sphalerite crystals are usually shaped like triangular pyramids, with three sides and a base.

Summary

Sphalerite is zinc sulfide (ZnS), the chief ore mineral of zinc. It is found associated with galena in most important lead-zinc deposits. The name sphalerite is derived from a Greek word meaning “treacherous,” an allusion to the ease with which the dark-coloured, opaque varieties are mistaken for galena (a valuable lead ore). The alternative names blende and zinc blende, from the German word meaning “blind,” similarly allude to the fact that sphalerite does not yield lead.

In the United States the most important sphalerite deposits are those in the Mississippi River valley region. There sphalerite is found associated with chalcopyrite, galena, marcasite, and dolomite in solution cavities and brecciated (fractured) zones in limestone and chert. Similar deposits occur in Poland, Belgium, and North Africa. Sphalerite also is distributed worldwide as an ore mineral in hydrothermal vein deposits, in contact metamorphic zones, and in high-temperature replacement deposits.

Details

Sphalerite is a sulfide mineral with the chemical formula (Zn, Fe)S. It is the most important ore of zinc. Sphalerite is found in a variety of deposit types, but it is primarily in sedimentary exhalative, Mississippi-Valley type, and volcanogenic massive sulfide deposits. It is found in association with galena, chalcopyrite, pyrite (and other sulfides), calcite, dolomite, quartz, rhodochrosite, and fluorite.

German geologist Ernst Friedrich Glocker discovered sphalerite in 1847, naming it based on the Greek word sphaleros, meaning "deceiving", due to the difficulty of identifying the mineral.

In addition to zinc, sphalerite is an ore of cadmium, gallium, germanium, and indium. Miners have been known to refer to sphalerite as zinc blende, black-jack, and ruby blende. Marmatite is an opaque black variety with a high iron content.

The crystal structure of sphalerite

Sphalerite crystallizes in the face-centered cubic zincblende crystal structure, which is named after the mineral. This structure is a member of the hextetrahedral crystal class (space group F43m). In the crystal structure, both the sulfur and the zinc or iron ions occupy the points of a face-centered cubic lattice, with the two lattices displaced from each other such that the zinc and iron are tetrahedrally coordinated to the sulfur ions, and vice versa. Minerals similar to sphalerite include those in the sphalerite group, consisting of sphalerite, coloradoite, hawleyite, metacinnabar, stilleite and tiemannite. The structure is closely related to the structure of diamond. The hexagonal polymorph of sphalerite is wurtzite, and the trigonal polymorph is matraite. Wurtzite is the higher temperature polymorph, stable at temperatures above 1,020 °C (1,870 °F). The lattice constant for zinc sulfide in the zinc blende crystal structure is 0.541 nm. Sphalerite has been found as a pseudomorph, taking the crystal structure of galena, tetrahedrite, barite and calcite. Sphalerite can have spinel law twins, where the twin axis is [111].

The chemical formula of sphalerite is (Zn,Fe)S; the iron content generally increases with increasing formation temperature and can reach up to 40%. The material can be considered a ternary compound between the binary endpoints ZnS and FeS, with composition Zn_xFe_(1-x)S, where x ranges from 1 (pure ZnS) down to 0.6.
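
To make the composition notation concrete, here is a minimal Python sketch (mine, not from the source; note that the iron figure quoted above may be a mole percentage of FeS rather than a weight percentage) that converts the mole fraction x in Zn_xFe_(1-x)S into an iron weight percentage using standard atomic masses:

# Iron weight percent in Zn(x)Fe(1-x)S as a function of the mole fraction x.
M_ZN, M_FE, M_S = 65.38, 55.85, 32.06  # atomic masses in g/mol

def iron_weight_percent(x):
    """x = mole fraction of Zn on the metal site; sphalerite spans x = 1.0 down to about 0.6."""
    total_mass = x * M_ZN + (1 - x) * M_FE + M_S
    return 100 * (1 - x) * M_FE / total_mass

print(round(iron_weight_percent(1.0), 1))  # 0.0  -> pure ZnS
print(round(iron_weight_percent(0.6), 1))  # 23.9 -> iron-rich (marmatitic) sphalerite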

All natural sphalerite contains concentrations of various impurities, which generally substitute for zinc in the cation position in the lattice; the most common cation impurities are cadmium, mercury and manganese, but gallium, germanium and indium may also be present in relatively high concentrations (hundreds to thousands of ppm). Cadmium can replace up to 1% of zinc and manganese is generally found in sphalerite with high iron abundances. Sulfur in the anion position can be substituted for by selenium and tellurium. The abundances of these impurities are controlled by the conditions under which the sphalerite formed; formation temperature, pressure, element availability and fluid composition are important controls.

Properties:

Physical properties

Sphalerite possesses perfect dodecahedral cleavage, having six cleavage planes. In pure form, it is a semiconductor, but transitions to a conductor as the iron content increases. It has a hardness of 3.5 to 4 on the Mohs scale of mineral hardness.

It can be distinguished from similar minerals by its perfect cleavage, its distinctive resinous luster, and the reddish-brown streak of the darker varieties.

Optical properties

Pure zinc sulfide is a wide-bandgap semiconductor, with a bandgap of about 3.54 electron volts, which makes the pure material transparent in the visible spectrum. Increasing iron content will make the material opaque, while various impurities can give the crystal a variety of colors. In thin section, sphalerite exhibits very high positive relief and appears colorless to pale yellow or brown, with no pleochroism.
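
As a back-of-envelope check (mine, using the standard conversion λ ≈ 1240 eV·nm / E): a 3.54 eV bandgap corresponds to an absorption edge near 1240 / 3.54 ≈ 350 nm, in the ultraviolet. Visible photons (roughly 380 to 750 nm) carry less energy than the gap and pass through, which is why the pure material is transparent.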

The refractive index of sphalerite (as measured via sodium light, average wavelength 589.3 nm) ranges from 2.37 when it is pure ZnS to 2.50 at 40% iron content. Sphalerite is isotropic under cross-polarized light; however, it can exhibit birefringence if intergrown with its polymorph wurtzite, with the birefringence increasing from 0 (0% wurtzite) up to 0.022 (100% wurtzite).

Depending on the impurities, sphalerite will fluoresce under ultraviolet light. Sphalerite can also be triboluminescent, with a characteristic yellow-orange triboluminescence; specimens cut into end-slabs are ideal for displaying this property.

Varieties

Gemmy, colorless to pale green sphalerite from Franklin, New Jersey, is highly fluorescent orange and/or blue under longwave ultraviolet light and is known as cleiophane, an almost pure ZnS variety. Cleiophane contains less than 0.1% of iron in the sphalerite crystal structure. Marmatite or christophite is an opaque black variety of sphalerite whose coloring is due to high quantities of iron, which can reach up to 25%; marmatite is named after the Marmato mining district in Colombia, and christophite for the St. Christoph mine in Breitenbrunn, Saxony. Neither marmatite nor cleiophane is recognized by the International Mineralogical Association (IMA). Red, orange or brownish-red sphalerite is termed ruby blende or ruby zinc, whereas dark-colored sphalerite is termed black-jack.

Deposit types

Sphalerite is among the most common sulfide minerals and is found worldwide in a variety of deposit types: skarns, hydrothermal deposits, sedimentary beds, volcanogenic massive sulfide (VMS) deposits, Mississippi-Valley-type (MVT) deposits, granite and coal.

Sedimentary exhalative

Approximately 50% of zinc (from sphalerite) and lead comes from Sedimentary exhalative (SEDEX) deposits, which are stratiform Pb-Zn sulfides that form at seafloor vents. The metals precipitate from hydrothermal fluids and are hosted by shales, carbonates and organic-rich siltstones in back-arc basins and failed continental rifts. The main ore minerals in SEDEX deposits are sphalerite, galena, pyrite, pyrrhotite and marcasite, with minor sulfosalts such as tetrahedrite-freibergite and boulangerite; the zinc + lead grade typically ranges between 10 and 20%. Important SEDEX mines are Red Dog in Alaska, Sullivan Mine in British Columbia, Mount Isa and Broken Hill in Australia and Mehdiabad in Iran.

Mississippi-Valley type

Like SEDEX deposits, Mississippi-Valley-type (MVT) deposits are Pb-Zn deposits that contain sphalerite. However, they account for only 15–20% of zinc and lead, are about 25% smaller in tonnage than SEDEX deposits, and have lower grades of 5–10% Pb + Zn. MVT deposits form from the replacement of carbonate host rocks such as dolostone and limestone by ore minerals; they are located in platforms and foreland thrust belts. Furthermore, they are stratabound, typically Phanerozoic in age and epigenetic (forming after the lithification of the carbonate host rocks). The ore minerals are the same as in SEDEX deposits: sphalerite, galena, pyrite, pyrrhotite and marcasite, with minor sulfosalts. Mines that contain MVT deposits include Polaris in the Canadian arctic, Mississippi River in the United States, Pine Point in Northwest Territories, and Admiral Bay in Australia.

Volcanogenic massive sulfide

Volcanogenic massive sulfide (VMS) deposits can be Cu-Zn- or Zn-Pb-Cu-rich, and account for 25% of Zn in reserves. There are various types of VMS deposits with a range of regional contexts and host rock compositions; a common characteristic is that they are all hosted by submarine volcanic rocks. They form from metals such as copper and zinc being transferred by hydrothermal fluids (modified seawater) which leach them from volcanic rocks in the oceanic crust; the metal-saturated fluid rises through fractures and faults to the surface, where it cools and deposits the metals as a VMS deposit. The most abundant ore minerals are pyrite, chalcopyrite, sphalerite and pyrrhotite. Mines that contain VMS deposits include Kidd Creek in Ontario, the Urals in Russia, Troodos in Cyprus, and Besshi in Japan.

Localities

The top producers of sphalerite include the United States, Russia, Mexico, Germany, Australia, Canada, China, Ireland, Peru, Kazakhstan and England.

Uses:

Metal ore

Sphalerite is an important ore of zinc; around 95% of all primary zinc is extracted from sphalerite ore. However, due to its variable trace element content, sphalerite is also an important source of several other metals such as cadmium, gallium, germanium, and indium which replace zinc. The ore was originally called blende by miners (from German blind or deceiving) because it resembles galena but yields no lead.

Brass and bronze

The zinc in sphalerite is used to produce brass, an alloy of copper with 3–45% zinc. Major-element alloy compositions of brass objects provide evidence that sphalerite was being used to produce brass in the Islamic world as far back as the medieval period, between the 7th and 16th centuries CE. Sphalerite may have also been used during the cementation process of brass in Northern China during the 12th–13th century CE (Jin Dynasty). Besides brass, the zinc in sphalerite can also be used to produce certain types of bronze; bronze is dominantly copper, alloyed with other metals such as tin, zinc, lead, nickel, iron and arsenic.

Additional Information

Sphalerite is a sulfide mineral with the chemical formula (Zn, Fe)S. It is the most important ore of zinc.

Sphalerite is a zinc sulfide mineral with a chemical composition of (Zn,Fe)S. It is found in metamorphic, igneous, and sedimentary rocks in many parts of the world. Sphalerite is the most commonly encountered zinc mineral and the world's most important ore of zinc.

Dozens of countries have mines that produce sphalerite. Recent top producers include Australia, Bolivia, Canada, China, India, Ireland, Kazakhstan, Mexico, Peru, and the United States. In the United States, sphalerite is produced in Alaska, Idaho, Missouri, and Tennessee.

The name sphalerite is from the Greek word "sphaleros," meaning deceiving or treacherous. The name alludes to the many different appearances of sphalerite and to the fact that it can be challenging to identify in hand specimens. Names for sphalerite used in the past or by miners include "zinc blende," "blackjack," "steel jack," and "rosin jack."

Geologic Occurrence

Many minable deposits of sphalerite are found where hydrothermal activity or contact metamorphism has brought hot, acidic, zinc-bearing fluids in contact with carbonate rocks. There, sphalerite can be deposited in veins, fractures, and cavities, or it can form as mineralizations or replacements of its host rocks.

In these deposits, sphalerite is frequently associated with galena, dolomite, calcite, chalcopyrite, pyrite, marcasite, and pyrrhotite. When weathered, the zinc often forms nearby occurrences of smithsonite or hemimorphite.

Chemical Composition

The chemical formula of sphalerite is (Zn,Fe)S. It is a zinc sulfide containing variable amounts of iron that substitutes for zinc in the mineral lattice. The iron content is normally less than 25% by weight. The amount of iron substitution that occurs depends upon iron availability and temperature, with higher temperatures favoring higher iron content.

Sphalerite often contains trace to minor amounts of cadmium, indium, germanium, or gallium. These rare elements are valuable and, when abundant enough, can be recovered as profitable byproducts. Minor amounts of manganese and arsenic can also be present in sphalerite.

Physical Properties

The appearance and properties of sphalerite are variable. It occurs in a variety of colors, and its luster ranges from nonmetallic to submetallic and resinous to adamantine. Occasionally it will be transparent with a vitreous luster. Sphalerite's streak is white to yellowish brown and sometimes is accompanied by a distinct odor of sulfur. Occasionally it streaks reddish brown.

One of the most distinctive properties of sphalerite is its cleavage. It has six directions of perfect cleavage with faces that exhibit a resinous to adamantine luster. Specimens that display this distinctive cleavage are easy to identify. Unfortunately, many specimens have such a fine grain size that the cleavage is difficult to observe.

Because sphalerite often forms in veins and cavities, excellent crystals are relatively common. Sphalerite is a member of the isometric crystal system, and cubes, octahedrons, tetrahedrons, and dodecahedrons are all encountered.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Online

#2286 2024-09-05 00:06:09

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,086

Re: Miscellany

2286) Crowbar

Gist

A crowbar, also called a wrecking bar, pry bar or prybar, pinch-bar, or occasionally a prise bar or prisebar, colloquially gooseneck, or pig bar, or in Britain and Australia a jemmy, is a lever consisting of a metal bar with a single curved end and flattened points, used to force two objects apart or gain mechanical advantage in lifting.

Summary

A crowbar is a metal bar that has a thin flat edge at one end and is used to open or lift things.

A crowbar is an iron or steel bar that is usually wedge-shaped at the working end for use as a pry or lever.

A crowbar, also known as a pry bar, wrecking bar, gorilla bar or pinch bar, is a metal bar tool with flattened points at each end, often with a small fissure. The function of this fissure is to help remove nails or prise two materials or objects apart.

Unfortunately, the crowbar is another one of those tools, like the hammer and the screwdriver, whose origins are lost to history. It is impossible to say who first invented one, but the crowbar was in the public consciousness prior to the early 1400s, when such tools were referred to as crows or iron crows.

The crowbar is the oldest type of pry bar: a straight piece of iron with a wedge-shaped end, developed to open wooden crates, doors, and boxes.

Details

A crowbar, also called a wrecking bar, pry bar or prybar, pinch-bar, or occasionally a prise bar or prisebar, colloquially gooseneck, or pig bar, or in Britain and Australia a jemmy, is a lever consisting of a metal bar with a single curved end and flattened points, used to force two objects apart or gain mechanical advantage in lifting; often the curved end has a notch for removing nails.

The design can be used as any of the three lever classes. The curved end is usually used as a first-class lever, and the flat end as a second-class lever.
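
As a worked illustration of the leverage involved (the numbers are mine): a lever's mechanical advantage is the ratio of the effort arm to the load arm. Prying with the curved end of a 60 cm bar whose fulcrum sits 3 cm from the tip gives roughly 57/3 ≈ 19, so 100 N of hand force delivers nearly 1,900 N at the tip.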

Designs made from thick flat steel bar are often referred to as utility bars.

Materials and construction

A common hand tool, the crowbar is typically made of medium-carbon steel, possibly hardened on its ends.

Commonly crowbars are forged from long steel stock, either hexagonal or sometimes cylindrical. Alternative designs may be forged with a rounded I-shaped cross-section shaft. Versions using relatively wide flat steel bar are often referred to as "utility" or "flat bars".

Etymology and usage

The accepted etymology identifies the first component of the word crowbar with the bird-name "crow", perhaps due to the crowbar's resemblance to the feet or beak of a crow. The first use of the term is dated back to c. 1400. It was also called simply a crow, or iron crow; William Shakespeare used the latter, as in Romeo and Juliet, Act 5, Scene 2: "Get me an iron crow and bring it straight unto my cell."

In Daniel Defoe's 1719 novel Robinson Crusoe, the protagonist lacks a pickaxe so uses a crowbar instead: "As for the pickaxe, I made use of the iron crows, which were proper enough, though heavy."

Types

Types of crowbar include:

* Alignment pry bar, also referred to as Sleeve bar
* Cat’s claw pry bar, more simply known as a cat's paw
* Digging pry bar
* Flat pry bar
* Gooseneck pry bar
* Heavy-duty pry bar
* Molding pry bar
* Rolling head pry bar.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Online

#2287 2024-09-05 22:21:57

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,086

Re: Miscellany

2287) Gardener

Gist

A gardener is a person who gardens, especially one employed to care for the gardens or grounds of a home, business concern, or other property.

A person whose hobby or job is growing flowers in a garden is called a gardener. If you want homegrown flowers and veggies, get to know a gardener. If you grow vegetables professionally, you're called a farmer, but if you design, tend, or care for a flower garden, you're a gardener. Planting anything on a small scale, in your own backyard, also makes you a gardener. The word gardener was a common last name starting in the 13th century, from the Old French jardineor, and the Old North French gardin, "kitchen garden or orchard."

Summary

A gardener is someone who practices gardening, either professionally or as a hobby.

Description

A gardener is any person involved in gardening, arguably the oldest occupation, from the hobbyist in a residential garden, the home-owner supplementing the family food with a small vegetable garden or orchard, to an employee in a plant nursery or the head gardener in a large estate.

Garden design and maintenance

The garden designer is someone who will design the garden, and the gardener is the person who will undertake the work to produce the desired outcome.

Design

The term gardener is also used to describe garden designers and landscape architects, who are involved chiefly in the design of gardens, rather than the practical aspects of horticulture. Garden design is considered to be an art in most cultures, distinguished from gardening, which generally means garden maintenance. Vita Sackville-West, Gertrude Jekyll and William Robinson were garden designers as well as gardeners.

Garden design is the creation of a plan for the construction of a garden, in a design process. The product is the garden, and the garden designer attempts to make the best of the given conditions of soil, location, climate, and ecological and geological processes, choosing plants suited to those conditions. The design can include different themes such as perennial, butterfly, wildlife, Japanese, water, tropical, or shade gardens. In 18th-century Europe, country estates were refashioned by landscape gardeners into formal gardens or landscaped park lands, such as at Versailles, France, or Stowe Gardens, England.

Today, landscape architects and garden designers continue to design both private garden spaces, residential estates and parkland, public parks and parkways to site planning for campuses and corporate office parks. Professional landscape designers are certified by the Association of Professional Landscape Designers.

Maintenance

The designer also provides directions and supervision during construction, and the management of establishment and maintenance once the garden has been created. The gardener is the person who has the skill to maintain the garden's design.

The gardener's labors during the year include planting flowers and other plants, weeding, pruning, grafting, deadheading, mixing and preparing insecticides and other products for pest control, and tending the garden compost. Weeds tend to thrive at the expense of the more refined edible or ornamental plants. Gardeners need to control weeds using physical or chemical methods to stop weeds from reaching a mature stage of growth, when they could be harmful to domesticated plants. Early activities, such as starting young plants from seeds for later transplantation, are usually performed in early spring.

Details

Gardening is the laying out and care of a plot of ground devoted partially or wholly to the growing of plants such as flowers, herbs, or vegetables.

Gardening can be considered both as an art, concerned with arranging plants harmoniously in their surroundings, and as a science, encompassing the principles and techniques of plant cultivation. Because plants are often grown in conditions markedly different from those of their natural environment, it is necessary to apply to their cultivation techniques derived from plant physiology, chemistry, and botany, modified by the experience of the planter. The basic principles involved in growing plants are the same in all parts of the world, but the practice naturally needs much adaptation to local conditions.

The nature of gardening

Gardening in its ornamental sense needs a certain level of civilization before it can flourish. Wherever that level has been attained, in all parts of the world and at all periods, people have made efforts to shape their environment into an attractive display. The instinct and even enthusiasm for gardening thus appear to arise from some primitive response to nature, engendering a wish to produce growth and harmony in a creative partnership with it.

It is possible to be merely an admiring spectator of gardens. However, most people who cultivate a domestic plot also derive satisfaction from involvement in the processes of tending plants. They find that the necessary attention to the seasonal changes, and to the myriad small “events” in any shrubbery or herbaceous border, improves their understanding and appreciation of gardens in general.

A phenomenal upsurge of interest in gardening began in Western countries after World War II. A lawn with flower beds and perhaps a vegetable patch has become a sought-after advantage to home ownership. The increased interest produced an unprecedented expansion of business among horticultural suppliers, nurseries, garden centres, and seedsmen. Books, journals, and newspaper columns on garden practice have found an eager readership, while television and radio programs on the subject have achieved a dedicated following.

Several reasons for this expansion suggest themselves. Increased leisure in the industrial nations gives more people the opportunity to enjoy this relaxing pursuit. The increased public appetite for self-sufficiency in basic skills also encourages people to take up the spade. In the kitchen, the homegrown potato or ear of sweet corn rewards the gardener with a sense of achievement, as well as with flavour superior to that of store-bought produce. An increased awareness of threats to the natural environment and the drabness of many inner cities stir some people to cultivate the greenery and colour around their own doorsteps. The bustle of 20th-century life leads more individuals to rediscover the age-old tranquillity of gardens.

The varied appeal of gardening

The attractions of gardening are many and various and, to a degree perhaps unique among the arts and crafts, may be experienced by any age group and at all levels of ambition. At its most elemental, but not least valuable, the gardening experience begins with the child’s wonder that a packet of seeds will produce a charming festival of colour. At the adult level, it can be as simple as helping to raise a good and edible carrot, and it can give rise to almost parental pride. At higher levels of appreciation, it involves an understanding of the complexity of the gardening process, equivalent to a chess game with nature, because the variables are so many.

The gardening experience may involve visiting some of the world’s great gardens at different seasons to see the relation of individual groups of plants, trees, and shrubs to the whole design; to study the positioning of plants in terms of their colour, texture, and weight of leaf or blossom; and to appreciate the use of special features such as ponds or watercourses, pavilions, or rockeries. Garden visiting on an international scale provides an opportunity to understand the broad cultural influences, as well as the variations in climate and soil, that have resulted in so many different approaches to garden making.

The appeal of gardening is thus multifaceted and wide in range. The garden is often the only place where someone without special training can exercise creative impulses as designer, artist, technician, and scientific observer. In addition, many find it a relaxing and therapeutic pursuit. It is not surprising that the garden, accorded respect as a part of nature and a place of contemplation, holds a special place in the spiritual life of many.

Practical and spiritual aspects of gardening are shown in an impressive body of literature. In Western countries manuals of instruction date to classical Greece and Rome. Images of plants and gardens are profuse in the works of the major poets, from Virgil to Shakespeare, and on to some of the moderns.

Another of gardening’s attractions is that up to a certain level it is a simple craft to learn. The beginner can produce pleasing results without the exacting studies and practice required by, for example, painting or music. Gardens are also forgiving to the inexperienced to a certain degree. Nature’s exuberance will cover up minor errors or short periods of neglect, so gardening is an art practiced in a relatively nonjudgmental atmosphere. While tolerant in many respects, nature does, however, present firm reminders that all gardening takes place within a framework of natural law; and one important aspect of the study of the craft is to learn which of these primal rules are imperatives and which may be stretched.

Control and cooperation

Large areas of gardening development and mastery have concentrated on persuading plants to achieve what they would not have done if left in the wild and therefore “natural” state. Gardens at all times have been created through a good deal of control and what might be called interference. The gardener attends to a number of basic processes: combating weeds and pests; using space to allay the competition between plants; attending to feeding, watering, and pruning; and conditioning the soil. Above this fundamental level, the gardener assesses and accommodates the unique complex of temperature, wind, rainfall, sunlight, and shade found within his own garden boundaries. A major part of the fascination of gardening is that in problems and potential no one garden is quite like another; and it is in finding the most imaginative solutions to challenges that the gardener demonstrates artistry and finds the subtler levels of satisfaction.

Different aesthetics require different balances between controlling nature and cooperating with its requirements. The degree of control depends on the gardener’s objective, the theme and identity he is aiming to create. For example, the English wild woodland style of gardening in the mid-19th century dispensed with controls after planting, and any interference, such as pruning, would have been misplaced. At the other extreme is the Japanese dry-landscape garden, beautifully composed of rock and raked pebbles. The artistic control in this type of garden is so firm and refined that the intrusion of a single “natural” weed would spoil the effect.

Choice of plants

The need for cooperation with nature is probably most felt by the amateur gardener in choosing the plants he wants to grow. The range of plants available to the modern gardener is remarkably rich, and new varieties are constantly being offered by nurseries. Most of the shrubs and flowers used in the Western world are descendants of plants imported from other countries. Because they are nonnative, they present the gardener with some of his most interesting problems but also with the possibility of an enhanced display. Plants that originated in subtropical regions, for example, are naturally more sensitive to frost. Some, like rhododendrons or azaleas, originated in an acid soil, mainly composed of leaf mold. Consequently, they will not thrive in a chalky or an alkaline soil. Plant breeding continues to improve the adaptability of such exotic plants, but the more closely the new habitat resembles the original, the better the plant will flourish. Manuals offer solutions to most such problems, and the true gardener will always enjoy finding his own. In such experiments, he may best experience his work as part of the historical tradition of gardening.

Types of gardens

The domestic garden can assume almost any identity the owner wishes within the limits of climate, materials, and means. The size of the plot is one of the main factors, deciding not only the scope but also the kind of display and usage. Limits on space near urban centres, as well as the wish to spend less time on upkeep, have tended to make modern gardens ever smaller. Paradoxically, this happens at a time when the variety of plants and hybrids has never been wider. The wise small gardener avoids the temptations of this banquet. Some of the most attractive miniature schemes, such as those seen in Japan or in some Western patio gardens, are effectively based on an austere simplicity of design and content, with a handful of plants given room to find their proper identities.

In the medium- to large-sized garden, the tradition generally continues of dividing the area to serve various purposes: a main ornamental section to enhance the residence and provide vistas; walkways and seating areas for recreation; a vegetable plot; a children’s play area; and features to catch the eye here and there. Because most gardens are mixed, the resulting style is a matter of emphasis rather than exclusive concentration on one aspect. It may be useful to review briefly the main garden types.

Flower gardens

Though flower gardens in different countries may vary in the types of plants that are grown, the basic planning and principles are nearly the same, whether the gardens are formal or informal. Trees and shrubs are the mainstay of a well-designed flower garden. These permanent features are usually planned first, and the spaces for herbaceous plants, annuals, and bulbs are arranged around them. The range of flowering trees and shrubs is enormous. It is important, however, that such plants be appropriate to the areas they will occupy when mature. Thus it is of little use to plant a forest tree that will grow 100 feet (30 metres) high and 50 feet across in a small suburban front garden 30 feet square, but a narrow flowering cherry or redbud tree would be quite suitable.

Blending and contrast of colour as well as of forms are important aspects to consider in planning a garden. The older type of herbaceous border was designed to give a maximum display of colour in summer, but many gardeners now prefer to have flowers during the early spring as well, at the expense of some bare patches later. This is often done by planting early-flowering bulbs in groups toward the front. Mixed borders of flowering shrubs combined with herbaceous plants are also popular and do not require quite so much maintenance as the completely herbaceous border.

Groups of half-hardy annuals, which tolerate cool nights but not frost, may be planted at the end of spring to fill gaps left by the spring-flowering bulbs. The perpetual-flowering roses and some of the larger shrub roses look good toward the back of such a border, but the hybrid tea roses and the floribunda and polyantha roses are usually grown in separate rose beds or in a rose garden by themselves.

Woodland gardens

The informal woodland garden is the natural descendant of the shrubby “wilderness” of earlier times. The essence of the woodland garden is informality and naturalness. Paths curve rather than run straight and are of mulch or grass rather than pavement. Trees are thinned to allow enough light, particularly in the glades, but irregular groups may be left, and any mature tree of character can be a focal point. Plants are chosen largely from those that are woodlanders in their native countries: rhododendron, magnolia, pieris, and maple among the trees and shrubs; lily, daffodil, and snowdrop among the bulbs; primrose, hellebore, St.-John’s-wort, epimedium, and many others among the herbs.

Rock gardens

Rock gardens are designed to look as if they are a natural part of a rocky hillside or slope. If rocks are added, they are generally laid on their larger edges, as in natural strata. A few large boulders usually look better than a number of small rocks. In a well-designed rock garden, rocks are arranged so that there are various exposures for sun-tolerant plants such as rockroses and for shade-tolerant plants such as primulas, which often do better in a cool, north-facing aspect. Many smaller perennial plants are available for filling spaces in vertical cracks among the rock faces.

The main rocks from which rock gardens are constructed are sandstone and limestone. Sandstone, generally less irregular and pitted, looks more restful and natural, but certain plants, notably most of the dianthuses, do best in limestone. Granite is generally regarded as too hard and unsuitable for the rock garden because it weathers very slowly.

Water gardens

The water garden represents one of the oldest forms of gardening. Egyptian records and pictures of cultivated water lilies date as far back as 2000 bce. The Japanese have also made water gardens to their own particular and beautiful patterns for many centuries. Many have an ornamental lantern of stone in the centre or perhaps a flat trellis roof of wisteria extending over the water. In Europe and North America, water gardens range from formal pools with rectangular or circular outline, sometimes with fountains in the centre and often without plants or with just one or two water lilies (Nymphaea), to informal pools of irregular outline planted with water lilies and other water plants and surrounded by boggy or damp soil where moisture-tolerant plants can be grown. The pool must contain suitable oxygenating plants to keep the water clear and support any introduced fish. Most water plants, including even the large water lilies, do well in still water two to five feet deep. Temperate water lilies flower all day, but many of the tropical and subtropical ones open their flowers only in the evening.

In temperate countries water gardens also can be made under glass, and the pools can be kept heated. In such cases, more tropical plants, such as the great Victoria amazonica (V. regia) or the lotus (Nelumbo nucifera), can be grown together with papyrus reeds at the edge. The range of moisture-loving plants for damp places at the edge of the pool is great and includes many beautiful plants such as the candelabra primulas, calthas, irises, and osmunda ferns.

Herb and vegetable gardens

Most of the medieval gardens and the first botanical gardens were largely herb gardens containing plants used for medicinal purposes or herbs such as thyme, parsley, rosemary, fennel, marjoram, and dill for savouring foods. The term herb garden is usually used now to denote a garden of herbs used for cooking, and the medicinal aspect is rarely considered. Herb gardens need a sunny position, because the majority of the plants grown are native to warm, dry regions.

The vegetable garden also requires an open and sunny location. Good cultivation and preparation of the ground are important for successful vegetable growing, and it is also desirable to practice a rotation of crops as in farming. The usual period of rotation for vegetables is three years; this also helps to prevent the carryover from season to season of certain pests and diseases.

The old French potager, the prized vegetable garden, was grown to be decorative as well as useful; the short rows with little hedges around and the high standard of cultivation represent a model of the art of vegetable growing. The elaborate parterre vegetable garden at the Château de Villandry is perhaps the finest example in Europe of a decorative vegetable garden.

Specialty gardens:

Roof gardens

The modern tendency in architecture for flat roofs has made possible the development of attractive roof gardens in urban areas above private houses and commercial buildings. These gardens follow the same principles as others except that the depth of soil is less, to keep the weight on the rooftop low, and therefore the size of plants is limited. The plants are generally set in tubs or other containers, but elaborate roof gardens have been made with small pools and beds. Beds of flowering plants are suitable, among which may be stood tubs of specimen plants to produce a desired effect.

Scented gardens

Scent is one of the qualities that many people appreciate highly in gardens. Scented gardens, in which scent from leaves or flowers is the main criterion for inclusion of a plant, have been established, especially for the benefit of blind people. Some plants release a strong scent in full sunlight, and many must be bruised or rubbed to yield their fragrance. These are usually grown in raised beds within easy reach of visitors.

Contents of gardens:

Permanent elements

The more or less permanent plants available for any garden plan are various grasses for lawns, other ground-cover plants, shrubs, climbers, and trees. More transitory and therefore in need of continued attention are the herbaceous plants, such as the short-lived annuals and biennials, and the perennials and bulbous plants, which resume growth each year.

Lawns and ground covers

Areas of lawn, or turf, provide the green expanse that links all other garden plantings together. The main grasses used in cool areas for fine-textured lawns are fescues (Festuca species), bluegrasses (Poa species), and bent grasses (Agrostis species), often in mixtures. A rougher lawn mixture may contain ryegrass (Lolium species). In drier and subtropical regions, Bermuda grass (Cynodon dactylon) is frequently used, but it does not make nearly as fine a lawn as those seen in temperate regions of higher rainfall.

Ground covers are perennial plants used as grass substitutes in regions where grasses do poorly, or they are sometimes combined with grassy areas to produce a desired design. The deep greens, bronzes, and other colours that ground-cover plants can provide offer pleasing contrasts to the green of a turf. Ground covers, however, are not so durable as lawns and do not sustain themselves as well under foot traffic and other activities. Among the better known plants used as ground covers are Japanese spurge (Pachysandra terminalis), common periwinkle (Vinca minor), lily of the valley (Convallaria majalis), ajuga, or bugleweed (Ajuga reptans), many stonecrops (Sedum species), dichondra (Dichondra repens), and many ivies (Hedera species).

Shrubs and vines (climbers)

Smaller woody plants, such as shrubs and bushes, have several stems arising from the base. These plants attain heights up to about 20 feet (6 metres). They often form the largest part of modern gardens, because their cultivation requires less labour than that of herbaceous plants, and some flowering shrubs have extended blooming periods. Among the popular garden shrubs are lilac (Syringa vulgaris), privet (Ligustrum species), spirea (Spiraea species), honeysuckle (Lonicera species), forsythia (Forsythia species), mock orange (Philadelphus species), and hydrangea (Hydrangea species).

Climbers are often useful in softening the sharp lines of buildings, fences, and other structures. They can provide shade as an awning or cover on an arbour or garden house. Some species are also useful as ground covers on steep slopes and terraces. Among the many climbers suited to the garden are woody perennials such as the ivies, trumpet creeper (Bignonia, or Campsis, radicans), clematis (Clematis species), wisteria (Wisteria sinensis), and climbing roses, as well as annual herbaceous vines such as morning glory (Ipomoea species) and ornamental gourds, the last of which can provide rapid but temporary coverage of unsightly objects.

Trees

Trees are the most permanent features of a garden plan. The range of tree sizes, shapes, and colours is vast enough to suit almost any gardening scheme, from shrubby dwarf trees to giant shade trees, from slow to rapid growers, from all tones of green to bronzes, reds, yellows, and purples. A balance between evergreen trees, such as pines and spruces, and deciduous trees, such as oaks, maples, and beeches, can provide protection and visual interest throughout the year.

Transitory elements:

Herbaceous plants

Herbaceous plants, which die down annually and have no woody stem aboveground, are readily divided into three categories, as mentioned earlier:

(1) Annuals, plants that complete their life cycle in one year, are usually grown from seed sown in the spring, either in the place they are to flower or in separate containers, from which they are subsequently moved into their final position. Annuals flower in summer and die down in winter after setting seed. Many brilliantly coloured ornamental plants as well as many weeds belong in this category. Examples are petunia and lobelia.

(2) Biennials are plants sown from seed one year, generally during the summer. They flower the second season and then die. Examples are wallflower and sweet william.

(3) Herbaceous perennials are those that die down to the ground each year but whose roots remain alive and send up new top growth each year. They are an important group in horticulture, whether grown as individual plants or in the assembly of the herbaceous border. Because they flower each year, they help to create the structure of the garden’s appearance, so their placement must be considered carefully. Examples are delphinium and lupine.

Bulbous plants

For horticultural purposes, bulbous plants are defined to include those plants that have true bulbs (such as the daffodil), those with corms (such as the crocus), and a few that have tubers or rhizomes (such as the dahlia or iris). A bulb is defined as a modified shoot with a disklike basal plate and above it a number of fleshy scales that take the place of leaves and contain foods such as starch, sugar, and some proteins. Each year a new stem arises from the centre. A corm consists of the swollen base of a stem, generally rounded or flattened at the top and covered with a membranous tunic in which reserve food materials are stored. A tuber or rhizome is not the base of the stem but rather a swollen part of an underground stem; it is often knobbly. All such plants have evolved in places where they can survive in a semidormant state over long unfavourable seasons, either cold mountain winters or long droughty summers.

Additional Information

Gardening is the process of growing plants for their vegetables, fruits, flowers, herbs, and appearances within a designated space. Gardens fulfill a wide assortment of purposes, notably the production of aesthetically pleasing areas, medicines, cosmetics, dyes, foods, poisons, wildlife habitats, and saleable goods (see market gardening). People often partake in gardening for its therapeutic, health, educational, cultural, philosophical, environmental, and religious benefits.

Gardening varies in scale from the 800-hectare gardens of Versailles down to container gardens grown indoors. Gardens take many forms; some contain only one type of plant, while others involve a complex assortment of plants with no particular order.

Gardening can be difficult to differentiate from farming. The two are most easily distinguished by their primary objectives: farming prioritizes saleable goods and may include livestock production, whereas gardening often prioritizes aesthetics and leisure. As it pertains to food production, gardening generally happens on a much smaller scale, with the intent of personal or community consumption. It is worth noting that some cultures do not differentiate between farming and gardening, primarily because subsistence agriculture has been the main method of farming throughout its 12,000-year history and is virtually indistinguishable from gardening.


#2288 2024-09-06 00:53:07

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,086

Re: Miscellany

2288) Field

Gist

A field is an area of land on a farm, usually surrounded by fences or walls, used for growing crops or keeping animals in.

This word has many meanings — such as a field of daffodils, a field of study, or a field of battle in a war. Think of a field as an area, either physically or subject-wise.

A type of business or area of study is a field. All the subjects you study in school are different fields of study. Baseball players field a ball, and you need nine players to field a team. All the horses in a race are the field. Your field of vision is what you can see. Researchers go into the field to collect data — for an education researcher, that’s a school. Most fields are specific areas of one sort or another.

In Physics

Field, in physics, is a region in which each point has a physical quantity associated with it. The quantity could be a number, as in the case of a scalar field such as the Higgs field, or it could be a vector, as in the case of fields such as the gravitational field, which are associated with a force.

The three broad types of fields in physics are scalar, vector, and tensor fields. A scalar field assigns a single number (a magnitude) to each point. A vector field assigns both a magnitude and a direction. A tensor field assigns a more general array of components to each point, such as the nine components of a rank-2 tensor in three dimensions.

A field can be classified as a scalar field, a vector field, a spinor field or a tensor field according to whether the represented physical quantity is a scalar, a vector, a spinor, or a tensor, respectively.

Summary

Field, in physics, is a region in which each point has a physical quantity associated with it. The quantity could be a number, as in the case of a scalar field such as the Higgs field, or it could be a vector, as in the case of fields such as the gravitational field, which are associated with a force. Objects fall to the ground because they are affected by the force of Earth’s gravitational field. A paper clip, placed in the magnetic field surrounding a magnet, is pulled toward the magnet, and two like magnetic poles repel each other when one is placed in the other’s magnetic field. An electric field surrounds an electric charge; when another charged particle is placed in that region, it experiences an electric force that either attracts or repels it. The strength of a field, or the forces in a particular region, can be represented by field lines; the closer the lines, the stronger the forces in that part of the field.

Details

In science, a field is a physical quantity, represented by a scalar, vector, or tensor, that has a value for each point in space and time. A weather map, with the surface temperature described by assigning a number to each point on the map, is an example of a scalar field. A surface wind map, assigning an arrow to each point on a map that describes the wind speed and direction at that point, is an example of a vector field, i.e. a 1-dimensional (rank-1) tensor field. Field theories, mathematical descriptions of how field values change in space and time, are ubiquitous in physics. For instance, the electric field is another rank-1 tensor field, while electrodynamics can be formulated in terms of two interacting vector fields at each point in spacetime, or as a single rank-2 tensor field.
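
In symbols (a standard formalization, added here only as an illustration): a scalar field assigns one number to each point of space and time, while a vector field assigns a whole vector, so the temperature and wind maps above can be written as

\[
T : \mathbb{R}^{3} \times \mathbb{R} \to \mathbb{R}, \qquad \mathbf{v} : \mathbb{R}^{3} \times \mathbb{R} \to \mathbb{R}^{3},
\]

with values T(x, y, z, t) and \(\mathbf{v}(x, y, z, t)\) at the point (x, y, z) and time t.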

In the modern framework of quantum field theory, even without referring to a test particle, a field occupies space, contains energy, and its presence precludes a classical "true vacuum". This has led physicists to consider electromagnetic fields to be a physical entity, making the field concept a supporting paradigm of the edifice of modern physics. Richard Feynman said, "The fact that the electromagnetic field can possess momentum and energy makes it very real, and [...] a particle makes a field, and a field acts on another particle, and the field has such familiar properties as energy content and momentum, just as particles can have." In practice, the strength of most fields diminishes with distance, eventually becoming undetectable. For instance, the strength of many relevant classical fields, such as the gravitational field in Newton's theory of gravity or the electrostatic field in classical electromagnetism, is inversely proportional to the square of the distance from the source (i.e. they follow Gauss's law).
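
As a worked example (standard textbook formulas, not taken from the text above): for a point source, both of these fields fall off as the inverse square of the distance r,

\[
g(r) = \frac{GM}{r^{2}}, \qquad E(r) = \frac{1}{4\pi\varepsilon_{0}}\,\frac{q}{r^{2}},
\]

where M is the source mass, q the source charge, G the gravitational constant, and \(\varepsilon_{0}\) the permittivity of free space; doubling the distance from the source cuts either field strength to a quarter.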

A field can be classified as a scalar field, a vector field, a spinor field or a tensor field according to whether the represented physical quantity is a scalar, a vector, a spinor, or a tensor, respectively. A field has a consistent tensorial character wherever it is defined: i.e. a field cannot be a scalar field somewhere and a vector field somewhere else. For example, the Newtonian gravitational field is a vector field: specifying its value at a point in spacetime requires three numbers, the components of the gravitational field vector at that point. Moreover, within each category (scalar, vector, tensor), a field can be either a classical field or a quantum field, depending on whether it is characterized by numbers or quantum operators, respectively. In quantum field theory, an equivalent representation of a field is a field particle, for instance a boson.

History

To Isaac Newton, his law of universal gravitation simply expressed the gravitational force that acted between any pair of massive objects. When looking at the motion of many bodies all interacting with each other, such as the planets in the Solar System, dealing with the force between each pair of bodies separately rapidly becomes computationally inconvenient. In the eighteenth century, a new quantity was devised to simplify the bookkeeping of all these gravitational forces. This quantity, the gravitational field, gave at each point in space the total gravitational acceleration which would be felt by a small object at that point. This did not change the physics in any way: it did not matter if all the gravitational forces on an object were calculated individually and then added together, or if all the contributions were first added together as a gravitational field and then applied to an object.
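
A minimal sketch of that bookkeeping, using the standard Newtonian formulas: the contributions of all the sources are summed once into a field, and the force on any small test mass m then follows from the field alone,

\[
\mathbf{g}(\mathbf{r}) = -G \sum_{i} \frac{m_{i}}{\lvert \mathbf{r}-\mathbf{r}_{i} \rvert^{2}}\,\hat{\mathbf{r}}_{i}, \qquad \mathbf{F} = m\,\mathbf{g}(\mathbf{r}),
\]

where \(m_{i}\) and \(\mathbf{r}_{i}\) are the mass and position of the i-th body and \(\hat{\mathbf{r}}_{i} = (\mathbf{r}-\mathbf{r}_{i})/\lvert \mathbf{r}-\mathbf{r}_{i} \rvert\) is the unit vector pointing from that body toward \(\mathbf{r}\). Because the sum is linear, multiplying the summed field by m gives exactly the same total force as adding up the individual pair forces.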

The independent concept of a field truly began to develop in the nineteenth century with the theory of electromagnetism. In the early stages, André-Marie Ampère and Charles-Augustin de Coulomb could manage with Newton-style laws that expressed the forces between pairs of electric charges or electric currents. However, it became much more natural to take the field approach and express these laws in terms of electric and magnetic fields; in 1845 Michael Faraday coined the term "field".

The independent nature of the field became more apparent with James Clerk Maxwell's discovery that waves in these fields, called electromagnetic waves, propagated at a finite speed. Consequently, the forces on charges and currents no longer just depended on the positions and velocities of other charges and currents at the same time, but also on their positions and velocities in the past.

Maxwell, at first, did not adopt the modern concept of a field as a fundamental quantity that could independently exist. Instead, he supposed that the electromagnetic field expressed the deformation of some underlying medium—the luminiferous aether—much like the tension in a rubber membrane. If that were the case, the observed velocity of the electromagnetic waves should depend upon the velocity of the observer with respect to the aether. Despite much effort, no experimental evidence of such an effect was ever found; the situation was resolved by the introduction of the special theory of relativity by Albert Einstein in 1905. This theory changed the way the viewpoints of moving observers were related to one another, in such a way that the velocity of electromagnetic waves in Maxwell's theory would be the same for all observers. By doing away with the need for a background medium, this development opened the way for physicists to start thinking about fields as truly independent entities.

In the late 1920s, the new rules of quantum mechanics were first applied to the electromagnetic field. In 1927, Paul Dirac used quantum fields to successfully explain how the decay of an atom to a lower quantum state led to the spontaneous emission of a photon, the quantum of the electromagnetic field. This was soon followed by the realization (following the work of Pascual Jordan, Eugene Wigner, Werner Heisenberg, and Wolfgang Pauli) that all particles, including electrons and protons, could be understood as the quanta of some quantum field, elevating fields to the status of the most fundamental objects in nature. That said, John Wheeler and Richard Feynman seriously considered Newton's pre-field concept of action at a distance (although they set it aside because of the ongoing utility of the field concept for research in general relativity and quantum electrodynamics).

Quantum fields

It is now believed that quantum mechanics should underlie all physical phenomena, so that a classical field theory should, at least in principle, permit a recasting in quantum mechanical terms; success yields the corresponding quantum field theory. For example, quantizing classical electrodynamics gives quantum electrodynamics. Quantum electrodynamics is arguably the most successful scientific theory; experimental data confirm its predictions to a higher precision (to more significant digits) than any other theory. The two other fundamental quantum field theories are quantum chromodynamics and the electroweak theory.

In quantum chromodynamics, the color field lines are coupled at short distances by gluons, which are polarized by the field and line up with it. This effect increases within a short distance (around 1 fm from the vicinity of the quarks) making the color force increase within a short distance, confining the quarks within hadrons. As the field lines are pulled together tightly by gluons, they do not "bow" outwards as much as an electric field between electric charges.

These three quantum field theories can all be derived as special cases of the so-called standard model of particle physics. General relativity, the Einsteinian field theory of gravity, has yet to be successfully quantized. However, an extension, thermal field theory, deals with quantum field theory at finite temperatures, something seldom considered in quantum field theory.

In BRST theory one deals with odd fields, e.g. Faddeev–Popov ghosts. There are different descriptions of odd classical fields both on graded manifolds and supermanifolds.

As above with classical fields, it is possible to approach their quantum counterparts from a purely mathematical view using similar techniques as before. The equations governing the quantum fields are in fact PDEs (specifically, relativistic wave equations (RWEs)). Thus one can speak of Yang–Mills, Dirac, Klein–Gordon and Schrödinger fields as being solutions to their respective equations. A possible problem is that these RWEs can deal with complicated mathematical objects with exotic algebraic properties (e.g. spinors are not tensors, so spinor fields may need their own calculus), but these in theory can still be subjected to analytical methods given appropriate mathematical generalization.

Additional Information

Scientists understood why forces acted the way they did when objects touched. The idea that confused them was forces that acted at a distance without touching. Think of examples such as gravitational force, electric force, and magnetic force. To help them explain what was happening, they used the idea of "field". They imagined that there was an area around the object, and anything that entered would feel a force. We say, for example, that the Moon has a gravitational field around it, and if you get close to the Moon, it will pull you down to its surface.


#2289 2024-09-07 00:03:01

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,086

Re: Miscellany

2289) Copley Medal

Gist

The Copley Medal is the Society’s oldest and most prestigious award. The medal is awarded for sustained, outstanding achievements in any field of science.

First awarded in 1731 following donations from Godfrey Copley FRS, it was initially awarded for the most important scientific discovery or for the greatest contribution made by experiment. The Copley Medal is thought to be the world's oldest scientific prize, and it was awarded 170 years before the first Nobel Prize. Notable winners include Benjamin Franklin, Dorothy Hodgkin, Albert Einstein and Charles Darwin. The medal is of silver gilt, is awarded annually, alternating between the physical and biological sciences (odd and even years respectively), and is accompanied by a gift of £25,000.

Summary

Copley Medal, the most prestigious scientific award in the United Kingdom, given annually by the Royal Society of London “for outstanding achievements in research in any branch of science.”

The Copley Medal is named for Sir Godfrey Copley, 2nd Baronet (c. 1653–1709), a member of the Royal Society and longtime member of Parliament from Yorkshire who left a bequest of £100 to be used to fund experiments that would benefit the Society and further scientific knowledge. The first grant was awarded in 1731 to Stephen Gray, a self-made naturalist whose experiments and spectacular public demonstrations of electrical conduction were well known to the Society. In 1736 it was decided to use Copley’s bequest to pay for a gold medal that would be given annually as an honorary prize to the person whose work was most approved by the Society.

During the early years the focus of the Copley Medal was on important recent discoveries or experiments, but in 1831 the scope was broadened to honour any research deemed worthy by the Society, with no limit on the time period or on the scientist’s country of origin. The medal had already been given once to a foreigner—“Volta, of Pavia,” or Alessandro Volta, in 1794 (Benjamin Franklin had been given the medal in 1753, but at that time he was a British subject)—and since 1831 it has been awarded to a number of illustrious non-Britons, among them Hermann von Helmholtz (1873), Louis Pasteur (1874), Dmitri Mendeleev (1905), and Albert Einstein (1925). The medal’s domestic winners, ranging from Joseph Priestley (1772) through Charles Darwin (1864) to Stephen Hawking (2006), represent the depth, breadth, and durability of almost three centuries of British science.

The Copley Medal today is struck in silver gilt; the obverse bears a likeness of Sir Godfrey Copley, and the reverse shows the arms of the Royal Society. The award of the medal is accompanied by a gift of £25,000. Each year the award alternates between the physical and biological sciences. Nominations are reviewed and assessed by a committee made up of Royal Society fellows, who pass their recommendation to the Society’s governing council.

Details

The Copley Medal is the most prestigious award of the Royal Society, conferred "for sustained, outstanding achievements in any field of science". It alternates between the physical sciences or mathematics and the biological sciences. Given annually, the medal is the oldest Royal Society medal awarded and the oldest surviving scientific award in the world, having first been given in 1731 to Stephen Gray, for "his new Electrical Experiments: – as an encouragement to him for the readiness he has always shown in obliging the Society with his discoveries and improvements in this part of Natural Knowledge". The medal is made of silver-gilt and awarded with a £25,000 prize.

The Copley Medal is arguably the highest British and Commonwealth award for scientific achievement, and has been included among the most distinguished international scientific awards. It is awarded to "senior scientists" irrespective of nationality, and nominations are considered over three nomination cycles. Since 2022, scientific teams or research groups are collectively eligible to receive the medal; that year, the research team which developed the Oxford–AstraZeneca COVID-19 vaccine became the first collective recipient. John Theophilus Desaguliers has won the medal the most often, winning three times, in 1734, 1736 and 1741. In 1976, Dorothy Hodgkin became the first female recipient; Jocelyn Bell Burnell, in 2021, became the second.

History

In 1709, Sir Godfrey Copley, the MP for Thirsk, bequeathed Abraham Hill and Hans Sloane £100 (roughly equivalent to £17,156 in 2023) to be held in trust for the Royal Society "for improving natural knowledge to be laid out in experiments or otherwise for the benefit thereof as they shall direct and appoint". After the legacy had been received the following year, the interest of £5 was duly used to provide recurring grants for experimental work to researchers associated with the Royal Society, provided they registered their research within a stipulated period and demonstrated their experiments at an annual meeting. In 1726, following a proposal by Sloane, the grants were extended to "strangers" unaffiliated with the Society to encourage "new and useful experiments," though it was only five years later that Stephen Gray became the first such recipient. Prior to Gray, John Theophilus Desaguliers was apparently the only recipient of the grant, but had not always conducted the required annual demonstration.

In November 1736, Martin Folkes, then vice-president of the Society, suggested the Copley grant be converted to "a medal or other honorary prize to be bestowed on the person whose experiment should be best approved...a laudable emulation might be excited among men of genius to try their invention, who in all probability may never be moved for the sake of lucre". On 7 December 1736, the Council of the Royal Society agreed to Folkes' proposal, passing a resolution that the annual Copley grant of five pounds (roughly equivalent to £986 in 2023) be converted to a gold medal "of the same Value, with the Arms of the Society impress’d on it," to be gifted "for the best Experiment produced within the Year, and bestowed in such a manner as to avoid any Envy or Disgust in Rivalship". John Belchier was the first to receive the new award in 1737; due to delays in approving a medal design, however, the medal, designed and struck by John Sigismund Tanner of the Royal Mint, was only presented to Belchier in 1742. Gray and Desaguliers received their medals retrospectively.

Into the early 19th century, medal recipients were selected by the President of the Royal Society, though not always on the basis of publications and experimental demonstrations; political and social connections were also key considerations, along with service in the general interests of the Society. During the long presidency of Joseph Banks, the medal was frequently awarded to recipients whose researches were of a practical or technical nature, involving improvements to scientific equipment or instrument design. In 1831, new regulations adopted by the Royal Society Council made the medal an annual award, dropped the requirement for conducting the qualifying research within a certain period and officially opened it to scientists from any nation; although the medal had previously been awarded to foreign scientists including Alessandro Volta, the majority of recipients had been British subjects. A second donation of £1666 13s. 4d. (roughly equivalent to £212,366 in 2023) was made by Sir Joseph William Copley in 1881, and the interest from that amount is used to pay for the medal.

Prestige of the medal

By the 1840s, the Copley Medal had achieved international prestige; in 1850, George Airy noted the distinction of the medal was that it was "offered to the competition of the world." The Copley medal has been characterised as a predecessor to the Nobel Prize. Since its inception, it has been awarded to many distinguished scientists, including 52 winners of the Nobel Prize: 17 in Physics, 21 in Physiology or Medicine, and 14 in Chemistry.


#2290 2024-09-07 21:36:00

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,086

Re: Miscellany

2290) Cardboard and Cardboard Box

Gist

Cardboard is mainly made of paper fibers, which are obtained from wood pulp. Fibers are normally derived from recycled paper or fresh wood, which is processed and formed into a layered structure to produce the rigid, lightweight material known as cardboard.

The heavy, rigid paper that's used to make the boxes you use for mailing things is called cardboard. Cardboard also comes in handy for crafts and projects in classrooms. A lot of cardboard is made from several layers of thick paper, so that it's stiff and strong, and protects items inside cardboard boxes.

Summary

Cardboard boxes are industrially prefabricated boxes, primarily used for packaging goods and materials. Specialists in industry seldom use the term cardboard because it does not denote a specific material. The term cardboard may refer to a variety of heavy paper-like materials, including card stock, corrugated fiberboard, and paperboard. Cardboard boxes can be readily recycled.

Terminology

Several types of containers are sometimes called cardboard boxes.

In business and industry, material producers, container manufacturers, packaging engineers, and standards organizations try to use more specific terminology. There is still no complete and uniform usage. Often the term "cardboard" is avoided because it does not define any particular material.

Broad divisions of paper-based packaging materials are:

* Paper is thin material mainly used for writing upon, printing upon, or for packaging. It is produced by pressing together moist fibers, typically cellulose pulp derived from wood, rags, or grasses, and drying them into flexible sheets.
* Paperboard, sometimes known as cardboard, is generally thicker (usually over 0.25 mm or 10 points) than paper. According to ISO standards, paperboard is a paper with a basis weight (grammage) above 224 g/m², but there are exceptions. Paperboard can be single- or multi-ply.
* Corrugated fiberboard, sometimes known as corrugated board or corrugated cardboard, is a combined paper-based material consisting of a fluted corrugated medium and one or two flat liner boards. The flute gives corrugated boxes much of their strength and is a contributing factor in why corrugated fiberboard is commonly used for shipping and storage.

There are also multiple names for containers:

* A shipping container made of corrugated fiberboard is sometimes called a "cardboard box", a "carton", or a "case". There are many options for corrugated box design. Such containers are used for shipping and transporting goods because of their strength and durability; corrugated boxes are designed to withstand the rigors of transportation and handling.
* A folding carton made of paperboard is sometimes called a "cardboard box". It is commonly used for packaging consumer goods, such as cereals, cosmetics, and pharmaceuticals. These cartons are designed to fold flat when empty, saving space during storage and transport.
* A set-up box is made of a non-bending grade of paperboard and is sometimes called a "cardboard box". It is often used for high-end products, such as jewelry, electronics, or gift items. Unlike folding cartons, set-up boxes do not fold flat and are delivered fully constructed.
* Drink boxes made of paperboard laminates are sometimes called "cardboard boxes", "cartons", or "boxes". They are widely used for packaging beverages like juice, milk, and wine. These cartons are designed to maintain the freshness of liquid products and are often used in aseptic packaging.

History

The first commercial paperboard (not corrugated) box is sometimes credited to the firm M. Treverton & Son in England in 1817. Cardboard box packaging was made the same year in Germany.

The Scottish-born Robert Gair invented the pre-cut cardboard or paperboard box in 1890 – flat pieces manufactured in bulk that folded into boxes. Gair's invention came about as a result of an accident: he was a Brooklyn printer and paper-bag maker during the 1870s, and one day, while he was printing an order of seed bags, a metal ruler normally used to crease bags shifted in position and cut them. Gair discovered that by cutting and creasing in one operation he could make prefabricated paperboard boxes. Applying this idea to corrugated boxboard was a straightforward development when the material became available around the turn of the twentieth century.

Cardboard boxes were developed in France about 1840 for transporting the Bombyx mori moth and its eggs by silk manufacturers, and for more than a century the manufacture of cardboard boxes was a major industry in the Valréas area.

The advent of lightweight flaked cereals increased the use of cardboard boxes. The first to use cardboard boxes as cereal cartons was the Kellogg Company.

Corrugated (also called pleated) paper was patented in England in 1856, and used as a liner for tall hats, but corrugated boxboard was not patented and used as a shipping material until 20 December 1871. The patent was issued to Albert Jones of New York City for single-sided (single-face) corrugated board. Jones used the corrugated board for wrapping bottles and glass lantern chimneys. The first machine for producing large quantities of corrugated board was built in 1874 by G. Smyth, and in the same year Oliver Long improved upon Jones's design by inventing corrugated board with liner sheets on both sides. This was corrugated cardboard as we know it today.

The first corrugated cardboard box was manufactured in the US in 1895. By the early 1900s, wooden crates and boxes were being replaced by corrugated paper shipping cartons.

By 1908, the terms "corrugated paper-board" and "corrugated cardboard" were both in use in the paper trade.

Crafts and entertainment

Cardboard and other paper-based materials (paperboard, corrugated fiberboard, etc.) can have a post-primary life as a cheap material for the construction of a range of projects, among them being science experiments, children's toys, costumes, or insulative lining. Some children enjoy playing inside boxes.

A common cliché is that, if presented with a large and expensive new toy, a child will quickly become bored with the toy and play with the box instead. Although this is usually said somewhat jokingly, children certainly enjoy playing with boxes, using their imagination to portray the box as an infinite variety of objects. One example of this in popular culture is from the comic strip Calvin and Hobbes, whose protagonist, Calvin, often imagined a cardboard box as a "transmogrifier", a "duplicator", or a time machine.

So prevalent is the cardboard box's reputation as a plaything that in 2005 a cardboard box was added to the National Toy Hall of Fame in the US, one of very few non-brand-specific toys to be honoured with inclusion. As a result, a toy "house" (actually a log cabin) made from a large cardboard box was added to the Hall, housed at the Strong National Museum of Play in Rochester, New York.

The Metal Gear series of stealth video games has a running gag involving a cardboard box as an in-game item, which can be used by the player to try to sneak through places without getting caught by enemy sentries.

Housing and furniture

Living in a cardboard box is stereotypically associated with homelessness. However, in 2005, Melbourne architect Peter Ryan designed a house composed largely of cardboard. More common are small seatings or little tables made from corrugated cardboard. Merchandise displays made of cardboard are often found in self-service shops.

Cushioning by crushing

The mass and viscosity of the enclosed air, together with the limited stiffness of the boxes, help absorb the energy of oncoming objects. In 2012, British stuntman Gary Connery safely landed via wingsuit without deploying his parachute, landing on a 3.6-metre (12 ft) high crushable "runway" (landing zone) built from thousands of cardboard boxes.

Details

Cardboard is a common packaging material used in every industry, as it prevents stored items from being damaged. It is also considered very durable, making it one of the best packaging materials for businesses. Knowing what cardboard boxes are made of, and how readily they can be reused, can have a positive impact both on your business and on the environment. Recycling cardboard also consumes less energy than manufacturing it anew, increasing the chances of saving money as well as resources. The sections below cover what cardboard is made of and whether it is biodegradable.

What Is Cardboard Made Of?

Cardboard is mainly made of paper fibers, which are obtained from wood pulp. Fibers are normally derived from recycled paper or fresh wood, which is processed and formed into a layered structure to produce the rigid, lightweight material known as cardboard.

How Is Cardboard Made?:

1. Initial Development

The process of converting wood into pulp is called the ‘Kraft’ (German for ‘strength’) or sulfate process, and it was invented by German chemist Carl F. Dahl in 1884. He treated wood chips from pine or silver birch trees with a hot mixture of sodium hydroxide and sodium sulfide, followed by many mechanical and chemical steps, to produce paper. This method is the dominant one used in pulp mills today. Typically, cardboard boxes have a Kraft paper outer liner and a test paper inner liner, because Kraft paper is of better quality than test paper and its smooth finish can easily be printed on. Test paper, on the other hand, is made of recycled material or hardwood tree pulp, which is why it is cheaper and has a more abrasive quality.

2. Pulping

Pulping is the process of extracting fibrous materials and cellulose from wood to produce paper; it ensures that the resulting wood chips are clean and suitable for the purpose. First, the trees are felled and cut into logs. The logs then pass through a machine to be debarked and chipped. Of the two pulping methods, mechanical and chemical (Kraft), the chemical one, with its use of sulfide, is more popular, as it gives better fiber separation and removes more lignin (a non-fibrous constituent of wood).

This results in a better-quality paper. The mechanical process (used for test paper), on the other hand, is ideal for a low-cost solution with a higher output of lower quality. It involves grinding debarked logs against a revolving stone or disk grinder to break down the fibers; the stone is sprayed with water to remove the fibers.

3. From Pulp to Cardboard

The first stage is the ‘beating stage’, where the pulp is pounded and squeezed by machines. Based on the intended use of the paper, filler materials such as clays and chalks, or sizings such as gum and starch, can be added, followed by the removal of excess water in a Fourdrinier machine. The nearly finished cardboard is then pressed between wool felt rollers and passed through steam-heated cylinders to eliminate any residual water. For corrugated cardboard, the next step involves winding the sheet onto a wheel. Where cardboard is intended for other uses, the process ends with coating, winding, and calendering (smoothing of the surface).

4. Flutes

Flutes are what make cardboard sturdy and protective. A flute is a wavy piece of paper sandwiched between two flat layers. The same process once used to add ruffles to hats and shirts in the 18th century is now used to create cardboard flutes. Single wall boxes have one layer of fluting, whereas double wall boxes have two layers for extra strength. Depending on the thickness of the flute, there are A, B, C, E, and R-grade flutes.

Corrugated Cardboard

Corrugated boxes are meant to be more durable than traditional cardboard boxes. They are made of three layers – outside layer, inside layer, and fluting with a ruffled shape. This makes the overall packaging lightweight with a high strength-to-weight ratio. While traditional cardboard is better left for items like cards and cereal boxes, corrugated cardboard is ideal for shipping heavier and fragile products.

Is Cardboard Biodegradable?

Cardboard is 100% recyclable and biodegradable. Corrugated board decomposes completely within a year at most. Since it is essentially cellulose, its decay time is short, and when exposed to favourable weather conditions, that is, a humid environment, this degradation accelerates even more.

Cardboard recycling is possible when the cardboard is not contaminated with oil, food, or water. Small businesses can reduce their waste using a baler to recycle cardboard, paper, bottles, and even aluminum/tin cans. Cardboard is also biodegradable because it is made of natural materials ideal for composting. It can be broken down into its natural elements without leaving any harmful toxins. If the cardboard is unbleached, then even better.

The nutrients from the compost can be used to boost the growth of plants in farms and gardens. Also, composting is a better option than recycling in the case of soiled cardboard boxes such as those used in pizza boxes, egg cartons, paper towels, etc. With cardboard used so enormously for packaging in every industry, it is important to learn about its impact on the environment.

Fortunately, paper and cardboard recycling benefits the environment by reducing deforestation and energy utilization. Moreover, corrugated cardboard can biodegrade completely within a year. By reducing, reusing, and recycling, businesses can avoid unnecessary consumption of cardboard and also save money, thus taking a smart step towards sustainability.

Additional Information

Similar to plastic, cardboard packaging can take different shapes, sizes, and weights depending on the type of cardboard, such as corrugated or compact cardboard. Corrugated cardboard can come in different flute sizes or with a different number of flute layers (i.e., single-, double-, or triple-wall board) to provide the desired mechanical strength, which increases with the number of wall layers and the type and size of the flutes. Triple-wall board cartons are stronger and heavier than double- or single-wall board with the same flute. Cardboard boxes and crates at the bottom of a stacked pallet are susceptible to distortion and crushing under the heavy load of the packages in the upper layers of the pallet. They can also be damaged by high moisture during transport, in storage rooms, on contact with water during precooling (hydro- or ice-cooling), or when exposed to heavy rain. Despite some of the above drawbacks, cartons are the dominant packaging material in the horticulture industry: they reduce transportation cost because each unit is lightweight; they can be custom cut and shaped to any wanted configuration, and made robust and stiff by reinforcing the corners to avoid distortion during stacking; they are easy to handle; they are the best material for lettering and printing; and they are recyclable. Cardboard sheets are also used for separating and holding trays of fruits, such as apples, pears, and oranges, which are packed in layers in corrugated boxes for long-distance transport or for export purposes.


#2291 2024-09-08 00:12:23

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,086

Re: Miscellany

2291) Doll

Gist

i) a small figure representing a baby or other human being, especially for use as a child's toy.
ii) a small figure representing a nonhuman character, for use as a toy.

Summary

A doll is a child’s toy modeled in human or animal form. It is perhaps the oldest plaything.

No dolls have been found in prehistoric graves, probably because they were made of such perishable materials as wood and fur or cloth, but a fragment of a Babylonian alabaster doll with movable arms has been recovered. Dolls dating from 3000–2000 bc, carved of flat pieces of wood, geometrically painted, with long, flowing hair made of strings of clay or wood beads, have been found in some Egyptian graves.

Some ancient dolls may have had religious meaning, and some authorities often argue that the religious doll preceded the toy. In ancient Greece and Rome, marriageable girls consecrated their discarded dolls to goddesses. Dolls were buried in children’s graves in Egypt, Greece, and Rome and in early Christian catacombs. Ancient rag, or stuffed, dolls have been found, as well as dolls crocheted of bright wool and others with woolen heads, clothed in coloured wool frocks.

As early as 1413 there were Dochenmacher, or doll makers, in Nürnberg, Germany, which, from the 16th to the 18th century, was the leading manufacturer of dolls and toys. Paris was another early mass-producer of dolls, making chiefly fashion dolls. Doll’s houses were also popular in Europe from the 16th century.

[Image: doll with bisque head, human hair, and kid leather body, manufactured in the late 19th century, probably in Germany]
Doll heads were made of wood, terra-cotta, alabaster, and wax—the last a technique perfected in England by Augusta Montanari and her son Richard (c. 1850–87), who popularized infant dolls. About 1820, glazed porcelain (Dresden) doll heads and unglazed bisque (ceramic) heads became popular. A French bisque doll made by the Jumeau family in the 1860s had a swivel neck; the body was made of kid-covered wood or wire or of kid stuffed with sawdust, a type of manufacture that remained common until it was supplanted by molded plastics in the 20th century. Socket joints, movable eyes, dolls with voices, and walking dolls were introduced in the 19th century, as were paper-doll books and dolls of India rubber or gutta-percha. The period from 1860 to 1890 was the golden age of the elaborately dressed Parisian bisque fashion dolls and the smaller “milliner’s models.”

The oldest American dolls may be those found in Inca and Aztec graves, such as those near the pyramids of Teotihuacán. Colonial dolls mostly followed European models. Among American Indian dolls, the kachina doll of the Pueblo Indians is noteworthy.

In Japan, dolls are more often festival figures than playthings. At the girls’ festival held in March, dolls representing the emperor, empress, and their court are displayed; girls from 7 to 17 visit each other’s collections, and refreshments are offered: first, to their majesties, then to the guests, in a ritual more than 900 years old. Japanese boys also have an annual doll festival, from the first May after they are born until they are about 15 years old. Warrior dolls, weapons, banners, and legendary-figure groups are displayed to encourage chivalrous virtues.

In India, elaborately dressed dolls were given to child brides by both Hindus and Muslims. In Syria, girls of marriageable age hang dolls in their windows. In South Africa, among the Mfengu people, every grown girl is given a doll to keep for her first child; on its birth, the mother receives a second doll to keep for the second child.

In the 20th century, notably popular dolls included the teddy bear (1903); the Kewpie Doll (1903); the Bye-lo Baby, who closed her eyes in sleep (1922); the Dydee and Betsy Wetsy dolls (1937); the Barbie doll (1959); Cabbage Patch Kids (1983); and the American Girls Collection (1986).

Details

A doll is a model typically of a human or humanoid character, often used as a toy for children. Dolls have also been used in traditional religious rituals throughout the world. Traditional dolls made of materials such as clay and wood are found in the Americas, Asia, Africa and Europe. The earliest documented dolls go back to the ancient civilizations of Egypt, Greece, and Rome. They have been made as crude, rudimentary playthings as well as elaborate art. Modern doll manufacturing has its roots in Germany, from the 15th century. With industrialization and new materials such as porcelain and plastic, dolls were increasingly mass-produced. During the 20th century, dolls became increasingly popular as collectibles.

History, types and materials:

Early history and traditional dolls

The earliest dolls were made from available materials such as clay, stone, wood, bone, ivory, leather, or wax. Archaeological evidence places dolls as the foremost candidate for the oldest known toy. Wooden paddle dolls have been found in Egyptian tombs dating to as early as the 21st century BC. Dolls with movable limbs and removable clothing date back to at least 200 BC. Archaeologists have discovered Greek dolls made of clay and articulated at the hips and shoulders. Rag dolls and stuffed animals were probably also popular, but no known examples of these have survived to the present day. Stories from ancient Greece around 100 AD show that dolls were used by little girls as playthings. The Greek word for doll literally meant "little girl". Dolls often had movable limbs, worked by strings or wires. In ancient Rome, dolls were made of clay, wood or ivory. Dolls have been found in the graves of Roman children. Like children today, the younger members of Roman civilization would have dressed their dolls according to the latest fashions. In Greece and Rome, it was customary for boys to dedicate their toys to the gods when they reached puberty and for girls to dedicate their toys to the goddesses when they married. At marriage the Greek girls dedicated their dolls to Artemis and the Roman girls to Venus, but if they died before marriage their dolls were buried with them. Rag dolls are traditionally home-made from spare scraps of cloth material. Roman rag dolls have been found dating back to 300 BC.

Traditional dolls are sometimes used as children's playthings, but they may also have spiritual, magical and ritual value. There is no defined line between spiritual dolls and toys. In some cultures dolls that had been used in rituals were given to children. They were also used in children's education and as carriers of cultural heritage. In other cultures dolls were considered too laden with magical powers to allow children to play with them.

African dolls are used to teach and entertain; they are supernatural intermediaries, and they are manipulated for ritual purposes. Their shape and costume vary according to region and custom. Dolls are frequently handed down from mother to daughter. Akuaba are wooden ritual fertility dolls from Ghana and nearby areas. The best known akuaba are those of the Ashanti people, whose akuaba have large, disc-like heads. Other tribes in the region have their own distinctive style of akuaba.

There is a rich history of Japanese dolls dating back to the Dogū figures (8000–200 BCE) and Haniwa funerary figures (300–600 CE). By the eleventh century, dolls were used as playthings as well as for protection and in religious ceremonies. During Hinamatsuri, the doll festival, hina dolls are displayed. These are made of straw and wood, painted, and dressed in elaborate, many-layered textiles. Daruma dolls are spherical dolls with red bodies and white faces without pupils. They represent Bodhidharma, the East Indian who founded Zen, and are used as good luck charms. Wooden Kokeshi dolls have no arms or legs, but a large head and cylindrical body, representing little girls.

The use of an effigy to perform a spell on someone is documented in African, Native American, and European cultures. Examples of such magical devices include the European poppet and the nkisi or bocio of West and Central Africa. In European folk magic and witchcraft, poppet dolls are used to represent a person for casting spells on that person. The intention is that whatever actions are performed upon the effigy will be transferred to the subject through sympathetic magic. The practice of sticking pins in voodoo dolls has been associated with African-American Hoodoo folk magic. Voodoo dolls are not a feature of the Haitian Vodou religion, but have been portrayed as such in popular culture, and stereotypical voodoo dolls are sold to tourists in Haiti. The voodoo doll concept in popular culture is likely influenced by the European poppet. A kitchen witch is a poppet originating in Northern Europe. It resembles a stereotypical witch or crone and is displayed in residential kitchens as a means to provide good luck and ward off bad spirits.

Hopi Kachina dolls are effigies made of cottonwood that embody the characteristics of the ceremonial Kachina, the masked spirits of the Hopi Native American tribe. Kachina dolls are objects meant to be treasured and studied in order to learn the characteristics of each Kachina. Inuit dolls are made out of soapstone and bone, materials common to the Inuit. Many are clothed with animal fur or skin. Their clothing articulates the traditional style of dress necessary to survive cold winters, wind, and snow. The tea dolls of the Innu people were filled with tea for young girls to carry on long journeys. Apple dolls are traditional North American dolls with a head made from dried apples. In Inca mythology, Sara Mama was the goddess of grain. She was associated with maize that grew in multiples or was similarly strange. These strange plants were sometimes dressed as dolls of Sara Mama. Corn husk dolls are traditional Native American dolls made out of the dried leaves or husk of a corncob. Traditionally, they do not have a face. The making of corn husk dolls was adopted by early European settlers in the United States. Early settlers also made rag dolls and carved wooden dolls, called Pennywoods. La última muñeca, or "the last doll", is a tradition of the Quinceañera, the celebration of a girl's fifteenth birthday in parts of Latin America. During this ritual the quinceañera relinquishes a doll from her childhood to signify that she is no longer in need of such a toy. In the United States, dollmaking became an industry in the 1860s, after the Civil War.

Matryoshka dolls are traditional Russian dolls, consisting of a set of hollow wooden figures that open up and nest inside each other. They typically portray traditional peasants, and the first set was carved and painted in 1890. In Germany, clay dolls have been documented as far back as the 13th century, and wooden doll making from the 15th century. Beginning about the 15th century, increasingly elaborate dolls were made for Nativity scene displays, chiefly in Italy. Dolls with detailed, fashionable clothes were sold in France in the 16th century, though their bodies were often crudely constructed. German and Dutch peg wooden dolls were cheap and simply made and were popular toys for poorer children in Europe from the 16th century. Wood continued to be the dominant material for dolls in Europe until the 19th century. Through the 18th and 19th centuries, wood was increasingly combined with other materials, such as leather, wax and porcelain, and bodies were made more articulated. It is unknown when dolls' glass eyes first appeared, but brown was the dominant eye color up until the Victorian era, when blue eyes became more popular, inspired by Queen Victoria.

Dolls, puppets and masks allow ordinary people to say what would be impossible in a real situation; in Iran, for example, during the Qajar era, people criticised the politics and social conditions of Ahmad Shah's reign via puppetry without any fear of punishment. Under Islamic rules, dancing in public, especially for women, is taboo, but dolls and puppets have free and independent identities and are able to do what is not feasible for a real person. Layli is a hinged dancing doll popular among the Lur people of Iran. The name Layli originates from the Middle Eastern folklore love story Layla and Majnun; Layli is the symbol of the beloved who is spiritually beautiful. Layli also represents and maintains a cultural tradition that is gradually vanishing in urban life.

Industrial era

During the 19th century, dolls' heads were often made of porcelain and combined with a body of leather, cloth, wood, or composite materials such as papier-mâché or composition, a mix of pulp, sawdust, glue and similar materials. With the advent of polymer and plastic materials in the 20th century, doll making largely shifted to these materials. The low cost, ease of manufacture, and durability of plastic meant that new types of dolls could be mass-produced at a lower price. The earliest materials were rubber and celluloid. From the mid-20th century, soft vinyl became the dominant material, in particular for children's dolls. Beginning in the 20th century, both porcelain and plastic dolls have been made directly for the adult collectors' market. Synthetic resins such as polyurethane resemble porcelain in texture and are used for collectible dolls.

Colloquially the terms porcelain doll, bisque doll and china doll are sometimes used interchangeably. But collectors make a distinction between china dolls, made of glazed porcelain, and bisque dolls, made of unglazed bisque or biscuit porcelain. A typical antique china doll has a white glazed porcelain head with painted molded hair and a body made of cloth or leather. The name comes from china being used to refer to the material porcelain. They were mass-produced in Germany, peaking in popularity between 1840 and 1890 and selling in the millions. Parian dolls were also made in Germany, from around 1860 to 1880. They are made of white porcelain similar to china dolls but the head is not dipped in glaze and has a matte finish. Bisque dolls are characterized by their realistic, skin-like matte finish. They had their peak of popularity between 1860 and 1900 with French and German dolls. Antique German and French bisque dolls from the 19th century were often made as children's playthings, but contemporary bisque dolls are predominantly made directly for the collectors market. Realistic, lifelike wax dolls were popular in Victorian England.

Up through the middle of the 19th century, European dolls were predominantly made to represent grown-ups. Childlike dolls, and the later ubiquitous baby doll, did not appear until around 1850. By the late 19th century, however, baby and childlike dolls had overtaken the market. By about 1920, baby dolls were typically made of composition with a cloth body, with painted hair, eyes, and mouth. A voice box sewn into the body cried "ma-ma" when the doll was tilted, giving them the name Mama dolls. In 1923, 80% of all dolls sold to children in the United States were Mama dolls.

Paper dolls are cut out of paper, with separate clothes that are usually held onto the dolls by folding tabs. They often reflect contemporary styles, and 19th century ballerina paper dolls were among the earliest celebrity dolls. The 1930s Shirley Temple doll sold millions and was one of the most successful celebrity dolls. Small celluloid Kewpie dolls, based on illustrations by Rose O'Neill, were popular in the early 20th century. Madame Alexander created the first collectible doll based on a licensed character – Scarlett O'Hara from Gone with the Wind.

Contemporary dollhouses have their roots in European baby house display cases from the 17th century. Early dollhouses were all handmade, but, following the Industrial Revolution and World War II, they were increasingly mass-produced and became more affordable. Children's dollhouses during the 20th century have been made of tin litho, plastic, and wood. Contemporary houses for adult collectors are typically made of wood.

The earliest modern stuffed toys were made in 1880. They differ from earlier rag dolls in that they are made of plush fur-like fabric and commonly portray animals rather than humans. Teddy bears first appeared in 1902–1903.

Black dolls have been designed to resemble dark-skinned persons varying from stereotypical to more accurate portrayals. Rag dolls made by American slaves served as playthings for slave children. Golliwogg was a children's book rag doll character in the late 19th century that was widely reproduced as a toy. The doll has very black skin, eyes rimmed in white, clown lips, and frizzy hair, and has been described as an anti-black caricature. Early mass-produced black dolls were typically dark versions of their white counterparts. The earliest American black dolls with realistic African facial features were made in the 1960s.

Fashion dolls are primarily designed to be dressed to reflect fashion trends and are usually modeled after teen girls or adult women. The earliest fashion dolls were French bisque dolls from the mid-19th century. Contemporary fashion dolls are typically made of vinyl. Barbie, from the American toy company Mattel, dominated the market from her inception in 1959. Bratz was the first doll to challenge Barbie's dominance, reaching forty percent of the market in 2006.

Plastic action figures, often representing superheroes, are primarily marketed to boys. Fashion dolls and action figures are often part of a media franchise that may include films, TV, video games and other related merchandise. Bobblehead dolls are collectible plastic dolls with heads connected to the body by a spring or hook in such a way that the head bobbles. They often portray baseball players or other athletes.

Modern era

With the introduction of computers and the Internet, virtual and online dolls appeared. These are often similar to traditional paper dolls and enable users to design virtual dolls and drag and drop clothes onto dolls or images of actual people to play dress up. These include KiSS, Stardoll and Dollz.

Also with the advent of the Internet, collectible dolls are customized and sold or displayed online. Reborn dolls are vinyl dolls that have been customized to resemble a human baby with as much realism as possible. They are often sold online through sites such as eBay. Asian ball-jointed dolls (BJDs) are cast in polyurethane synthetic resin in a style that has been described as both realistic and influenced by anime. Asian BJDs and Asian fashion dolls such as Pullip and Blythe are often customized and photographed. The photos are shared in online communities.

Uses, appearances and issues

Since ancient times, dolls have played a central role in magic and religious rituals and have been used as representations of deities. Dolls have also traditionally been toys for children. Dolls are also collected by adults, for their nostalgic value, beauty, historical importance or financial value. Antique dolls originally made as children's playthings have become collector's items. Nineteenth-century bisque dolls made by French manufacturers such as Bru and Jumeau may be worth almost $22,000 today.

Dolls have traditionally been made as crude, rudimentary playthings as well as with elaborate, artful design. They have been created as folk art in cultures around the globe, and, in the 20th century, art dolls began to be seen as high art. Artist Hans Bellmer made surrealistic dolls that had interchangeable limbs in 1930s and 1940s Germany as opposition to the Nazi party's idolization of a perfect Aryan body. East Village artist Greer Lankton became famous in the 1980s for her theatrical window displays of drug addicted, anorexic and mutant dolls.

Lifelike or anatomically correct dolls are used by health professionals, medical schools and social workers to train doctors and nurses in various health procedures or to investigate cases of alleged sexual abuse of children. Artists sometimes use jointed wooden mannequins in drawing the human figure. Many ordinary doll brands are also anatomically correct, although most types of dolls are degenitalized.

Egli-Figuren are a type of doll that originated in Switzerland in 1964 for telling Bible stories.

In Western society, a gender difference in the selection of toys has been observed and studied. Action figures that represent traditional masculine traits are popular with boys, who are more likely to choose toys that have some link to tools, transportation, garages, machines and military equipment. Dolls for girls tend to represent feminine traits and come with such accessories as clothing, kitchen appliances, utensils, furniture and jewelry.

Pediophobia is a fear of dolls or similar objects. Psychologist Ernst Jentsch theorized that uncanny feelings arise when there is intellectual uncertainty about whether an object is alive or not. Sigmund Freud further developed these theories. Japanese roboticist Masahiro Mori expanded on them to develop the uncanny valley hypothesis: if an object is obviously enough non-human, its human characteristics will stand out and be endearing; however, if that object reaches a certain threshold of human-like appearance, its non-human characteristics will stand out and be disturbing.

Doll hospitals

A doll hospital is a workshop that specializes in the restoration or repair of dolls. Doll hospitals can be found in countries around the world. One of the oldest doll hospitals was established in Lisbon, Portugal in 1830, and another in Melbourne, reputedly the first such establishment in Australia, was founded in 1888. There is a Doll Doctors Association in the United States. Henri Launay, who has been repairing dolls at his shop in northeast Paris for 43 years, says he has restored over 30,000 dolls in the course of his career. Most of the clients are not children, but adults in their 50s and 60s. Some doll brands, such as American Girl and Madame Alexander, also offer doll hospital services for their own dolls.

Dolls and children's tales

Many books deal with tales about dolls, including Wilhelmina: The Adventures of a Dutch Doll, by Nora Pitt-Taylor, illustrated by Gladys Hall. Rag dolls have featured in a number of children's stories, such as the 19th-century character Golliwogg in The Adventures of Two Dutch Dolls and a Golliwogg by Bertha Upton and Florence K. Upton, and Raggedy Ann in the books by Johnny Gruelle, first published in 1918. The Lonely Doll is a 1957 children's book by the Canadian-born author Dare Wright. The story, told through text and photographs, is about a doll named Edith and two teddy bears.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2292 2024-09-08 21:22:00

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,086

Re: Miscellany

2292) Campfire

Gist

i) an outdoor fire for warmth or cooking, as at a camp.
ii) a gathering around such a fire.
iii) a reunion of soldiers, scouts, etc.

Summary

Camping is a recreational activity in which participants take up temporary residence in the outdoors, usually using tents or specially designed or adapted vehicles for shelter. Camping was at one time only a rough, back-to-nature pastime for hardy open-air lovers, but it later became the standard holiday for vast numbers of ordinary families.

History

The founder of modern recreational camping was Thomas Hiram Holding, who wrote the first edition of The Camper’s Handbook in 1908. His urge to camp derived from his experiences as a boy: in 1853 he crossed the prairies of the United States in a wagon train, covering some 1,200 miles (1,900 km) with a company of 300. In 1877 he camped with a canoe on a cruise in the Highlands of Scotland, and he made a similar trip the next year. He wrote two books on these ventures. Later he used a bicycle as his camping vehicle and wrote Cycle and Camp (1898).

Holding founded the first camping club in the world, the Association of Cycle Campers, in 1901. By 1907 it had merged with a number of other clubs to form the Camping Club of Great Britain and Ireland. Robert Falcon Scott, the famous Antarctic explorer, became the first president of the Camping Club in 1909.

After World War I, Robert Baden-Powell, founder of the Boy Scouts and the Girl Guides, became president of the Camping Club of Great Britain and Ireland, which fostered the establishment of camping organizations in a number of western European countries. In 1932 the International Federation of Camping and Caravanning (Fédération Internationale de Camping et de Caravanning; FICC) was formed—the first international camping organization.

In North America individuals camped in the wilderness for recreation from the early 1870s, traveling on foot, on horseback, or by canoe; but there was no organized camping. Many organizations, such as the Adirondack Mountain Club (founded 1922), the Appalachian Mountain Club (1876), and the Sierra Club (1892), have catered to campers for a long time. However, the organization of campers on a large scale did not develop until after World War II, when increased leisure time and the advent of camping with motorized vehicles caused a tremendous growth in the activity.

The majority of organized campers in North America belong to local clubs, but there are two large-scale national organizations in the United States (National Campers and Hikers Association and North American Family Campers Association) and one in Canada (Canadian Federation of Camping and Caravanning).

Individual camping is very popular in Australia and New Zealand, but organized facilities are relatively few compared with those in North America. Recreational camping continues to increase in popularity in Africa and portions of Asia.

Youth camping

Organized camping of another kind started in the United States in 1861 with a boys’ camp, run by Frederick William Gunn and his wife at Milford-on-the-Sound for students of the Gunnery School for Boys in Washington, Conn. Its success was immediate and was repeated for 18 successive years. Other similar camps began to develop. The first girls’ camp was established in 1888 by Luther Halsey Gulick and his wife on the Thames River in Connecticut.

When the Boy Scouts of America was formed in 1910 by Ernest Thompson Seton, it incorporated camping as a major part of its program. Similar emphasis on camping was to be found in the Girl Guides (founded in Great Britain in 1910), the Camp Fire Boys and Girls (U.S., 1910), and the Girl Scouts (U.S., 1912; patterned after the Girl Guides). Most other organizations concerned with young people, such as the Young Men’s Christian Association (YMCA), the Young Women’s Christian Association (YWCA), and many others, also undertook camp development as an important part of their activities.

Modern camping

All forms of camping, from primitive to motorized, continue to grow in popularity, particularly in the United States, Canada, and western Europe. Much of this growth is the result of the proliferation of campsites for recreational vehicles (RVs). In particular, many public and commercial campsites cater to RVs by setting aside paved parking regions in picturesque locations. Camping on public land is especially popular in the United States and Canada, where federal and regional government agencies strive to meet the burgeoning public demand. Commercial RV campgrounds typically have electrical and water hookups that provide most of the conveniences of home in an outdoor setting.

There have been many technical advances in camping materials and gear. Lightweight nylon tents are easier to pack and set up than their canvas predecessors, they offer superior protection from rain, and they are also easier to carry, which has facilitated a boom in hiking and camping. Lightweight aluminum cookware and portable stoves have also reduced the overall weight of gear for primitive campers.

Details

A campfire is a fire at a campsite that provides light and warmth, and heat for cooking. It can also serve as a beacon, and as an insect and predator deterrent. Established campgrounds often provide a stone or steel fire ring for safety. Campfires are a popular feature of camping. At summer camps, the word campfire often refers to an event (a ceremony, a get-together, etc.) at which there is a fire; some camps refer to the fire itself as a campfire.

History:

First campfire

A new analysis of burned antelope bones from caves in Swartkrans, South Africa, confirms that Australopithecus robustus and/or Homo erectus built campfires roughly 1.6 million years ago. Nearby evidence within Wonderwerk Cave, at the edge of the Kalahari Desert, has been called the oldest known controlled fire. Microscopic analysis of plant ash and charred bone fragments suggests that materials in the cave were not heated above about 1,300 °F (704 °C). This is consistent with preliminary findings that the fires burned grasses, brush, and leaves. Such fuel would not produce hotter flames. The data suggests humans were cooking prey by campfire as far back as the first appearance of Homo erectus 1.9 million years ago.
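
(As a quick arithmetic check of that temperature conversion — mine, not the source's: °C = (°F − 32) × 5/9, and (1300 − 32) × 5/9 ≈ 704 °C.)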

Safety:

Finding a site

Ideally, campfires should be made in a fire ring. If a fire ring is not available, a temporary fire site may be constructed. Bare rock or unvegetated ground is ideal for a fire site. Alternatively, turf may be cut away to form a bare area and carefully replaced after the fire has cooled to minimize damage. Another way is to cover the ground with sand, or other soil mostly free of flammable organic material, to a depth of a few inches. A ring of rocks is sometimes constructed around a fire. Fire rings, however, do not fully protect material on the ground from catching fire. Flying embers are still a threat, and the fire ring may become hot enough to ignite material in contact with it, or to heat moisture within the rocks to vapor, cracking them.

Safety measures

Campfires can spark wildfires. As such, it is important for the fire builder to take multiple safety precautions, including:

* Avoiding building campfires under hanging branches or on steep slopes, and clearing a ten-foot-diameter circle around the fire of all flammable debris.
* Having enough water nearby, and a shovel to smother an out-of-control fire with dirt.
* Minimizing the size of the fire to prevent problems from occurring.
* Never leaving a campfire unattended.
* When extinguishing a campfire, using plenty of water or dirt, stirring the mixture, adding more water, and then checking that there are no burning embers left whatsoever.
* Never burying hot coals, as they can continue to burn and cause root fires or wildfires, and being aware of roots when digging a hole for the fire.
* Making sure the fire pit is large enough for the campfire, keeping combustibles away from it, and avoiding building the campfire on a windy day.

Types of fuel

There are three types of material involved in building a fire without manufactured fuels or modern conveniences such as lighters:

* Tinder lights easily and is used to start an enduring campfire. It is anything that can be lit with a spark and is usually classified as being thinner than your little finger. Before matches and lighters, the tinder of choice was amadou, used with flint and steel. A few decent natural tinders are cotton, birch bark, cedar bark, and fatwood, where available, followed by dead, dry pine needles or grass; a more comprehensive list is given in the article on tinder. Though not natural, steel wool makes excellent tinder and can be ignited without difficulty with flint and steel or with a 9-volt battery.
* Kindling wood is an arbitrary classification including anything bigger than tinder but smaller than fuel wood. In fact, there are gradations of kindling, from sticks thinner than a finger to those as thick as a wrist. A quantity of kindling sufficient to fill a hat may be enough, but more is better. A faggot is a related term indicating a bundle of small branches used to feed a small fire or to develop a bigger fire out of a small one.
* Fuel wood can be different types of timber, ranging from small logs two or three inches (5–8 cm) across to larger logs that can burn for hours. It is typically difficult to gather without a hatchet or other cutting tool. In heavily used campsites, fuel wood can be hard to find, so it may have to be purchased at a nearby store or brought from home. However, untreated wood should not be transported, due to the probability that invasive insects will be transported with it; heat-treated wood such as kiln-dried lumber is safe to transport. In the United States, areas that allow camping, such as state parks and national parks, often let campers collect firewood lying on the ground. Some parks do not allow this for various reasons, e.g. if they have erosion problems from campgrounds near dunes. Parks almost always forbid cutting living trees, and may also prohibit collecting dead parts of standing trees.
* In most realistic cases nowadays, non-natural additions to the fuels mentioned above will be used. Often, charcoal lighters such as hexamine fuel tablets or ethyl alcohol will be used to start the fire, as well as various types of scrap paper. With the proliferation of packaged food, it is quite likely that plastics will be incinerated as well, a practice that not only produces toxic fumes but also leaves polluted ashes behind, because of incomplete combustion at the relatively low temperature of an open fire.

Construction styles

There are a variety of designs to choose from in building a campfire, and a functional design is important in the early stages of a fire. Most of these designs make no mention of fuelwood because, in most of them, fuelwood is never placed on the fire until the kindling is burning strongly.

Teepee

The tipi (or teepee) fire-build takes some patience to construct. First, the tinder is piled up in a compact heap. The smaller kindling is arranged around it, like the poles of a tipi. For added strength, it may be possible to lash some of the sticks together. A tripod lashing is quite difficult to execute with small sticks, so a clove hitch should suffice. (Synthetic rope should be avoided, since it produces pollutants when it burns.) Then the larger kindling is arranged above the smaller kindling, taking care not to collapse the tipi. A separate tipi built as a shell around the first one may work better. Tipi fires are excellent for producing heat to keep people warm, as hot gases rise quickly from the bottom to the top as more sticks are added.

One downside to a tipi fire is that the logs become unstable as they burn and can fall over, which is especially concerning with a large fire.

Log cabin

A log cabin fire-build likewise begins with a tinder pile. The kindling is then stacked around it, as in the construction of a log cabin. The first two kindling sticks are laid parallel to each other, on opposite sides of the tinder pile. The second pair is laid on top of the first, at right angles to it, and also on opposite sides of the tinder. More kindling is added in the same manner. The smallest kindling is placed over the top of the assembly. Of all the fire-builds, the log cabin is the least vulnerable to premature collapse, but it is also inefficient because it makes the worst use of convection to ignite progressively larger pieces of fuel. However, these qualities make the log cabin an ideal cooking fire as it burns for a long period of time and can support cookware.

A variation on the log cabin starts with two pieces of fuelwood with a pile of tinder between them, and small kindling laid over the tops of the logs, above the tinder. The tinder is lit, and the kindling is allowed to catch fire. When it is burning briskly, it is broken and pushed down into the consumed tinder, and the larger kindling is placed over the top of the logs. When that is burning well, it is also pushed down. Eventually, a pile of kindling burns between two pieces of fuelwood, and soon the logs catch fire from it.

Another variation is called the funeral pyre method because it is used for building funeral pyres. Its main difference from the standard log cabin is that it starts with thin pieces and moves up to thick pieces. If built on a large scale, this type of fire-build collapses in a controlled manner without restricting the airflow.

Hybrid

A hybrid fire combines the elements of both the tipi and the log cabin creating an easily lit yet stable fire structure. The hybrid is made by first erecting a small tipi and then proceeding to construct a log cabin around it. This fire structure combines benefits of both fire types: the tipi allows the fire to ignite easily and the log cabin sustains the fire for a long time.

Cross-fire

A cross-fire is built by positioning two pieces of wood with the tinder in between. Once the fire is burning well, additional pieces of wood are placed on top in layers that alternate directions. This type of fire creates coals suitable for cooking.

Lean-to

A lean-to fire-build starts with the same pile of tinder as the tipi fire-build. Then, a long, thick piece of kindling is driven into the ground at an angle, so that it overhangs the tinder pile. The smaller pieces of kindling are leaned against the big stick so that the tinder is enclosed between them.

In an alternative method, a large piece of fuelwood or log can be placed on the ground next to the tinder pile. Then kindling is placed with one end propped up by the larger piece of fuelwood, and the other resting on the ground so that the kindling is leaning over the tinder pile. This method is useful in very high winds, as the piece of fuel wood acts as a windbreak.

Rakovalkea

The traditional Finnish rakovalkea (lit. 'slit bonfire'), or nying in Scandinavian languages, also called by English terms long log fire or gap fire, is constructed by placing one long and thick piece of fuelwood (log) atop another, parallel, and bolstering them in place with four sturdy posts driven into the ground. Traditionally, whole un-split tree trunks provide the fuelwood. Kindling and tinder are placed between the logs in sufficient quantity (while avoiding the very ends) to raise the upper log and allow ventilation. The tinder is always lit at the center so the bolstering posts near the ends do not burn prematurely.

The rakovalkea has two significant features. First, it burns slowly but steadily when lit; it does not require arduous maintenance, but burns for a very long time. A well-constructed rakovalkea of two thick logs, each two meters in length, can warm two lean-to shelters for a whole sleeping shift. The construction causes the logs themselves to protect the fire from the wind, so exposure to smoke is unlikely for the sleepers; nevertheless, someone should always keep watch in case of an emergency. Second, it can be easily scaled to larger sizes (for a feast), limited only by the length of the available tree trunks. The arrangement is also useful as a beacon fire, i.e. a temporary light signal for ships far out at sea.

Swedish torch

The Swedish torch (Schwedenfackel or Schwedenfeuer) is also known by other names, including Swedish (log) candle, and Swedish log stove.

This fire is unique because it uses only a single, fairly large piece of wood as its fuel. The log is partially cut through (some variants involve splitting it completely) and then set upright. Ideally, the log should be cut evenly and placed on a level surface for stability. Tinder and kindling are added to the chamber formed by the initial cuts. Eventually, the fire becomes self-feeding. The flat, circular top provides a surface for placing a kettle or pan for cooking, boiling liquids, and more. In some instances, the elevated position of the fire can serve as a better beacon than a typical ground-based campfire.

Keyhole fire

A keyhole fire is made in a keyhole-shaped fire ring, and is used in cooking. The large round area is used to build a fire to create coals. As coals develop, they are scraped into the rectangular area used for cooking.

Top lighter

A "top lighter" fire is built similar to a log cabin or pyre, but instead of the tinder and kindling being placed inside the cabin, it is placed in a tipi on top. The small tipi is lighted on top, and the coals eventually fall down into the log cabin. Outdoor youth organizations often build these fires for "council fires" or ceremonial fires. They burn predictably, and with some practice a builder can estimate how long they will last. They also do not throw off much heat, which isn't needed for a ceremonial fire. The fire burns from the top down, with the layer of hot coals and burning stubs igniting the next layer down.

Another variation on the top lighter, log cabin, or pyre is known by several names, most notably the pyramid, self-feeding, and upside-down method. The reasoning for this method is twofold. First, the layers of fuelwood take in the heat from the initial tinder and kindling, so that heat is not lost to the surrounding ground; in effect, the fire is "off the ground" and burns its way down through its course. Second, this fire type requires minimal labor, making it ideal as a fire of choice before bedding down for the evening, without having to get up periodically to add fuelwood or stoke the fire to keep it going. Start by laying the largest fuelwood in a parallel layer, then continue to add increasingly smaller fuelwood layers, each perpendicular to the last. Once enough wood is piled, there should be a decent platform on which to build the tipi of tinder and kindling that initiates the fire.

Dakota smokeless pit fire

A Dakota smokeless pit fire is a tactical fire used by the United States military, as the flame has a low light signature, produces reduced smoke, and is easier to ignite under strong wind conditions. Two small holes are dug in the ground: one vertical, for the firewood, and the other slanted down to the bottom of the first hole, to provide a draft of air for nearly complete combustion. Optional additions are flat stones to partially cover the first hole and support cookery, and a tree over the pits to disperse the smoke.

Star fire

A star fire, or Indian fire, is the design often depicted as the campfire of the old West. Six or so logs are laid out like the spokes of a wheel (star-shaped). The fire is started at the "hub", and each log is pushed towards the center as the flames consume its end.

Ignition

Once the fire is built, the tinder is lit using one of several methods:

* smoking black powder produced by friction between a stick (or a bow drill or pump drill) and a hole or crack in dry wood,
* a magnifying glass focusing sunlight,
* smoking material produced by a fire piston,
* smoking black powder produced by a bamboo fire saw,
* smoking material produced by a fire roll (small amount of cotton mixed with ash or iron rust, rolled vigorously between two flat stones or planks, until it starts smoking),
* smoking material produced by a piece of flint or ferro-rod struck against steel over amadou or other initial tinder, or
* an ignition device, such as a match or a lighter.

A reasonably skilful fire-builder using reasonably good material needs only one match. The tinder burns brightly, but is reduced to glowing embers within half a minute. If the kindling does not catch fire, the fire-builder must gather more tinder, determine what went wrong, and try to fix it.

One of five problems can prevent a fire from lighting properly: wet wood, wet weather, too little tinder, too much wind, or a lack of oxygen. Rain will douse a fire, but a combination of wind and fog also has a stifling effect. Metal fire rings generally do a good job of keeping out wind, but some of them are so high as to impede the circulation of oxygen in a small fire. To make matters worse, these tall fire rings also make it very difficult to blow on the fire properly.

A small, enclosed fire that has slowed down may require vigorous blowing to get it going again, but excess blowing can extinguish a fire. Most large fires easily create their own circulation, even in unfavourable conditions, but the variant log-cabin fire-build suffers from a chronic lack of air so long as the initial structure is maintained.

Once large kindling is burning, all kindling is placed in the fire, then the fuel wood is placed on top of it (unless, as in the rakovalkea fire-build, it is already there).

Activities

Campfires have been used for cooking since time immemorial. Possibly the simplest method of cooking over a campfire, and one of the most common, is to roast food on long skewers that can be held above red glowing embers, or to the side near the flames (not over the flames, to avoid soot contamination and burnt food). This is a popular technique for cooking hot dogs or toasting marshmallows for making s'mores. This type of cooking over the fire typically involves comfort foods that are easy to prepare, and, unlike cooking in a kitchen, there is little cleanup involved. Another technique is to use pie irons—small iron molds with long handles. Campers put slices of bread with some kind of filling into the molds and put them over hot coals to cook. Campers sometimes use elaborate grills, cast iron pots, and fire irons to cook. Often, however, they use portable stoves for cooking instead of campfires.

Other practical, though not commonly needed, applications for campfires include drying wet clothing, alleviating hypothermia, and distress signaling. Most campfires, though, are exclusively for recreation, often as a venue for conversation, storytelling, or song. Another traditional campfire activity involves impaling marshmallows on sticks or uncoiled wire coat hangers, and roasting them over the fire. Roasted marshmallows may also be used for s'mores.

Dangers

Besides the danger of people receiving burns from the fire or embers, campfires may spread into a larger fire. A campfire may burn out of control in two basic ways: on the ground or in the trees. Dead leaves or pine needles on the ground may ignite from direct contact with burning wood, or from thermal radiation. If a root, particularly a dead one, is exposed to fire, it may smoulder underground and ignite the parent tree long after the original fire is doused and the campers have left the area. Alternatively, airborne embers (or their smaller kin, sparks) may ignite dead material in overhanging branches. This latter threat is less likely, but a fire in a branch is extremely difficult to put out without firefighting equipment, and may spread more quickly than a ground fire.

Embers may simply fall off logs and float away in the air, or exploding pockets of sap may eject them at high speed. With these dangers in mind, some places prohibit all open fires, particularly at times prone to wildfires.

Many public camping areas prohibit campfires. Public areas with large tracts of woodland usually have signs that indicate the fire danger level, which usually depends on recent rain and the amount of deadfall or dry debris. Even in safer times, it is common to require registration and permits to build a campfire. Such areas are often kept under observation by rangers, who will dispatch someone to investigate any unidentified plume of smoke.

Extinguishing the fire

Leaving a fire unattended can be dangerous. Any number of accidents might occur in the absence of people, leading to property damage, personal injury or possibly a wildfire. Ash is a good insulator, so embers left overnight only lose a fraction of their heat. It is often possible to restart the new day's fire using the embers.

To properly cool a fire, water is splashed on all the embers, including places that are not glowing red. Splashing, rather than pouring, uses the water more effectively and efficiently to extinguish the fire. The water boils violently and carries ash into the air with it, dirtying anything nearby but not posing a safety hazard. Water is poured continuously until the hissing stops, then the ashes are stirred to ensure that water reaches the entire fire, and more water is added if necessary. When the fire is fully extinguished, the ashes are cool to the touch.

If water is scarce, sand is used to deprive the fire of oxygen. Sand works well, but is less effective than water at absorbing heat. Once the fire is covered thoroughly with sand, water is then added over the fire.

When winter or "snow" camping with an inch or more of snow on the ground, neither of the above protocols is necessary—simply douse visible flames before leaving. In lightly used wilderness areas, the area around the campfire is cleaned up after the fire is extinguished to make it look untouched.

Campfire ashes are sometimes used in ceremonies like the Scouting campfire ash ceremony.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2293 2024-09-09 00:42:24

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,086

Re: Miscellany

2293) Picture tube

Gist

A picture tube is a cathode-ray tube on which the picture appears in a television.

In a television picture tube, the electrons shot from the electron gun strike special phosphors on the inside surface of the screen, and these emit light, which thereby re-creates the televised images.

Summary

A cathode-ray tube (CRT) is a vacuum tube containing one or more electron guns, which emit electron beams that are manipulated to display images on a phosphorescent screen. The images may represent electrical waveforms on an oscilloscope, a frame of video on an analog television set (TV), digital raster graphics on a computer monitor, or other phenomena like radar targets. A CRT in a TV is commonly called a picture tube. CRTs have also been used as memory devices, in which case the screen is not intended to be visible to an observer. The term cathode ray was used to describe electron beams when they were first discovered, before it was understood that what was emitted from the cathode was a beam of electrons.

In CRT TVs and computer monitors, the entire front area of the tube is scanned repeatedly and systematically in a fixed pattern called a raster. In color devices, an image is produced by controlling the intensity of each of three electron beams, one for each additive primary color (red, green, and blue) with a video signal as a reference. In modern CRT monitors and TVs the beams are bent by magnetic deflection, using a deflection yoke. Electrostatic deflection is commonly used in oscilloscopes.

The tube is a glass envelope which is heavy, fragile, and long from front screen face to rear end. Its interior must be close to a vacuum to prevent the emitted electrons from colliding with air molecules and scattering before they hit the tube's face. Thus, the interior is evacuated to less than a millionth of atmospheric pressure. As such, handling a CRT carries the risk of violent implosion that can hurl glass at great velocity. The face is typically made of thick lead glass or special barium-strontium glass to be shatter-resistant and to block most X-ray emissions. This tube makes up most of the weight of CRT TVs and computer monitors.

Since the early 2010s, CRTs have been superseded by flat-panel display technologies such as LCD, plasma display, and OLED displays which are cheaper to manufacture and run, as well as significantly lighter and thinner. Flat-panel displays can also be made in very large sizes whereas 40–45 inches (100–110 cm) was about the largest size of a CRT.

A CRT works by electrically heating a tungsten coil which in turn heats a cathode in the rear of the CRT, causing it to emit electrons which are modulated and focused by electrodes. The electrons are steered by deflection coils or plates, and an anode accelerates them towards the phosphor-coated screen, which generates light when hit by the electrons.

Details:

Basic structure

A typical television screen is located inside a slightly curved glass plate that closes the wide end, or face, of a highly evacuated, funnel-shaped CRT. Picture tubes vary widely in size and are usually measured diagonally across the tube face. Tubes having diagonals from as small as 7.5 cm (3 inches) to 46 cm (18 inches) are used in typical portable receivers, whereas tubes measuring from 58 to 69 cm (23 to 27 inches) are used in table- and console-model receivers. Picture tubes as large as 91 cm (36 inches) are used in very large console-model receivers, and rear-screen projection picture tubes are used in even larger consoles.
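
As an aside (my arithmetic, not the article's): tube faces were close to a 4:3 rectangle, so the width and height follow from the diagonal by Pythagoras. Because 3-4-5 is a right triangle, width = 0.8 × diagonal and height = 0.6 × diagonal; a 69 cm (27-inch) tube is therefore roughly 55 cm wide and 41 cm tall.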

The screen itself, in monochrome receivers, is typically composed of two fluorescent materials, such as silver-activated zinc sulfide and silver-activated zinc cadmium sulfide. These materials, known as phosphors, glow with blue and yellow light, respectively, under the impact of high-speed electrons. The phosphors are mixed, in a fine dispersion, in such proportion that the combination of yellow and blue light produces white light of slightly bluish cast. A water suspension of these materials is settled on the inside of the faceplate of the tube during manufacture, and this coating is overlaid with a film of aluminum sufficiently thin to permit bombarding electrons to pass without hindrance. The aluminum provides a mirror surface that prevents backward-emitted light from being lost in the interior of the tube and reflects it forward to the viewer.

The colour picture tube is composed of three sets of individual phosphor dots, which glow respectively in the three primary colours (red, blue, and green) and which are uniformly interspersed over the screen. At the opposite, narrow end of the tube are three electron guns, cylindrical metal structures that generate and direct three separate streams of free electrons, or electron beams. One of the beams is controlled by the red primary-colour signal and impinges on the red phosphor dots, producing a red image. The second beam produces a blue image, and the third, a green image.

Electron guns

At the rear of each electron gun is the cathode, a flat metal support covered with oxides of barium and strontium. These oxides have a low electronic work function; when heated by a heater coil behind the metal support, they liberate electrons. In the absence of electric attraction, the free electrons form a cloud immediately in front of the oxide surface.

Directly in front of the cathode is a cylindrical sleeve that is made electrically positive with respect to the cathode (the element that emits the electrons). The positively charged sleeve (the anode) draws the negative electrons away from the cathode, and they move down the sleeve toward the viewing screen at the opposite end of the tube. They are intercepted, however, by control electrodes, flat disks having small circular apertures at their centre. Some of the moving electrons pass through the apertures; others are held back.

The television picture signal is applied between the control electrode and the cathode. During those portions of the signal wave that make the potential of the control electrode less negative, more electrons are permitted to pass through the control aperture, whereas during the more negative portions of the wave, fewer electrons pass. The receiver’s brightness control applies a steady (though adjustable) voltage between the control electrode and the cathode. This voltage determines the average number of electrons passing through the aperture, whereas the picture signal causes the number of electrons passing through the aperture to vary from the average and thus controls the brightness of the spot produced on the fluorescent screen.

As the electrons emerge from the control electrode, each electron experiences a force that directs it toward the centre of the viewing screen. From the aperture, the controlled stream of electrons passes into the glass neck of the tube. Inside the latter is a graphite coating, which extends throughout the funnel of the tube and connects to the back coating of the phosphor screen. The full value of positive high voltage (typically 15,000 volts) is applied to this coating, and it therefore attracts and accelerates the electrons from the sleeve, along the neck and into the funnel, and toward the screen of the tube. The electron beam is thus brought to focus on the screen, and the light produced there is the scanning spot. Additional focusing may be provided by an adjustable permanent magnet surrounding the neck of the tube. The scanning spot must be intrinsically very brilliant, since (by virtue of the integrating property of the eye) the light in the spot is effectively spread out over the whole area of the screen during scanning.
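
A back-of-the-envelope figure, not from the article: an electron accelerated through a potential difference V gains kinetic energy eV = ½mv², so at the typical 15,000 volts its final speed is v = √(2eV/m) = √(2 × 1.6 × 10⁻¹⁹ C × 15,000 V ÷ 9.1 × 10⁻³¹ kg) ≈ 7.3 × 10⁷ m/s, about a quarter of the speed of light. (The relativistic correction at that speed is only a few per cent, so the classical formula is adequate for a rough estimate.)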

Deflection coils

Scanning is accomplished by two sets of electromagnet coils. These coils must be precisely designed to preserve the focus of the scanning spot no matter where it falls on the screen, and the magnetic fields they produce must be so distributed that deflections occur at uniform velocities.

Deflection of the beam occurs by virtue of the fact that an electron in motion through a magnetic field experiences a force at right angles both to its direction of motion and to the direction of the magnetic lines of force. The deflecting magnetic field is passed through the neck of the tube at right angles to the electron-beam direction. The beam thus incurs a force tending to change its direction at right angles to its motion, the amount of the force being proportional to the amount of current flowing in the deflecting coils.
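
In symbols, this is the magnetic part of the Lorentz force — standard physics rather than anything specific to this article: F = −e(v × B), with magnitude F = evB when the field is perpendicular to the beam. Since the field B produced by the yoke is proportional to the coil current, the deflecting force, and hence the spot's displacement on the screen, tracks that current.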

To cause uniform motion along each line, the current in the horizontal deflection coil, initially negative, becomes steadily smaller, reaching zero when the spot passes the centre of the line and then increasing in the positive direction until the end of the line is reached. The current is then reversed and very rapidly goes through the reverse sequence of values, bringing the scanning spot to the beginning of the next line. The rapid rate of change of current during the retrace motions causes pulses of a high voltage to appear across the circuit that feeds current to the coil, and the succession of these pulses, smoothed into direct current by a rectifier tube, serves as the high-voltage power supply.
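
That ramp-and-snap-back current is simply a sawtooth wave, and it is easy to sketch. Below is a minimal Python illustration of one horizontal scan line as just described; the line rate is the approximate NTSC figure, while the trace/retrace split and peak current are values I have assumed for illustration:

    import numpy as np

    LINE_RATE = 15_750        # horizontal lines per second (approx. NTSC)
    TRACE_FRACTION = 0.82     # assumed fraction of each line spent tracing
    I_PEAK = 1.0              # peak coil current, arbitrary units

    def deflection_current(t):
        """Horizontal coil current at time t (seconds), in -I_PEAK..+I_PEAK."""
        phase = (t * LINE_RATE) % 1.0       # position within the current line
        if phase < TRACE_FRACTION:          # slow, uniform left-to-right trace
            return -I_PEAK + 2 * I_PEAK * phase / TRACE_FRACTION
        # fast retrace back to the start of the next line
        return I_PEAK - 2 * I_PEAK * (phase - TRACE_FRACTION) / (1 - TRACE_FRACTION)

    # sample one full line: the current sweeps -1 to +1, then snaps back
    ts = np.linspace(0, 1 / LINE_RATE, 9, endpoint=False)
    print([round(deflection_current(t), 2) for t in ts])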

A similar action in the vertical deflection coils produces the vertical scanning motion. The two sets of deflection coils are combined in a structure known as the deflection yoke, which surrounds the neck of the picture tube at the junction of the neck with the funnel section.

The design trend in picture tubes has called for wider funnel sections and shallower overall depth from electron gun to viewing screen, resulting in correspondingly greater angles of deflection. The increase in deflection angle from 55° in the first (1946) models to 114° in models produced nowadays has required corresponding refinement of the deflection system because of the higher deflection currents required and because of the greater tendency of the scanning spot to go out of focus at the edges of the screen.
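
Rough geometry shows why the wider angle buys so much depth (my estimate, treating the screen as flat and ignoring the length of the gun itself): the deflection-point-to-screen depth is about (d/2) ÷ tan(θ/2) for screen diagonal d and full deflection angle θ. With d = 69 cm, a 55° tube needs roughly 34.5 ÷ tan 27.5° ≈ 66 cm of depth, while a 114° tube needs only about 34.5 ÷ tan 57° ≈ 22 cm.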

Shadow masks and aperture grilles

The sorting out of the three beams so that they produce images of only the intended primary colour is performed by a thin steel mask that lies directly behind the phosphor screen. This mask contains about 200,000 precisely located holes, each accurately aligned with three different coloured phosphor dots on the screen in front of it. Electrons from the three guns pass together through each hole, but each electron beam is directed at a slightly different angle. The angles are such that the electrons arising from the gun controlled by the red primary-colour signal fall only on the red dots, being prevented from hitting the blue and green dots by the shadowing action of the mask. Similarly, the “blue” and “green” electrons fall only on the blue and green dots, respectively. The colour dots of which each image is formed are so small and so uniformly dispersed that the eye does not detect their separate presence, although they are readily visible through a magnifying glass. The primary colours in the three images thereby mix in the mind of the viewer, and a full-colour rendition of the image results. A major improvement consists in surrounding each colour dot with an opaque black material, so that no light can emerge from the portions of the screen between dots. This permits the screen to produce a brighter image while maintaining the purity of the colours.

This type of colour tube is known as the shadow-mask tube. It has several shortcomings: (1) electrons intercepted by the mask cannot produce light, and the image brightness is thereby limited; (2) great precision is needed to achieve correct alignment of the electron beams, the mask holes, and the phosphor dots at all points in the scanning pattern; and (3) precisely congruent scanning patterns, as among the three beams, must be produced. In the late 1960s a different type of mask, the aperture grille, was introduced in the Sony Corporation's Trinitron tube. In Trinitron-type tubes the shadow mask is replaced by a metal grille having short vertical slots extending from the top to the bottom of the screen. The three electron beams pass through the slots to the coloured phosphors, which are in the form of vertical stripes aligned with the slots. The slots direct the majority of the electrons to the phosphors, causing a much lower percentage of the electrons to be intercepted by the grille, and a brighter picture results.

Liquid crystal displays

The CRT offers a high-quality, bright image at a reasonable cost, and it has been the workhorse of receivers since television began. However, it is also large, bulky, and breakable, and it requires extremely high voltages to accelerate the electron beam as well as large currents to deflect the beam. The search for its replacement has led to the development of other display technologies, the most promising of which thus far are liquid crystal displays (LCDs).

The physics of liquid crystals is discussed in the article liquid crystal, and LCDs are described in detail in the article liquid crystal display. LCDs for television employ the nematic type of liquid crystal, whose molecules have elongated cigar shapes that normally lie in planes parallel to one another—though they can be made to change their orientation under the influence of an electric or magnetic field. Nematic crystal molecules tend to be influenced in their alignment by the walls of the container in which they are placed. If the molecules are sandwiched between two glass plates rubbed in the same direction, the molecules will align themselves in that direction, and if the two plates are twisted 90° relative to each other, the molecules close to each plate will move accordingly, resulting in the twisted-nematic layout. In LCDs the glass plates are light-polarizing filters, so that polarized light passing through the bottom plate will twist 90° along with the molecules, enabling it to emerge through the filter of the top plate. However, if an external electric field is applied across the assembly, the molecules will realign along the field, in effect untwisting themselves. The polarization of the incoming light will then not be changed, so it will not be able to pass through the second filter.

Applied to only a small portion of a liquid crystal, an external electric field can have the effect of turning on or off a small picture element, or pixel. An entire screen of pixels can be activated through an “active matrix” LCD, in which a grid of thousands of thin-film transistors and capacitors is plated transparently onto the surface of the LCD in order to cause specific portions of the crystal to respond rapidly. A colour LCD uses three elements, each with its own primary-colour filter, to create a colour display.
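
In outline, an active-matrix panel is refreshed one row at a time: a row line switches on every thin-film transistor in that row, and the column lines then charge each pixel's capacitor to its target level, which the capacitor holds until the next refresh. A minimal sketch of that scan order (the 4×4 size and variable names are illustrative, not from any real driver):

    ROWS, COLS = 4, 4
    target = [[0.0] * COLS for _ in range(ROWS)]  # desired drive level per pixel, 0.0-1.0
    held = [[0.0] * COLS for _ in range(ROWS)]    # charge stored on each pixel's capacitor

    target[1][2] = 0.8  # e.g. one pixel driven to 80%

    for r in range(ROWS):              # select one row line: its transistors switch on
        for c in range(COLS):          # column drivers charge the selected row's pixels
            held[r][c] = target[r][c]  # capacitor holds this level until the next refresh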

Because LCDs do not emit their own light, they must have a source of illumination, usually a fluorescent tube for backlighting. It takes time for the liquid crystal to respond to electric charge, and this can cause blurring of motion from frame to frame. Also, the liquid nature of the crystal means that adjacent areas cannot be completely isolated from one another, a problem that reduces the maximum resolution of the display. However, LCDs can be made very thin, lightweight, and flat, and they consume very little electric power. These are strong advantages over the CRT. But large LCDs are still extremely expensive, and they have not managed to displace the picture tube from its supreme position among television receivers. LCDs are used mostly in small portable televisions and also in handheld video cameras (camcorders).

Plasma display panels

Plasma display panels (PDPs) overcome some of the disadvantages of both CRTs and LCDs. They can be manufactured easily in large sizes (up to 125 cm, or 50 inches, in diagonal size), are less than 10 cm (4 inches) thick, and have wide horizontal and vertical viewing angles. Being light-emissive, like CRTs, they produce a bright, sharply focused image with rich colours. But much larger voltages and power are required for a plasma television screen (although less than for a CRT), and, as with LCDs, complex drive circuits are needed to access the rows and columns of the display pixels. Large PDPs are being manufactured particularly for wide-screen, high-definition television.

The basic principle of a plasma display, shown in the diagram, is similar to that of a fluorescent lamp or neon tube. An electric field excites the atoms in a gas, which then becomes ionized as a plasma. The atoms emit photons at ultraviolet wavelengths, and these photons collide with a phosphor coating, causing the phosphor to emit visible light.

A large matrix of small, phosphor-coated cells is sandwiched between two large plates of glass, with each cluster of red, green, and blue cells forming the three primary colours of a pixel. The space between the plates is filled with a mixture of inert gases, usually neon and xenon (Ne-Xe) or helium and xenon (He-Xe). A matrix of electrodes is deposited on the inner surfaces of the glass and is insulated from the gas by dielectric coatings. Running horizontally on the inner surface of the front glass are pairs of transparent electrodes, each pair having one “sustain” electrode and one “discharge” electrode. The rear glass is lined with vertical columns of “addressable” electrodes, running at right angles to the electrodes on the front plate. A plasma cell, or subpixel, occurs at the intersection of a pair of transparent sustain and discharge electrodes and an address electrode. An alternating current is applied continuously to the sustain electrode, the voltage of this current carefully chosen to be just below the threshold of a plasma discharge. When a small extra voltage is then applied across the discharge and address electrodes, the gas forms a weakly ionized plasma. The ionized gas emits ultraviolet radiation, which then excites nearby phosphors to produce visible light. Three cells with phosphors corresponding to the three primary colours form a pixel. Each individual cell is addressed by applying voltages to the appropriate horizontal and vertical electrodes.
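
The selection trick is coincidence addressing: the sustain voltage alone is deliberately kept just below the discharge threshold, so only the one cell that also receives an address pulse ionizes. A toy numeric illustration (the threshold and voltage figures are invented, not real panel values):

    THRESHOLD_V = 200.0  # hypothetical discharge threshold
    SUSTAIN_V = 180.0    # applied continuously, just below threshold
    ADDRESS_V = 50.0     # extra pulse on one row/column pair

    def cell_discharges(row_selected, col_selected):
        # Only the cell at the intersection of the selected row and column
        # sees sustain + address voltage and forms a plasma.
        total = SUSTAIN_V + (ADDRESS_V if (row_selected and col_selected) else 0.0)
        return total >= THRESHOLD_V

    print(cell_discharges(True, True))   # True: this cell lights
    print(cell_discharges(True, False))  # False: same row, other columns stay dark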

The discharge-address voltage consists of a series of short pulses that are varied in their width—a form of pulse code modulation. Although each pulse produces a very small amount of light, the light generated by tens of thousands of pulses per second is substantial when integrated by the human eye.
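
Because every pulse emits roughly the same small flash, perceived brightness depends only on the total "on" time the eye integrates per frame. A small sketch of that averaging (the 60 Hz frame and pulse counts are example figures):

    def perceived_brightness(pulse_widths_us, frame_us=16667):
        # The eye integrates light over a frame (~1/60 s), so apparent
        # brightness is proportional to the summed pulse time per frame.
        return sum(pulse_widths_us) / frame_us

    dim = perceived_brightness([10] * 100)     # 1,000 us of light per frame
    bright = perceived_brightness([50] * 200)  # 10,000 us of light per frame
    print(dim, bright)  # ~0.06 versus ~0.6 relative brightness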

Additional Information

A cathode-ray tube (CRT) is a specialized vacuum tube in which images are produced when an electron beam strikes a phosphorescent surface. Most desktop computer displays make use of CRTs. The CRT in a computer display is similar to the "picture tube" in a television receiver.

A cathode-ray tube consists of several basic components, as illustrated below. The electron gun generates a narrow beam of electrons. The anodes accelerate the electrons. Deflecting coils produce an extremely low frequency electromagnetic field that allows for constant adjustment of the direction of the electron beam. There are two sets of deflecting coils: horizontal and vertical. (In the illustration, only one set of coils is shown for simplicity.) The intensity of the beam can be varied. The electron beam produces a tiny, bright visible spot when it strikes the phosphor-coated screen.

To produce an image on the screen, complex signals are applied to the deflecting coils, and also to the apparatus that controls the intensity of the electron beam. This causes the spot to race across the screen from left to right, and from top to bottom, in a sequence of horizontal lines called the raster. As viewed from the front of the CRT, the spot moves in a pattern similar to the way your eyes move when you read a single-column page of text. But the scanning takes place at such a rapid rate that your eye sees a constant image over the entire screen.
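
The raster order just described (left to right along each line, lines from top to bottom) is easy to state precisely in code; a minimal sketch with made-up screen dimensions:

    def raster_scan(width, height):
        # Yield (x, y) spot positions in raster order: each horizontal
        # line left to right, successive lines from top to bottom.
        for y in range(height):
            for x in range(width):
                yield x, y

    scan = raster_scan(640, 480)
    print(next(scan), next(scan))  # (0, 0) then (1, 0): the start of the top line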

The illustration shows only one electron gun. This is typical of a monochrome, or single-color, CRT. However, virtually all CRTs today render color images. These devices have three electron guns, one for the primary color red, one for the primary color green, and one for the primary color blue. The CRT thus produces three overlapping images: one in red (R), one in green (G), and one in blue (B). This is the so-called RGB color model.
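
Since the three single-colour images overlap, any displayed colour is just the sum of a red, a green, and a blue intensity. A tiny sketch of that additive mixing, using the common 0-255 range per channel:

    def mix(*beams):
        # Additive mixing: the eye sums the light of the overlapping
        # images, clamped to the maximum channel value.
        return tuple(min(255, sum(channel)) for channel in zip(*beams))

    red_gun = (255, 0, 0)
    green_gun = (0, 255, 0)
    blue_gun = (0, 0, 255)

    print(mix(red_gun, green_gun))            # (255, 255, 0): yellow
    print(mix(red_gun, green_gun, blue_gun))  # (255, 255, 255): white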

In computer systems, there are several display modes, or sets of specifications according to which the CRT operates. The most common specification for CRT displays is known as SVGA (Super Video Graphics Array). Notebook computers typically use liquid crystal displays. The technology for these displays is much different from that of CRTs.

[Image: computer-graphics-cathode-ray-tube.png (diagram of the cathode-ray tube's basic components)]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Online

#2294 2024-09-09 20:07:05

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,086

Re: Miscellany

2294) Almond

Gist

The health benefits of almonds include lower blood sugar levels, reduced blood pressure, and lower cholesterol levels. They can also reduce hunger and promote weight loss. Overall, almonds are among the most nutritious foods available, though there are a few considerations.

Almonds are a good source of copper, which plays a role in skin and hair pigmentation. Linoleic acid, an essential fatty acid, helps prevent skin dryness. A one ounce serving of almonds has 3.5 grams of linoleic acid.

Summary

Almond, (Prunus dulcis), economically important crop tree and its edible seed. Native to southwestern Asia, almonds are grown primarily in Mediterranean climates between 28° and 48° N and between 20° and 40° S. California produces nearly 80 percent of the world’s supply. Almonds grown as nuts may be eaten raw, blanched, or roasted and are commonly used in confectionery baking. In Europe almonds are used to make marzipan, a sweet paste used in pastries and candy, and in Asia almonds are often used in meat, poultry, fish, and vegetarian dishes. Almonds are high in protein and fat and provide small amounts of iron, calcium, phosphorus, and vitamins A, B complex, and E.

Physical description

Almond trees are deciduous with a hardy dormancy. Typically growing 3–4.5 meters (10–15 feet) tall, the trees are strikingly beautiful when in flower; they produce fragrant, five-petaled, light pink to white flowers from late January to early April north of the Equator. The flowers are self-incompatible and thus require insect pollinators to facilitate cross-pollination with other cultivars. The growing fruit (a drupe) resembles a peach until it approaches maturity; as it ripens, the leathery outer covering, or hull, splits open, curls outward, and discharges the pit. Despite their common label, almonds are not true nuts (a type of dry fruit) but rather seeds enclosed in a hard fruit covering.

Cultivation

The sweet almond is cultivated extensively in certain favorable regions, though nut crops are uncertain wherever frosts are likely to occur during flowering. While more than 25 types of almonds are grown in California, Marcona and Valencia almonds come from Spain, and Ferragnes are imported from Greece. Old World almond cultivation was characterized by small plantings mainly for family use; trees interplanted with other crops; variability in age, condition, and bearing capacity of individual trees; and hand labour, often with crude implements. Modern almond growers are typically more industrial, with vast orchards of at least three types of trees the same age. Mechanized tree shakers are often used to expedite harvesting, and many growers must rent hives of western honeybees during flowering season to pollinate their trees. Indeed, the annual pollination of the almonds in California is the largest managed pollination event in the world, with more than 1.1 million beehives brought to the state each year. Colony collapse disorder (CCD), which has led to a global decline of honeybee populations, threatens the multibillion-dollar industry.

Sweet almonds and bitter almonds

Kingdom: Plantae
Clade: Angiosperm
Order: Rosales
Family: Rosaceae
Genus: Prunus

There are two varieties: sweet almond (P. dulcis, variety dulcis) and bitter almond (P. dulcis, variety amara). Sweet almonds are the familiar edible type consumed as nuts and used in cooking or as a source of almond oil or almond meal. The oil of bitter almonds is used in the manufacture of flavoring extracts for foods and liqueurs such as amaretto, though prussic acid must first be removed.

Bitter almonds and sweet almonds have similar chemical composition. Both types contain between 35 and 55 percent of fixed oil (nonvolatile oil), and both feature the enzyme emulsin, which yields glucose in the presence of water. Bitter almonds have amygdalin, which is present only in trace amounts in sweet almonds, and the oil of bitter almonds contains benzaldehyde and prussic (hydrocyanic) acid.

Details

The almond (Prunus amygdalus, syn. Prunus dulcis) is a species of tree from the genus Prunus. Along with the peach, it is classified in the subgenus Amygdalus, distinguished from the other subgenera by corrugations on the shell (endocarp) surrounding the seed.

The fruit of the almond is a drupe, consisting of an outer hull and a hard shell with the seed, which is not a true nut. Shelling almonds refers to removing the shell to reveal the seed. Almonds are sold shelled or unshelled. Blanched almonds are shelled almonds that have been treated with hot water to soften the seedcoat, which is then removed to reveal the white embryo. Once almonds are cleaned and processed, they can be stored over time. Almonds are used in many cuisines, often featuring prominently in desserts, such as marzipan.

The almond tree prospers in a moderate Mediterranean climate with cool winter weather. Almond is rarely found wild in its original setting. Almonds were one of the earliest domesticated fruit trees, due to the ability to produce quality offspring entirely from seed, without using suckers and cuttings. Evidence of domesticated almonds in the Early Bronze Age has been found in the archeological sites of the Middle East, and subsequently across the Mediterranean region and similar arid climates with cool winters.

California produces about 80% of the world's almond supply. Due to high acreage and water demand for almond cultivation, and need for pesticides, California almond production may be unsustainable, especially during the persistent drought and heat from climate change in the 21st century. Droughts in California have caused some producers to leave the industry, leading to lower supply and increased prices.

Description

The almond is a deciduous tree growing to 3–4.5 metres (10–15 feet) in height, with a trunk of up to 30 centimetres (12 inches) in diameter. The young twigs are green at first, becoming purplish where exposed to sunlight, then gray in their second year. The leaves are 8–13 cm (3–5 in) long, with a serrated margin and a 2.5 cm (1 in) petiole.

The fragrant flowers are white to pale pink, 3–5 cm (1–2 in) diameter with five petals, produced singly or in pairs and appearing before the leaves in early spring. Almond trees thrive in Mediterranean climates with warm, dry summers and mild, wet winters. The optimal temperature for their growth is between 15 and 30 °C (59 and 86 °F) and the tree buds have a chilling requirement of 200 to 700 hours below 7.2 °C (45.0 °F) to break dormancy.
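
Chilling requirements like the one above are commonly tracked by accumulating hours below the threshold across the dormant season. A minimal sketch (the hourly temperature record and the 400-hour cultivar target are invented for illustration, within the 200-700-hour range given above):

    CHILL_THRESHOLD_C = 7.2
    REQUIRED_CHILL_HOURS = 400  # hypothetical cultivar requirement

    def chilling_hours(hourly_temps_c):
        # Count every hour the buds spend below the chilling threshold.
        return sum(1 for t in hourly_temps_c if t < CHILL_THRESHOLD_C)

    winter = [5.0] * 300 + [10.0] * 100 + [6.5] * 150      # toy dormant-season record
    print(chilling_hours(winter))                          # 450
    print(chilling_hours(winter) >= REQUIRED_CHILL_HOURS)  # True: dormancy can break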

Almonds begin bearing an economic crop in the third year after planting. Trees reach full bearing five to six years after planting. The fruit matures in the autumn, 7–8 months after flowering.

The almond fruit is 3.5–6 cm (1⅜–2⅜ in) long. It is not a nut but a drupe. The outer covering, consisting of an outer exocarp, or skin, and mesocarp, or flesh, fleshy in other members of Prunus such as the plum and cherry, is instead a thick, leathery, gray-green coat (with a downy exterior), called the hull. Inside the hull is a woody endocarp which forms a reticulated, hard shell (like the outside of a peach pit) called the pyrena. Inside the shell is the edible seed, commonly called a nut. Generally, one seed is present, but occasionally two occur. After the fruit matures, the hull splits and separates from the shell, and an abscission layer forms between the stem and the fruit so that the fruit can fall from the tree. During harvest, mechanized tree shakers are used to expedite fruits falling to the ground for collection.

Taxonomy:

Sweet and bitter almonds

The seeds of Prunus dulcis var. dulcis are predominantly sweet but some individual trees produce seeds that are somewhat more bitter. The genetic basis for bitterness involves a single gene, the bitter flavor furthermore being recessive, both aspects making this trait easier to domesticate. The fruits from Prunus dulcis var. amara are always bitter, as are the kernels from other species of genus Prunus, such as apricot, peach and cherry (although to a lesser extent).

The bitter almond is slightly broader and shorter than the sweet almond and contains about 50% of the fixed oil that occurs in sweet almonds. It also contains the enzyme emulsin which, in the presence of water, acts on the two soluble glucosides amygdalin and prunasin yielding glucose, cyanide and the essential oil of bitter almonds, which is nearly pure benzaldehyde, the chemical causing the bitter flavor. Bitter almonds may yield 4–9 milligrams of hydrogen cyanide per almond and contain 42 times higher amounts of cyanide than the trace levels found in sweet almonds. The origin of cyanide content in bitter almonds is via the enzymatic hydrolysis of amygdalin. P450 monooxygenases are involved in the amygdalin biosynthetic pathway. A point mutation in a bHLH transcription factor prevents transcription of the two cytochrome P450 genes, resulting in the sweet kernel trait.

Etymology

The word almond is a loanword from Old French almande or alemande, descended from Late Latin amandula, amindula, modified from Classical Latin amygdala, which is in turn borrowed from Ancient Greek amygdálē (cf. the amygdala, an almond-shaped portion of the brain). Late Old English had amygdales 'almonds'.

The adjective amygdaloid (literally 'like an almond, almond-like') is used to describe objects which are roughly almond-shaped, particularly a shape which is part way between a triangle and an ellipse. For example, the amygdala of the brain uses a direct borrowing of the Greek term amygdalē.

Origin and distribution

The precise origin of the almond is controversial due to estimates for its emergence across wide geographic regions. Sources indicate that its origins were in Central Asia between Iran, Turkmenistan, Tajikistan, Kurdistan, Afghanistan, and Iraq, or in an eastern Asian subregion between Mongolia and Uzbekistan. In other assessments, both botanical and archaeological evidence indicates that almonds originated and were first cultivated in West Asia, particularly in countries of the Levant. Other estimates specified Iran and Anatolia (present day Turkey) as origin locations of the almond, with botanical evidence for Iran as a possible origin center.

The wild form of domesticated almond also grew in parts of the Levant. Almond cultivation was spread by humans centuries ago along the shores of the Mediterranean Sea into northern Africa and southern Europe, and more recently to other world regions, notably California.

Selection of the sweet type from the many bitter types in the wild marked the beginning of almond domestication. The wild ancestor of the almond used to breed the domesticated species is unknown. The species Prunus fenzliana may be the most likely wild ancestor of the almond, in part because it is native to Armenia and western Azerbaijan, where it was apparently domesticated. Wild almond species were grown by early farmers, "at first unintentionally in the garbage heaps, and later intentionally in their orchards".

Additional Information

Almonds contain nutrients that may help prevent cancer, strengthen bones, promote heart health, and more. However, almonds may not be good for everyone.

People can eat almonds raw or toasted as a snack or add them to sweet or savory dishes. They are also available sliced, flaked, or slivered, and as flour, oil, butter, or almond milk.

People call almonds a nut, but they are seeds, rather than a true nut.

Almond trees may have been one of the earliest trees that people cultivated. In Jordan, archaeologists have found evidence of domesticated almond trees dating back some 5,000 years.

Scientifically called Prunus dulcis, almonds are nutrient-dense nuts. Almonds can be consumed whole, chopped, sliced, or ground into almond flour or almond butter. They can even be made into almond milk.

This satisfying nut truly deserves its superfood status. It's full of antioxidants and other beneficial nutrients. It can help keep you healthy by preventing disease and supporting weight management. It may even do wonders for your skin, as well.

Are Good Sources of Antioxidants

Almonds are packed with antioxidants like vitamin E. Vitamin E protects your body from free radicals, which can harm cells, tissues, and organs.

Vitamin E also supports immunity, reduces inflammation, helps widen blood vessels to improve blood flow, and is linked to protection against neurodegenerative conditions, including Alzheimer's.

Among the health-promoting benefits of almonds are their natural antioxidant and anti-inflammatory properties. The antioxidants in almonds play an important role in protection from chronic diseases.

Eating almonds benefits overall health, too. The frequent consumption of almonds has been associated with a reduced risk of various diseases, including obesity, hypertension, diabetes, and metabolic syndrome.

Are Nutrient Powerhouses

Almonds are loaded with healthy nutrients. They're excellent sources of monounsaturated and polyunsaturated fats—aka the healthy fats. Unsaturated fats can help you lower your LDL cholesterol. You'll find them in most vegetable oils that are liquid at room temperature.

Magnesium is another nutrient found in large amounts in almonds. Magnesium plays a role in nerve and muscle function, keeps the heartbeat steady, and helps bones remain strong. It also supports a healthy immune system.

Can Boost Gut Health

Almonds may not necessarily change the types of bacteria in your gut, but they may help your gut bacteria work better.

A 2022 study found adults who ate almonds had more butyrate than those who didn't, which suggests well-functioning gut bacteria. Butyrate is a type of fatty acid produced when your gut microbes process the dietary fiber your body can't digest.

Almonds and almond skin are considered prebiotics because they help your beneficial gut bacteria flourish. When your gut bacteria flourish, they produce more butyrate. Butyrate has a positive effect on health and may even be able to help prevent and treat some metabolic diseases.

Help Protect the Heart

Almonds protect your heart in several ways. According to a 2018 study in Nutrients, the nuts have been shown to maintain or increase "good" heart-protective HDL cholesterol, while lowering "bad" LDL levels.

Almonds help reduce blood pressure. High blood pressure, or hypertension, puts additional stress on your organs, including your heart and vascular system.

Almonds and other nuts can also improve vascular function, meaning they help blood vessels relax and reduce artery stiffness.

Don't skip out on the almonds if you have high cholesterol. Research has shown people with high cholesterol who included almonds in their diet had reduced LDL levels while maintaining HDL levels compared to those who didn't eat almonds. The almond eaters also had reductions in belly and leg fat.

May Help Regulate Weight

Almonds are some of the best nuts to consume if you're trying to manage your weight. Almonds have been shown to improve body mass index, waist circumference, and the fat that builds up around your midsection and organs.

In addition, almonds help suppress your hunger. You may find yourself eating fewer other foods as a result. Almonds can help you control your blood sugar and use more energy at rest.

May Support Skin Health

If you've gone through menopause, you may want to include almonds in your diet. Research on post-menopausal study participants showed that those who included almonds in their diet had fewer wrinkles and better skin color after 16 weeks.

Nutrition of Almonds

Compared to other nuts, almonds have the highest or nearly the highest amounts of fiber, protein, monounsaturated and polyunsaturated fats, magnesium, calcium, iron, and folate, among other nutrients.

In 100 grams (g) of raw almonds—about three-quarters of a cup—you'll get the following nutrients:

Calories: about 600
Fat: 51.1g
Fiber: 10.8g
Protein: 21.4g
Biotin: 57 micrograms (µg)
Calcium: 254 milligrams (mg)
Phosphorus: 503mg
Magnesium: 258mg
Copper: 0.91mg

Adding salt to those almonds and roasting them gives you the following nutrients:

Calories: about 640
Fat: 57.8g
Fiber: 11g
Calcium: 273mg
Phosphorus: 456mg
Magnesium: 258mg
Copper: 0.87mg
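
Since the figures above are per 100 g, scaling them to a typical 28 g (one-ounce) serving is simple proportion. A quick sketch using the raw-almond numbers:

    per_100g = {"calories": 600, "fat_g": 51.1, "fiber_g": 10.8, "protein_g": 21.4}

    def per_serving(nutrients_per_100g, serving_g=28):
        # Scale per-100 g values down to the chosen serving weight.
        return {k: round(v * serving_g / 100, 1) for k, v in nutrients_per_100g.items()}

    print(per_serving(per_100g))
    # {'calories': 168.0, 'fat_g': 14.3, 'fiber_g': 3.0, 'protein_g': 6.0}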



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Online

#2295 2024-09-10 16:17:46

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,086

Re: Miscellany

2295) Integumentary System

Gist

The integumentary system is the largest organ of the body that forms a physical barrier between the external environment and the internal environment that it serves to protect and maintain. The integumentary system includes the epidermis, dermis, hypodermis, associated glands, hair, and nails.

Summary

The integumentary system is the largest organ of the body that forms a physical barrier between the external environment and the internal environment that it serves to protect and maintain.

The integumentary system includes

* Skin (epidermis, dermis)
* Hypodermis
* Associated glands
* Hair
* Nails.

In addition to its barrier function, this system performs many intricate functions such as body temperature regulation, cell fluid maintenance, synthesis of Vitamin D, and detection of stimuli. The various components of this system work in conjunction to carry out these functions.

General Function

The integumentary system has several functions that serve a variety of purposes:

* Physical protection: The integumentary system is the covering of the human body, and its most apparent function is physical protection. The skin is a tightly knit network of cells, with each layer contributing to its strength: the epidermis has an outermost layer of dead keratinized cells that withstands the wear and tear of the outer environment; the dermis provides the epidermis with blood supply and contains nerves that, among other functions, bring danger to attention; the hypodermis cushions mechanical trauma through adipose storage; glands secrete protective films throughout the body; nails protect the digits; and hairs filter harmful particles away from the eyes, ears, nose, and other openings.
* Immunity: The skin is the body’s first line of defense, acting as a physical barrier preventing direct entry of pathogens. Antimicrobial peptides (AMPs) and lipids on the skin also act as a biomolecular barrier that disrupts bacterial membranes. Resident immune cells, both myeloid and lymphoid, are present in the skin, and some, e.g. Langerhans cells or dermal dendritic cells, can travel to the periphery and activate the greater immune system.
* Wound healing: When our body undergoes trauma with a resulting injury, the integumentary system orchestrates the wound healing process through hemostasis, inflammation, proliferation, and remodeling.
* Thermoregulation: The skin has a large surface area that is highly vascularized, which allows it to conserve and release heat through vasoconstriction and vasodilation, respectively.
* Vitamin D synthesis: The primary sources of vitamin D, which is crucial for bone health, are sun exposure and oral intake.
* Sensation: The skin is innervated by various types of sensory nerve endings that discriminate pain, temperature, touch, and vibration. Each type of receptor and nerve fiber varies in its adaptive and conductive speeds, leading to a wide range of signals that can be integrated to create an understanding of the external environment and help the body to react appropriately.

Details

The integumentary system is the set of organs forming the outermost layer of an animal's body. It comprises the skin and its appendages, which act as a physical barrier between the external environment and the internal environment that it serves to protect and maintain the body of the animal. Mainly it is the body's outer skin.

The integumentary system includes skin, hair, scales, feathers, hooves, claws, and nails. It has a variety of additional functions: it may serve to maintain water balance, protect the deeper tissues, excrete wastes, and regulate body temperature, and is the attachment site for sensory receptors which detect pain, sensation, pressure, and temperature.

Structure:

Skin

The skin is one of the largest organs of the body. In humans, it accounts for about 12 to 15 percent of total body weight and covers 1.5 to 2 m² of surface area.

The skin (integument) is a composite organ, made up of at least two major layers of tissue: the epidermis and the dermis. The epidermis is the outermost layer, providing the initial barrier to the external environment. It is separated from the dermis by the basement membrane (basal lamina and reticular lamina). The epidermis contains melanocytes and gives color to the skin. The deepest layer of the epidermis also contains nerve endings. Beneath this, the dermis comprises two sections, the papillary and reticular layers, and contains connective tissues, vessels, glands, follicles, hair roots, sensory nerve endings, and muscular tissue.

Between the integument and the deep body musculature there is a transitional subcutaneous zone made up of very loose connective and adipose tissue, the hypodermis. Substantial collagen bundles anchor the dermis to the hypodermis in a way that permits most areas of the skin to move freely over the deeper tissue layers.

Epidermis

The epidermis is the strong, superficial layer that serves as the first line of protection against the outer environment. The human epidermis is composed of stratified squamous epithelial cells, which further break down into four to five layers: the stratum corneum, stratum granulosum, stratum spinosum and stratum basale. Where the skin is thicker, such as in the palms and soles, there is an extra layer of skin between the stratum corneum and the stratum granulosum, called the stratum lucidum. The epidermis is regenerated from the stem cells found in the basal layer that develop into the corneum. The epidermis itself is devoid of blood supply and draws its nutrition from its underlying dermis.

Its main functions are protection, absorption of nutrients, and homeostasis. In structure, it consists of a keratinized stratified squamous epithelium with four types of cells: keratinocytes, melanocytes, Merkel cells, and Langerhans cells.

The predominant cell type is the keratinocyte, which produces keratin, a fibrous protein that aids in skin protection. Keratinocytes are also responsible for the formation of the epidermal water barrier by making and secreting lipids. The majority of the skin on the human body is keratinized, with the exception of the lining of mucous membranes, such as the inside of the mouth. Non-keratinized cells allow water to "stay" atop the structure.

The protein keratin stiffens epidermal tissue to form fingernails. Nails grow from a thin area called the nail matrix at an average of 1 mm per week. The lunula is the crescent-shaped area at the base of the nail, lighter in color as it mixes with matrix cells. Only primates have nails. In other vertebrates, the keratinizing system at the terminus of each digit produces claws or hooves.

The epidermis of vertebrates is surrounded by two kinds of coverings, which are produced by the epidermis itself. In fish and aquatic amphibians, it is a thin mucus layer that is constantly being replaced. In terrestrial vertebrates, it is the stratum corneum (dead keratinized cells). The epidermis is, to some degree, glandular in all vertebrates, but more so in fish and amphibians. Multicellular epidermal glands penetrate the dermis, where they are surrounded by blood capillaries that provide nutrients and, in the case of endocrine glands, transport their products.

Dermis

The dermis is the underlying connective tissue layer that supports the epidermis. It is composed of dense irregular connective tissue and areolar connective tissue, such as collagen with elastin arranged in a diffusely bundled and woven pattern.

The dermis has two layers: the papillary dermis and the reticular layer. The papillary layer is the superficial layer that forms finger-like projections into the epidermis (dermal papillae), and consists of highly vascularized, loose connective tissue. The reticular layer is the deep layer of the dermis and consists of the dense irregular connective tissue. These layers serve to give elasticity to the integument, allowing stretching and conferring flexibility, while also resisting distortions, wrinkling, and sagging. The dermal layer provides a site for the endings of blood vessels and nerves. Many chromatophores are also stored in this layer, as are the bases of integumental structures such as hair, feathers, and glands.

Hypodermis

The hypodermis, otherwise known as the subcutaneous layer, is a layer beneath the skin. It invaginates into the dermis and is attached to the latter, immediately above it, by collagen and elastin fibers. It is essentially composed of a type of cell known as adipocytes, which are specialized in accumulating and storing fats. These cells are grouped together in lobules separated by connective tissue.

The hypodermis acts as an energy reserve. The fats contained in the adipocytes can be put back into circulation, via the venous route, during intense effort or when there is a lack of energy-providing substances, and are then transformed into energy. The hypodermis participates, passively at least, in thermoregulation since fat is a heat insulator.

Functions

The integumentary system has multiple roles in maintaining the body's equilibrium. All body systems work in an interconnected manner to maintain the internal conditions essential to the function of the body. The skin has an important job of protecting the body and acts as the body's first line of defense against infection, temperature change, and other challenges to homeostasis.

Its main functions include:

* Protect the body's internal living tissues and organs
* Protect against invasion by infectious organisms
* Protect the body from dehydration
* Protect the body against abrupt changes in temperature and maintain homeostasis
* Help excrete waste materials through perspiration
* Act as a receptor for touch, pressure, pain, heat, and cold (see Somatosensory system)
* Protect the body against sunburn and UV rays by secreting melanin
* Generate vitamin D through exposure to ultraviolet light
* Store water, fat, glucose, and vitamin D
* Maintain the body's form
* Form new cells from the stratum germinativum to repair minor injuries
* Regulate body temperature
* Distinguish, separate, and protect the organism from its surroundings

Small-bodied invertebrates of aquatic or continually moist habitats respire using the outer layer (integument). This gas exchange system, where gases simply diffuse into and out of the interstitial fluid, is called integumentary exchange.

Additional Information

Integument, in biology, network of features that forms the covering of an organism. The integument delimits the body of the organism, separating it from the environment and protecting it from foreign matter. At the same time it permits communication with the outside, enabling an organism to live in a particular environment.

Among unicellular organisms, such as bacteria and protozoans, the integument corresponds to the cell membrane and any secreted coating that the organism produces. In most invertebrate animals a layer (or layers) of surface (epithelial) cells—often with additional secreted coatings—constitutes the integument. Among the vertebrates the boundary covering—with a variety of derived elements such as scales, feathers, and hair—has assumed the complexity of an organ system, the integumentary system.

The integument is composed of layers that may be of single cell thickness, as in many invertebrates, or multiple cell thickness, as in some invertebrates and all vertebrates. In every case the cells that give rise to the integuments belong to that class of tissue called epithelium, which in most animals is called epidermis. Underlying the epidermis and supplying it with nourishment is the dermis. In addition to the cellular layers, the integument often includes a noncellular coating, or cuticle, that is secreted by the epidermis. Such coatings are found in most invertebrates. The vertebrate skin has generated many kinds of glands and a variety of horny structures, but it lacks coatings.

The wide diversity of integuments among vertebrates further exemplifies the adaptive character of the body covering: from the almost impenetrable shield of an armadillo and the dense furry coat of an Arctic bear to the slimy, scaled covering of a cod and the exceptionally smooth skin of a porpoise. Amphibians and fishes often have mucous glands that lubricate their skins and prevent waterlogging and deterioration. Reptiles have thick, leathery skins that help reduce water loss and serve as an armour against enemies. Birds use their feathers—skin derivatives—to fly and to insulate their bodies. The hairy or furry coats of many terrestrial mammals insulate them, shed water, and provide a dense guard against injury.

Human skin, in human anatomy, the covering, or integument, of the body’s surface that both provides protection and receives sensory stimuli from the external environment. The skin consists of three layers of tissue: the epidermis, an outermost layer that contains the primary protective structure, the stratum corneum; the dermis, a fibrous layer that supports and strengthens the epidermis; and the subcutis, a subcutaneous layer of fat beneath the dermis that supplies nutrients to the other two layers and that cushions and insulates the body.

Distinctive features

The apparent lack of body hair immediately distinguishes human beings from all other large land mammals. Regardless of individual or racial differences, the human body seems to be more or less hairless, in the sense that the hair is so vestigial as to seem absent; yet in certain areas hair grows profusely. These relatively hairy places may be referred to as epigamic areas, and they are concerned with social and sexual communication, either visually or by scent from glands associated with the hair follicles.

The characteristic features of skin change from the time of birth to old age. In infants and children it is velvety, dry, soft, and largely free of wrinkles and blemishes. Children younger than two years sweat poorly and irregularly; their sebaceous glands function minimally. At adolescence hair becomes longer, thicker, and more pigmented, particularly in the scalp, axillae, pubic eminence, and the male face. General skin pigmentation increases, localized pigmented foci appear mysteriously, and acne lesions often develop. Hair growth, sweating, and sebaceous secretion begin to blossom. As a person ages, anatomical and physiological alterations, as well as exposure to sunlight and wind, leave skin, particularly that not protected by clothing, dry, wrinkled, and flaccid.

Human skin, more than that of any other mammal, exhibits striking topographic differences. An example is the dissimilarity between the palms and the backs of the hands and fingers. The skin of the eyebrows is thick, coarse, and hairy; that on the eyelids is thin, smooth, and covered with almost invisible hairs. The face is seldom visibly haired on the forehead and cheekbones. It is completely hairless in the vermilion border of the lips, yet coarsely hairy over the chin and jaws of males. The surfaces of the forehead, cheeks, and nose are normally oily, in contrast with the relatively greaseless lower surface of the chin and jaws. The skin of the chest, pubic region, scalp, axillae, abdomen, soles of the feet, and ends of the fingers varies as much structurally and functionally as it would if the skin in these different areas belonged to different animals.

The skin achieves strength and pliability by being composed of numbers of layers oriented so that each complements the others structurally and functionally. To allow communication with the environment, countless nerves—some modified as specialized receptor end organs and others more or less structureless—come as close as possible to the surface layer, and nearly every skin organ is enwrapped by skeins of fine sensory nerves.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Online

#2296 2024-09-11 00:49:23

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,086

Re: Miscellany

2296) Electric Stove

Gist

An electric stove, electric cooker or electric range is a stove with an integrated electrical heating device to cook and bake.

Electric stove tops use coils to heat the surface of the stove top to transfer heat directly into the cookware. Many modern electric appliances have radiant elements installed beneath a smooth glass surface that has the added benefit of being easy to clean.

Lloyd Groff Copeman (December 28, 1881 – July 5, 1956) was an American inventor who devised the first electric stove and the flexible rubber ice cube tray, among other products.

Summary

What are the advantages and disadvantages of an electric stove?

* Burners are safer to use but remain hot for some time after being turned off.
* Electric ovens transmit heat in a more controlled, even way.
* The oven takes a long time to preheat.
* The heat is not as easy to adjust as on a gas stove once a burner reaches the desired temperature.

What type of electric stove is best?

One of the most popular types of electric ranges these days is the glass top electric range. They're easy to clean, they heat up quickly, and they provide an even cooking surface. Plus, they look pretty stylish in a modern kitchen! But if you prefer a more traditional look, coil ranges can also be a great option.

Is it good to use electric stove?

While electric stoves are cleaner when it comes to indoor pollution, the truth is that they do consume more energy than gas stoves. And, many of us still have no choice but to get our electricity from dirty sources. If you can, make sure the electricity you're using comes from clean sources.

Is electric better than gas stove?

Both gas and electric ranges have advantages, depending on what and how you cook. Gas ranges offer more responsive heat control for switching between searing meats or stir-frying veggies, while the dry, even heat of electric range ovens may work better for certain baked goods.

Details

An electric stove, electric cooker or electric range is a stove with an integrated electrical heating device to cook and bake. Electric stoves became popular as replacements for solid-fuel (wood or coal) stoves which required more labor to operate and maintain. Some modern stoves come in a unit with built-in extractor hoods.

The stove's one or more "burners" (heating elements) may be controlled by a rotary switch with a finite number of positions (which may be marked out by numbers such as 1 to 10, or by settings such as Low, Medium and High), each of which engages a different combination of resistances and hence a different heating power; or may have an "infinite switch" called a simmerstat that allows constant variability between minimum and maximum heat settings. Some stove burners and controls incorporate thermostats.
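
An infinite switch does not throttle the element's power continuously; it cycles the element fully on and off so that the average power over a cycle tracks the knob position. A toy model (the 2 kW rating and the linear knob-to-duty mapping are assumptions for illustration):

    ELEMENT_WATTS = 2000  # hypothetical burner rating

    def average_power(knob_setting):
        # knob_setting in [0, 1]: the simmerstat keeps the element fully
        # on for that fraction of each on/off heating cycle.
        duty_cycle = max(0.0, min(1.0, knob_setting))
        return ELEMENT_WATTS * duty_cycle

    print(average_power(0.25))  # 500.0 W: on a quarter of the time
    print(average_power(1.0))   # 2000.0 W: on continuously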

History:

Early patents

On September 20, 1859, George B. Simpson was awarded US patent #25532 for an 'electro-heater' surface heated by a platinum-wire coil powered by batteries. In his words, useful to "warm rooms, boil water, cook victuals...".

Canadian inventor Thomas Ahearn filed patent #39916 in 1892 for an "Electric Oven," a device he probably employed in preparing a meal for an Ottawa hotel that year. Ahearn and Warren Y. Soper were owners of Ottawa's Chaudiere Electric Light and Power Company. The electric stove was showcased at the Chicago World's Fair in 1893, where an electrified model kitchen was shown. Unlike the gas stove, the electrical stove was slow to catch on, partly due to the unfamiliar technology, and the need for cities and towns to be electrified.

In 1897, William Hadaway was granted US patent #574537 for an "Automatically Controlled Electric Oven".

Kalgoorlie Stove

In November 1905, David Curle Smith, the Municipal Electrical Engineer of Kalgoorlie, Western Australia, applied for a patent (Aust Patent No 4699/05) for a device that adopted (following the design of gas stoves) what later became the configuration for most electric stoves: an oven surmounted by a hotplate with a grill tray between them. Curle Smith's stove did not have a thermostat; heat was controlled by the number of the appliance's nine elements that were switched on.

After the patent was granted in 1906, manufacturing of Curle Smith's design commenced in October that year. The entire production run was acquired by the electricity supply department of Kalgoorlie Municipality, which hired out the stoves to residents. About 50 appliances were produced before cost overruns became a factor in Council politics and the project was suspended. This was the first time household electric stoves were produced with the express purpose of bringing "cooking by electricity ... within the reach of anyone". There are no extant examples of this stove, many of which were salvaged for their copper content during World War I.

To promote the stove, David Curle Smith's wife, H. Nora Curle Smith (née Helen Nora Murdoch, and a member of the Murdoch family prominent in Australian public life), wrote a cookbook containing operating instructions and 161 recipes. Thermo-Electrical Cooking Made Easy, published in March 1907, is therefore the world's first cookbook for electric stoves.

Since 1908

Three companies in the United States began selling electric stoves in 1908. However, sales and public acceptance were slow to develop. Early electric stoves were unsatisfactory due to the cost of electricity (compared with wood, coal, or city gas), limited power available from the electrical supply company, poor temperature regulation, and short life of heating elements. The invention of nichrome alloy for resistance wires improved the cost and durability of heating elements. As late as the 1920s, an electric stove was still considered a novelty.

By the 1930s, the maturing of the technology, the decreased cost of electric power and modernized styling of electric stoves had greatly increased their acceptance. The electrical stove slowly began to replace the gas stove, especially in household kitchens.

Electric stoves and other household appliances were marketed by electrical utilities to build demand for electric power. During the expansion of rural electrification, demonstrations of cooking on an electric stove were popular.

Variants

Early electric stoves had resistive heating coils which heated iron hotplates, on top of which the pots were placed. Eventually, composite heating elements were introduced, with the resistive wires encased in hollow metal tubes packed with magnesite. These tubes, arranged in a spiral, support the cookware directly.

In the 1970s, glass-ceramic cooktops started to appear. Glass-ceramic has very low thermal conductivity and a near-zero coefficient of thermal expansion, but lets infrared radiation pass very well. Electrical heating coils or halogen lamps are used as heating elements. Because of its physical characteristics, the cooktop heats more quickly, less afterheat remains, and only the plate heats up while the adjacent surface remains cool. These cooktops have a smooth surface and are thus easier to clean, but are markedly more expensive.

A third technology is the induction stove, which also has a smooth glass-ceramic surface. Only ferromagnetic cookware works with induction stoves, which heat the cookware directly by electromagnetic induction.

Electricity consumption

Typical electricity consumption of one heating element, depending on size, is 1–3 kW.
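
Energy use and running cost follow directly: power times time gives kilowatt-hours. For instance, a 2 kW element run for half an hour uses 1 kWh. A one-line check (the $0.15/kWh tariff is an assumed example):

    power_kw, hours, tariff_per_kwh = 2.0, 0.5, 0.15
    energy_kwh = power_kw * hours                   # 1.0 kWh
    print(energy_kwh, energy_kwh * tariff_per_kwh)  # 1.0 kWh costs $0.15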

Additional Information:

Stove

Stove, device used for heating or cooking. The first of historical record was built in 1490 in Alsace, entirely of brick and tile, including the flue. The later Scandinavian stove had a tall, hollow iron flue containing iron baffles arranged to lengthen the travel of the escaping gases in order to extract maximum heat. The Russian stove had as many as six thick-walled masonry flues; it is still widely used in northern countries. The stove is often installed at the intersection of interior partition walls in such a manner that a portion of the stove and the flue is inside each of four rooms; a fire is maintained until the stove and flues are hot, and then the fire is extinguished and the flues closed, storing the heat.

The first manufactured cast-iron stove was produced at Lynn, Mass., in 1642. This stove had no grates and was little more than a cast-iron box. About 1740 Benjamin Franklin invented the “Pennsylvania fireplace,” which incorporated the basic principles of the heating stove. The Franklin stove burned wood on a grate and had sliding doors that could be used to control the draft (flow of air) through it. Because the stove was relatively small, it could be installed in a large fireplace or used free-standing in the middle of a room by connecting it to a flue. The Franklin stove warmed farmhouses, city dwellings, and frontier cabins throughout North America. Its design influenced the potbellied stove, which was a familiar feature in some homes well into the 20th century. The first round cast-iron stoves with grates for cooking food on them were manufactured by Isaac Orr at Philadelphia, Pa., in 1800. The base-burning stove for burning anthracite coal was invented in 1833 by Jordan A. Mott.

Cooking became the predominant function of stoves in the 20th century as central heating became the norm in the developed world. Iron cooking stoves using wood, charcoal, or coal tended to radiate large amounts of heat that made the kitchen unpleasantly hot during the summertime, however. In the 20th century they were replaced by steel ranges or ovens that are heated by natural gas or electricity.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Online

#2297 2024-09-11 21:34:21

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,086

Re: Miscellany

2297) Cardamom

Gist

What is the benefit of cardamom?

The seeds are used to make medicine. Cardamom is used for digestion problems including heartburn, intestinal spasms, irritable bowel syndrome (IBS), intestinal gas, constipation, liver and gallbladder complaints, and loss of appetite.

Is cardamom hot or cold?

Cardamom seeds are recommended as cooling spices because they help calm the stomach acids and may provide relief from ulcers. Cardamom has a cooling effect on the body, making it suitable for summer. We recommend using organic and natural cardamom whenever possible.

Summary

Cardamom is a spice consisting of whole or ground dried fruits, or seeds, of Elettaria cardamomum, a herbaceous perennial plant of the ginger family (Zingiberaceae). The seeds have a warm, slightly pungent, and highly aromatic flavour somewhat reminiscent of camphor. They are a popular seasoning in South Asian dishes, particularly curries, and in Scandinavian pastries.

Introduced to Europe in the mid-16th century, cardamom bears a name that blends the Greek words for “spice” and “cress.” The name is sometimes mistakenly applied to similar spices in the ginger family, but it properly describes two related varieties of the spice, black and green, the latter being the more common. Black cardamom is aromatic and smoky, whereas green cardamom has a milder flavour.

Physical description

Leafy shoots of the cardamom plant arise 1.5 to 6 metres (5 to 20 feet) from the branching rootstock. Flowering shoots, approximately 1 metre (3 feet) long, may be upright or sprawling; each bears numerous flowers about 5 cm (2 inches) in diameter with greenish petals and a purple-veined white lip. The whole fruit, 0.8 to 1.5 cm, is a green three-sided oval capsule containing 15 to 20 dark, reddish brown to brownish black, hard, angular seeds. The essential oil occurs in large parenchyma cells underlying the epidermis of the seed coat. The essential oil content varies from 2 to 10 percent; its principal components are cineole and α-terpinyl acetate.

Cultivation and processing

Cardamom fruits may be collected from wild plants, native to the moist forests of southern India, but most cardamom is cultivated in India, Sri Lanka, and Guatemala. The fruits are picked or clipped from the stems just before maturity, cleansed, and dried in the sun or in a heated curing chamber. Cardamom may be bleached to a creamy white colour in the fumes of burning sulfur. After curing and drying, the small stems of the capsules are removed by winnowing. Decorticated cardamom consists of husked dried seeds.

Cooking uses and health benefits

The cardamom pod, which contains hard, black seeds, is sometimes added whole to dishes. More commonly, the pods are opened and the seeds are removed, then roasted in an oven or a skillet. These seeds contain the spice’s essential oil, which gives it its flavour and scent, with hints of mint and lemon. The seeds are ground with a mortar and pestle, then added to South Asian foods such as curry and chai. Cardamom is a characteristic ingredient in Middle Eastern cuisine as well. It also figures in pastries, especially in the Scandinavian countries, where it is also used as a flavouring for coffee and tea. The spice mixes well with cinnamon, as well as nutmeg and cloves. It is also an ingredient in the Indian spice blend called garam masala.

Cardamom contains vitamin C, niacin, magnesium, and potassium. Apart from its distinctive flavour, cardamom contains high levels of antioxidants, and it is used in Ayurvedic medicine to treat urinary tract disorders and to lower blood sugar levels. It is also frequently incorporated as an ingredient in homeopathic toothpaste for its antibacterial and breath-freshening qualities. Stronger health claims, such as its efficacy in fighting cancers, lack medical substantiation to date.

Although cardamom is widely cultivated at various elevations in South Asia, most of the world market demand is met by Guatemala, where cardamom was introduced by European coffee planters. It is ranked high among the world’s most expensive spices by weight.

Details

Cardamom, sometimes cardamon or cardamum, is a spice made from the seeds of several plants in the genera Elettaria and Amomum in the family Zingiberaceae. Both genera are native to the Indian subcontinent and Indonesia. They are recognized by their small seed pods: triangular in cross-section and spindle-shaped, with a thin, papery outer shell and small, black seeds; Elettaria pods are light green and smaller, while Amomum pods are larger and dark brown.

Species used for cardamom are native throughout tropical and subtropical Asia. The first references to cardamom are found in Sumer, and in Ayurveda. In the 21st century, it is cultivated mainly in India, Indonesia, and Guatemala.

Etymology

The word cardamom is derived from the Latin cardamōmum, as a Latinisation of the Greek καρδάμωμον (kardámōmon), a compound of κάρδαμον (kárdamon, "cress") and ἄμωμον (ámōmon), of unknown origin.

The earliest attested form of the word κάρδαμον signifying "cress" is the Mycenaean Greek ka-da-mi-ja, written in Linear B syllabic script, in the list of flavorings on the spice tablets found among palace archives in the House of the Sphinxes in Mycenae.

The modern genus name Elettaria is derived from the root ēlam attested in Dravidian languages.

Types and distribution

The two main types of cardamom are:

* True or green cardamom (or white cardamom when bleached) comes from the species Elettaria cardamomum and is distributed from India to Malaysia. What is often referred to as white cardamom is actually Siam cardamom, Amomum krervanh.
* Black cardamom, also known as brown, greater, large, longer, or Nepal cardamom, comes from the species Amomum subulatum and is native to the eastern Himalayas and mostly cultivated in Eastern Nepal, Sikkim, and parts of Darjeeling district in West Bengal of India, and southern Bhutan.

The two types of cardamom were distinguished in the fourth century BCE by Theophrastus.

Uses

Both forms of cardamom are used as flavorings and cooking spices in both food and drink. E. cardamomum (green cardamom) is used as a spice, a masticatory, or is smoked.

Food and beverage

Cardamom has a strong taste, with an aromatic, resinous fragrance. Black cardamom has a more smoky – though not bitter – aroma, with a coolness some consider similar to mint.

Green cardamom is one of the most expensive spices by weight, but little is needed to impart flavor. It is best stored in the pod, as exposed or ground seeds quickly lose their flavor. Grinding the pods and seeds together lowers both the quality and the price. For recipes requiring whole cardamom pods, a generally accepted equivalent is 10 pods to 1½ teaspoons (7.4 ml) of ground cardamom.
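
That rule of thumb makes substitution a simple proportion, about 0.15 teaspoons of ground cardamom per pod. A small helper (the function name and rounding are my own):

    TSP_PER_POD = 1.5 / 10  # from the 10 pods = 1.5 teaspoons equivalence

    def ground_equivalent_tsp(pod_count):
        # Convert a whole-pod count in a recipe to teaspoons of ground cardamom.
        return round(pod_count * TSP_PER_POD, 2)

    print(ground_equivalent_tsp(4))   # 0.6 tsp for a four-pod recipe
    print(ground_equivalent_tsp(10))  # 1.5 tsp, matching the rule of thumb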

Cardamom is a common ingredient in Indian cooking. It is also often used in baking in the Nordic countries, in particular in Sweden, Norway, and Finland, where it is used in traditional treats such as the Scandinavian Yule bread Julekake, the Swedish kardemummabullar sweet bun, and Finnish sweet bread pulla. In the Middle East, green cardamom powder is used as a spice for sweet dishes, and as a traditional flavouring in coffee and tea. Cardamom is used to a wide extent in savoury dishes. In some Middle Eastern countries, coffee and cardamom are often ground in a wooden mortar, a mihbaj, and cooked together in a skillet, a mehmas, over wood or gas, to produce mixtures with up to 40% cardamom.

In Asia, both types of cardamom are widely used in both sweet and savoury dishes, particularly in the south. Both are frequent components in such spice mixes as Indian and Nepali masalas and Thai curry pastes. Green cardamom is often used in traditional Indian sweets and in masala chai (spiced tea). Both are also often used as a garnish in basmati rice and other dishes. Individual seeds are sometimes chewed and used in much the same way as chewing gum. It is used by confectionery giant Wrigley; its Eclipse Breeze Exotic Mint packaging indicates the product contains "cardamom to neutralize the toughest breath odors". It is also included in aromatic bitters, gin, and herbal teas.

In Korea, Tavoy cardamom (Wurfbainia villosa var. xanthioides) and red cardamom (Lanxangia tsao-ko) are used in tea called jeho-tang.

Composition

The essential oil content of cardamom seeds depends on storage conditions and may be as high as 8%. The oil is typically 45% α-terpineol, 27% myrcene, 8% limonene, 6% menthone, 3% β-phellandrene, 2% 1,8-cineol, 2% sabinene and 2% heptane. Other sources report the following contents: 1,8-cineol (20 to 50%), α-terpenylacetate (30%), sabinene, limonene (2 to 14%), and borneol.

In the seeds of round cardamom from Java (Wurfbainia compacta), the content of essential oil is lower (2 to 4%), and the oil contains mainly 1,8-cineol (up to 70%) plus β-pinene (16%); furthermore, α-pinene, α-terpineol and humulene are found.

Production

In 2022, world production of cardamom (included with nutmeg and mace for reporting to the United Nations) was 138,888 tonnes, led by India, Indonesia and Guatemala, which together accounted for 85% of the total.

Production practices

According to Nair (2011), in the years when India achieves a good crop, it is still less productive than Guatemala. Other notable producers include Costa Rica, El Salvador, Honduras, Papua New Guinea, Sri Lanka, Tanzania, Thailand, and Vietnam.

Much of India's cardamom is cultivated on private property or in areas the government leases out to farmers. Traditionally, small plots of land within the forests (called eld-kandies), where the wild or acclimatised plant exists, are cleared during February and March. Brushwood is cut and burned, and the roots of powerful weeds are torn up to free the soil. Soon after clearing, cardamom plants spring up. After two years the cardamom plants may have eight to ten leaves and reach 30 cm (1 ft) in height. In the third year, they may be 120 cm (4 ft) in height. In the following May or June the ground is again weeded, and by September to November a light crop is obtained. In the fourth year, weeding again occurs, and if the cardamoms grow less than 180 cm (6 ft) apart a few are transplanted to new positions. The plants bear for three or four years, and historically the life of each plantation was about eight or nine years. In Malabar the seasons run a little later than in Mysore, and – according to some reports – a full crop may be obtained in the third year. Cardamoms grown above 600 m (2,000 ft) elevation are considered to be of higher quality than those grown below that altitude.

Plants may be raised from seed or by division of the rhizome. In about a year, the seedlings reach about 30 cm (1 ft) in height and are ready for transplantation. The flowering season is April to May; the fruits swell in August and September and usually attain the desired degree of ripeness by the first half of October. The crop is accordingly gathered in October and November, and in exceptionally moist weather the harvest may extend into December. At the time of harvesting, the scapes or shoots bearing the clusters of fruits are broken off close to the stems and placed in baskets lined with fresh leaves. The fruits are spread out on carefully prepared floors, sometimes covered with mats, and are then exposed to the sun. Four or five days of careful drying and bleaching in the sun is usually enough. In rainy weather, drying with artificial heat is necessary, though the fruits suffer greatly in colour; they are consequently sometimes bleached with steam and sulphurous vapour or with ritha nuts.

The industry is highly labour-intensive, each hectare requiring considerable maintenance throughout the year. Production constraints include recurring climatic vagaries, the absence of regular replanting, and ecological conditions associated with deforestation.

Cultivation

In 1873 and 1874, Ceylon (now Sri Lanka) exported about 4,100 kg (9,000 lb) each year. In 1877, Ceylon exported 5,039 kg (11,108 lb); in 1879, 8,043 kg (17,732 lb); and in the 1881–82 season, 10,490 kg (23,127 lb). In 1903, 1,600 hectares (4,000 acres) of cardamom-growing areas were owned by European planters. The produce of the Travancore plantations was given as 290,000 kg (650,000 lb), or just a little under that of Ceylon. The yield of the Mysore plantations was approximately 91,000 kg (200,000 lb), with cultivation mainly in Kadur district. Trade statistics for 1903–04 valued the cardamom exported at Rs. 3,37,000, compared with Rs. 4,16,000 the previous year. India, which ranks second in world production, recorded a decline of 6.7 percent in cardamom production for 2012–13, and projected a production decline of 30–40% in 2013–14 compared with the previous year, due to unfavorable weather. In India, the state of Kerala is by far the largest producer, with the districts of Idukki, Palakkad and Wynad being the principal producing areas. Because a number of bureaucrats have personal interests in the industry, several organisations have been set up in India to protect cardamom producers, such as the Cardamom Growers Association (est. 1992) and the Kerala Cardamom Growers Association (est. 1974). Research in India's cardamom plantations began in the 1970s, while Kizhekethil Chandy held the office of Chairman of the Cardamom Board. The Kerala Land Reforms Act imposed restrictions on the size of certain agricultural holdings per household, to the benefit of cardamom producers.

In 1979–1980, Guatemala surpassed India in worldwide production. Guatemala cultivates Elettaria cardamomum, which is native to the Malabar Coast of India. Alta Verapaz Department produces 70 percent of Guatemala's cardamom. Cardamom was introduced to Guatemala before World War I by the German coffee planter Oscar Majus Kloeffer. After World War II, production increased to 13,000 to 14,000 tons annually.

The average annual income for a plantation-owning household in 1998 was US$3,408. Although the typical harvest requires over 210 days of labor per year, most cardamom farmers are better off than many other agricultural workers, and a significant number of people from the upper strata of society are involved in cultivation. Increased demand since the 1980s, principally from China, for both Wurfbainia villosa and Lanxangia tsao-ko, has provided a key source of income for poor farmers living at higher altitudes in localized areas of China, Laos, and Vietnam, people typically isolated from many other markets. Laos exports about 400 tonnes annually through Thailand, according to the FAO.

Trade

Demand and supply patterns in the cardamom trade are influenced by price movements, nationally and internationally, in 5-to-6-year cycles. The leading importers are Saudi Arabia and Kuwait; other significant importers include Germany, Iran, Japan, Jordan, Pakistan, Qatar, the United Arab Emirates, the UK, and the former USSR. According to the United Nations Conference on Trade and Development, 80 percent of cardamom's total consumption occurs in the Middle East.

In the 19th century, Bombay and Madras were among the principal distributing ports of cardamom. India's exports to foreign countries increased during the early 20th century, particularly to the United Kingdom, followed by Arabia, Aden, Germany, Turkey, Japan, Persia and Egypt. However, some 95% of cardamom produced in India is consumed domestically, and India is itself by far the most important cardamom-consuming country in the world. India also imports cardamom from Sri Lanka. In 1903–1904, these imports came to 122,076 kg (269,132 lb), valued at Rs. 1,98,710. In contrast, Guatemala's local consumption is negligible, which allows most of the cardamom it produces to be exported. In the mid-1800s, Ceylon's cardamom was chiefly imported by Canada. After saffron and vanilla, cardamom is currently the third most expensive spice, and is used as a spice and flavouring for food and liqueurs.

History

Cardamom has been used in food and flavourings for centuries. During the Middle Ages, it was a prominent commodity in the spice trade, and the Arab states played a significant role in the trade of Indian spices, including cardamom. It is now ranked the third most expensive spice, following saffron and vanilla.

Cardamom production began in ancient times, and has been referred to in ancient Sanskrit texts as ela. The Babylonians and Assyrians used the spice early on, and trade in cardamom opened up along land routes and by the interlinked Persian Gulf route controlled from Dilmun as early as the third millennium BCE Early Bronze Age, into western Asia and the Mediterranean world.

The ancient Greeks thought highly of cardamom, and the Greek physicians Dioscorides and Hippocrates wrote about its therapeutic properties, identifying it as a digestive aid. Due to demand in ancient Greece and Rome, the cardamom trade developed into a handsome luxury business; cardamom was one of the spices eligible for import tax in Alexandria in 126 CE. In medieval times, Venice became the principal importer of cardamom into the west, along with pepper, cloves and cinnamon, obtained from merchants of the Levant in exchange for salt and meat products.

In China, Amomum was an important part of the economy during the Song Dynasty (960–1279). In 1150, the Arab geographer Muhammad al-Idrisi noted that cardamom was being imported to Aden, in Yemen, from India and China.

The Portuguese became involved in the trade in the 16th century, and the industry gained wide-scale European interest in the 19th century.

Additional Information

Cardamom is an herb that is often used as a spice in foods. The seeds and the oil from the seeds are sometimes used to make medicine.

Cardamom contains chemicals that might kill some bacteria, reduce swelling, and help the immune system.

Cardamom is used for diabetes, high cholesterol, build-up of fat in the liver in people who drink little or no alcohol (nonalcoholic fatty liver disease or NAFLD), and other purposes, but there is no good scientific evidence to support its use.

Side Effects

When taken by mouth: Cardamom is commonly consumed in foods. It is possibly safe when taken in the larger amounts found in medicine.

When inhaled: It is possibly safe to breathe the vapor from cardamom essential oil as aromatherapy.

Special Precautions and Warnings

Pregnancy: Cardamom is commonly consumed in foods. But it is possibly unsafe to take larger amounts of cardamom as medicine when pregnant. There is concern that cardamom might cause a miscarriage.

Breast-feeding: Cardamom is commonly consumed in foods. There isn't enough reliable information to know if cardamom is safe to use in larger amounts as medicine when breast-feeding. Stay on the safe side and stick to food amounts.

Dosing

Cardamom is often included as a spice in foods. As a supplement, it is most often taken by mouth in a dose of 3 grams daily for up to 4 weeks in adults. Speak with a healthcare provider to find out what dose might be best for a specific condition.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2298 2024-09-12 00:17:09

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,086

Re: Miscellany

2298) Voltage Regulator

Gist

A voltage regulator is any electrical or electronic device that maintains the voltage of a power source within acceptable limits. The voltage regulator is needed to keep voltages within the prescribed range that can be tolerated by the electrical equipment using that voltage.

Summary

A voltage regulator is any electrical or electronic device that maintains the voltage of a power source within acceptable limits. The voltage regulator is needed to keep voltages within the prescribed range that can be tolerated by the electrical equipment using that voltage. Such a device is widely used in motor vehicles of all types to match the output voltage of the generator to the electrical load and to the charging requirements of the battery. Voltage regulators also are used in electronic equipment in which excessive variations in voltage would be detrimental.

In motor vehicles, voltage regulators rapidly switch from one to another of three circuit states by means of a spring-loaded, double-pole switch. At low speeds, some current from the generator is used to boost the generator’s magnetic field, thereby increasing voltage output. At higher speeds, resistance is inserted into the generator-field circuit so that its voltage and current are moderated. At still higher speeds, the circuit is switched off, lowering the magnetic field. The regulator switching rate is usually 50 to 200 times per second.

Electronic voltage regulators utilize solid-state semiconductor devices to smooth out variations in the flow of current. In most cases, they operate as variable resistances; that is, resistance decreases when the electrical load is heavy and increases when the load is lighter.

Voltage regulators perform the same function in large-scale power-distribution systems as they do in motor vehicles and other machines; they minimize variations in voltage in order to protect the equipment using the electricity. In power-distribution systems the regulators are either in the substations or on the feeder lines themselves. Two types of regulators are used: step regulators, in which switches regulate the current supply, and induction regulators, in which an induction motor supplies a secondary, continually adjusted voltage to even out current variations in the feeder line.

Details

A voltage regulator is a system designed to automatically maintain a constant voltage. It may use a simple feed-forward design or may include negative feedback. It may use an electromechanical mechanism, or electronic components. Depending on the design, it may be used to regulate one or more AC or DC voltages.

Electronic voltage regulators are found in devices such as computer power supplies where they stabilize the DC voltages used by the processor and other elements. In automobile alternators and central power station generator plants, voltage regulators control the output of the plant. In an electric power distribution system, voltage regulators may be installed at a substation or along distribution lines so that all customers receive steady voltage independent of how much power is drawn from the line.

Electronic voltage regulators

A simple voltage/current regulator can be made from a resistor in series with a diode (or series of diodes). Due to the logarithmic shape of diode V-I curves, the voltage across the diode changes only slightly with changes in current drawn or changes in the input. When precise voltage control and efficiency are not important, this design may be adequate. Since the forward voltage of a diode is small, this kind of voltage regulator is only suitable for low regulated output voltages. When a higher output voltage is needed, a Zener diode or series of Zener diodes may be employed. Zener diode regulators make use of the Zener diode's fixed reverse voltage, which can be quite large.
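
To make the resistor-plus-Zener arrangement concrete, here is a minimal sizing sketch in Python; the 12 V input, 5.1 V Zener voltage, and the load and bias currents are assumptions chosen for illustration, not values from the text.

[code]
# Sizing sketch for the series-resistor-plus-Zener shunt regulator
# described above. All component values are illustrative assumptions.

V_IN = 12.0      # unregulated input voltage (V)
V_Z = 5.1        # Zener reverse (regulation) voltage (V)
I_LOAD = 0.020   # worst-case load current (A)
I_Z_MIN = 0.005  # minimum Zener current needed to stay in regulation (A)

# The series resistor must carry the load current plus the Zener bias
# current while dropping the difference between input and output.
r_series = (V_IN - V_Z) / (I_LOAD + I_Z_MIN)

# With the load disconnected, the Zener absorbs the full resistor
# current, so its power rating must cover that worst case.
p_zener_max = V_Z * (V_IN - V_Z) / r_series

print(f"Series resistor: {r_series:.0f} ohms")
print(f"Zener dissipation (no load): {p_zener_max:.2f} W")
[/code]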

Feedback voltage regulators operate by comparing the actual output voltage to some fixed reference voltage. Any difference is amplified and used to control the regulation element in such a way as to reduce the voltage error. This forms a negative feedback control loop; increasing the open-loop gain tends to increase regulation accuracy but reduce stability. (Stability is the avoidance of oscillation, or ringing, during step changes.) There is also a trade-off between stability and the speed of the response to changes. If the output voltage is too low (perhaps due to the input voltage falling or the load current increasing), the regulation element is commanded, up to a point, to produce a higher output voltage, either by dropping less of the input voltage (for linear series regulators and buck switching regulators) or by drawing input current for longer periods (boost-type switching regulators); if the output voltage is too high, the regulation element will normally be commanded to produce a lower voltage. However, many regulators have over-current protection, so that they will entirely stop sourcing current (or limit the current in some way) if the output current is too high, and some regulators may also shut down if the input voltage is outside a given range.
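
The loop behaviour described in this paragraph can be illustrated with a toy numerical model; the Python sketch below uses a simple integral-gain error amplifier driving an idealized series pass element, and every constant in it is an illustrative assumption rather than a value from any real regulator.

[code]
# Toy model of the negative-feedback loop described above: compare the
# output with a reference, amplify the error, and adjust how much of
# the input the pass element drops. All values are illustrative.

V_REF = 5.0    # fixed reference voltage (V)
GAIN = 0.5     # integral gain of the error amplifier (per step)
v_in = 9.0     # unregulated input (V)
drop = 0.0     # voltage dropped across the series pass element (V)

for step in range(20):
    v_out = v_in - drop               # linear series-regulator model
    error = V_REF - v_out             # positive when output is too low
    drop -= GAIN * error              # too low -> drop less of the input
    drop = min(max(drop, 0.0), v_in)  # the pass element has hard limits
    if step in (0, 4, 9, 19):
        print(f"step {step:2d}: v_out = {v_out:.4f} V")

# With moderate gain the loop settles at 5 V; raising GAIN toward 2.0
# makes the output ring, illustrating the accuracy/stability trade-off.
[/code]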

Electromechanical regulators

In electromechanical regulators, voltage regulation is easily accomplished by coiling the sensing wire to make an electromagnet. The magnetic field produced by the current attracts a moving ferrous core held back under spring tension or gravitational pull. As voltage increases, so does the current, strengthening the magnetic field produced by the coil and pulling the core towards the field. The magnet is physically connected to a mechanical power switch, which opens as the magnet moves into the field. As voltage decreases, so does the current, releasing spring tension or the weight of the core and causing it to retract. This closes the switch and allows the power to flow once more.

If the mechanical regulator design is sensitive to small voltage fluctuations, the motion of the solenoid core can be used to move a selector switch across a range of resistances or transformer windings to gradually step the output voltage up or down, or to rotate the position of a moving-coil AC regulator.

Early automobile generators and alternators had a mechanical voltage regulator using one, two, or three relays and various resistors to stabilize the generator's output at slightly more than 6.7 or 13.4 V, maintaining the battery charge as independently as possible of the engine's rpm and the varying load on the vehicle's electrical system. The relay(s) modulated the width of a current pulse to regulate the voltage output of the generator by controlling the average field current in the rotating machine, which determines the strength of the magnetic field produced and hence the unloaded output voltage per rpm. Capacitors are not used to smooth the pulsed voltage as described earlier. The large inductance of the field coil stores the energy delivered to the magnetic field in an iron core, so the pulsed field current does not result in as strongly pulsed a field. Both types of rotating machine produce a rotating magnetic field that induces an alternating current in the coils in the stator. A generator uses a mechanical commutator (graphite brushes running on copper segments) to convert the AC produced into DC by switching the external connections at the shaft angle when the voltage would reverse. An alternator accomplishes the same goal using rectifiers that do not wear down and require replacement.

Modern designs now use solid state technology (transistors) to perform the same function that the relays perform in electromechanical regulators.

Automatic voltage regulator

Generators, as used in power stations, ship electrical power production, or standby power systems, will have automatic voltage regulators (AVR) to stabilize their voltages as the load on the generators changes. The first AVRs for generators were electromechanical systems, but a modern AVR uses solid-state devices. An AVR is a feedback control system that measures the output voltage of the generator, compares that output to a set point, and generates an error signal that is used to adjust the excitation of the generator. As the excitation current in the field winding of the generator increases, its terminal voltage will increase. The AVR will control current by using power electronic devices; generally a small part of the generator's output is used to provide current for the field winding. Where a generator is connected in parallel with other sources such as an electrical transmission grid, changing the excitation has more of an effect on the reactive power produced by the generator than on its terminal voltage, which is mostly set by the connected power system. Where multiple generators are connected in parallel, the AVR system will have circuits to ensure all generators operate at the same power factor. AVRs on grid-connected power station generators may have additional control features to help stabilize the electrical grid against upsets due to sudden load loss or faults.

AC voltage stabilizers

Coil-rotation AC voltage regulator

This is an older type of regulator used in the 1920s that uses the principle of a fixed-position field coil and a second field coil that can be rotated on an axis in parallel with the fixed coil, similar to a variocoupler.

When the movable coil is positioned perpendicular to the fixed coil, the magnetic forces acting on the movable coil balance each other out and voltage output is unchanged. Rotating the coil in one direction or the other away from the center position will increase or decrease voltage in the secondary movable coil.

This type of regulator can be automated via a servo control mechanism to advance the movable coil position in order to provide voltage increase or decrease. A braking mechanism or high-ratio gearing is used to hold the rotating coil in place against the powerful magnetic forces acting on the moving coil.

Electromechanical

Electromechanical regulators, called voltage stabilizers or tap-changers, have also been used to regulate the voltage on AC power distribution lines. These regulators operate by using a servomechanism to select the appropriate tap on an autotransformer with multiple taps, or by moving the wiper on a continuously variable autotransformer. If the output voltage is not in the acceptable range, the servomechanism switches the tap, changing the turns ratio of the transformer, to move the secondary voltage into the acceptable region. The controls provide a dead band wherein the controller will not act, preventing the controller from constantly adjusting the voltage ("hunting") as it varies by an acceptably small amount.
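
A rough sketch of that dead-band logic in Python may make the "hunting" point clearer; the nominal voltage, tap step, and band width are illustrative assumptions, and a real controller would typically also wait out a time delay before acting.

[code]
# Dead-band tap-changer logic, as described above. The tap step, band
# width, and nominal voltage are illustrative assumptions.

V_NOMINAL = 230.0   # target secondary voltage (V)
DEAD_BAND = 4.0     # no action while the error is within +/- 2 V
TAP_STEP = 0.0125   # each tap changes the turns ratio by 1.25%
tap = 0             # current tap position (0 = nominal ratio)

def regulate(v_primary: float) -> float:
    """Move one tap when the output leaves the dead band; return output."""
    global tap
    v_out = v_primary * (1 + tap * TAP_STEP)
    error = v_out - V_NOMINAL
    if abs(error) > DEAD_BAND / 2:
        # Step the tap in the direction that reduces the error; inside
        # the band the controller deliberately does nothing, which is
        # what prevents "hunting" on small fluctuations.
        tap -= 1 if error > 0 else -1
        v_out = v_primary * (1 + tap * TAP_STEP)
    return v_out

for v in (230.0, 231.0, 236.0, 236.0, 222.0, 230.0):
    print(f"in {v:5.1f} V -> out {regulate(v):6.1f} V (tap {tap:+d})")
[/code]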

Constant-voltage transformer

The ferroresonant transformer, ferroresonant regulator or constant-voltage transformer is a type of saturating transformer used as a voltage regulator. These transformers use a tank circuit composed of a high-voltage resonant winding and a capacitor to produce a nearly constant average output voltage with a varying input current or varying load. The circuit has a primary on one side of a magnetic shunt, and the tuned-circuit coil and secondary on the other side. The regulation is due to magnetic saturation in the section around the secondary.

The ferroresonant approach is attractive due to its lack of active components, relying on the square loop saturation characteristics of the tank circuit to absorb variations in average input voltage. Saturating transformers provide a simple rugged method to stabilize an AC power supply.

Older designs of ferroresonant transformers had an output with high harmonic content, leading to a distorted output waveform. Modern devices are designed to produce a nearly perfect sine wave. The ferroresonant action is a flux limiter rather than a voltage regulator, but with a fixed supply frequency it can maintain an almost constant average output voltage even as the input voltage varies widely.

The ferroresonant transformers, which are also known as constant-voltage transformers (CVTs) or "ferros", are also good surge suppressors, as they provide high isolation and inherent short-circuit protection.

A ferroresonant transformer has several notable operating characteristics:

* It can operate with an input voltage range of ±40% or more of the nominal voltage.
* Output power factor remains in the range of 0.96 or higher from half to full load.
* Because it regenerates an output voltage waveform, output distortion, which is typically less than 4%, is independent of any input voltage distortion, including notching.
* Efficiency at full load is typically in the range of 89% to 93%. However, at low loads, efficiency can drop below 60%. The current-limiting capability also becomes a handicap when a CVT is used in an application with moderate to high inrush current, like motors, transformers or magnets. In this case, the CVT has to be sized to accommodate the peak current, thus forcing it to run at low loads and poor efficiency.
* Minimum maintenance is required, as transformers and capacitors can be very reliable. Some units have included redundant capacitors to allow several capacitors to fail between inspections without any noticeable effect on the device's performance.
* Output voltage varies about 1.2% for every 1% change in supply frequency. For example, a 2 Hz change in generator frequency, which is very large, results in an output voltage change of only 4%, which has little effect for most loads.
* It accepts 100% single-phase switch-mode power-supply loading without any requirement for derating, including all neutral components.
* Input current distortion remains less than 8% THD even when supplying nonlinear loads with more than 100% current THD.

Drawbacks of CVTs are their larger size, audible humming sound, and the high heat generation caused by saturation.

Power distribution

Voltage regulators or stabilizers are used to compensate for voltage fluctuations in mains power. Large regulators may be permanently installed on distribution lines. Small portable regulators may be plugged in between sensitive equipment and a wall outlet. Automatic voltage regulators are used on generator sets to maintain a constant voltage as the load changes. Power distribution voltage regulators normally operate on a range of voltages, for example 150–240 V or 90–280 V.

DC voltage stabilizers

Many simple DC power supplies regulate the voltage using either series or shunt regulators, but most apply a voltage reference using a shunt regulator such as a Zener diode, avalanche breakdown diode, or voltage regulator tube. Each of these devices begins conducting at a specified voltage and will conduct as much current as required to hold its terminal voltage to that specified voltage by diverting excess current from a non-ideal power source to ground, often through a relatively low-value resistor to dissipate the excess energy. The power supply is designed to only supply a maximum amount of current that is within the safe operating capability of the shunt regulating device.

If the stabilizer must provide more power, the shunt regulator's output is used only as a voltage reference for a larger electronic circuit, the voltage stabilizer proper, which is able to deliver much larger currents on demand.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2299 2024-09-13 00:03:33

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,086

Re: Miscellany

2299) Latitude

Gist

Lines of latitude, also called parallels, are imaginary lines that divide the Earth. They run east to west, but measure your distance north or south. The equator is the most well known parallel. At 0 degrees latitude, it equally divides the Earth into the Northern and Southern hemispheres. From the equator, latitude increases as you travel north or south, reaching 90 degrees at each pole.

Latitude measures the distance north or south of the equator. Latitude lines start at the equator (0 degrees latitude) and run east and west, parallel to the equator. Lines of latitude are measured in degrees north or south of the equator to 90 degrees at the North or South poles.

Summary

Latitude is the measurement of distance north or south of the Equator. It is measured with 180 imaginary lines that form circles around Earth east-west, parallel to the Equator. These lines are known as parallels. A circle of latitude is an imaginary ring linking all points sharing a parallel.

The Equator is the line of 0 degrees latitude. Each parallel measures one degree north or south of the Equator, with 90 degrees north of the Equator and 90 degrees south of the Equator. The latitude of the North Pole is 90 degrees N, and the latitude of the South Pole is 90 degrees S.

Like the poles, some circles of latitude are named. The Tropic of Cancer, for instance, is 23 degrees 26 minutes 21 seconds N—23° 26' 21'' N. Its twin, the Tropic of Capricorn, is 23° 26' 21'' S. The tropics are important geographic locations that mark the northernmost and southernmost latitudes where the sun can be seen directly overhead during a solstice.

One degree of latitude, called an arc degree, covers about 111 kilometers (69 miles). Because of Earth's curvature, the farther the circles are from the Equator, the smaller they are. At the North and South Poles, arc degrees are simply points.

Degrees of latitude are divided into 60 minutes. To be even more precise, those minutes are divided into 60 seconds. One minute of latitude covers about 1.8 kilometers (1.1 miles) and one second of latitude covers about 32 meters (105 feet).

For example, the latitude for Cairo, Egypt, in degrees and minutes would be written as 29° 52' N, because the city is 29 degrees, 52 minutes north of the Equator. The latitude for Cape Town, South Africa, would be 33° 56' S, because the city is 33 degrees, 56 minutes south of the Equator. Using seconds of latitude, global positioning system (GPS) devices can pinpoint schools, houses, even rooms in either of these towns.
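
The degree-minute-second arithmetic in the last few paragraphs is easy to mechanize; this is a minimal Python sketch using the approximate 111 km-per-degree figure quoted above (a mean value, since the true length of a degree varies slightly with latitude), with the Cairo coordinates as a test case.

[code]
# Convert degrees/minutes/seconds of latitude to decimal degrees, and
# estimate the distance from the Equator using the ~111 km-per-degree
# figure quoted above.

KM_PER_DEGREE = 111.0

def dms_to_decimal(deg: int, minutes: int = 0, seconds: float = 0.0) -> float:
    """60 minutes per degree, 60 seconds per minute."""
    return deg + minutes / 60 + seconds / 3600

def km_from_equator(lat_decimal: float) -> float:
    return abs(lat_decimal) * KM_PER_DEGREE

cairo = dms_to_decimal(29, 52)   # 29 deg 52 min N
print(f"Cairo: {cairo:.4f} deg N, "
      f"about {km_from_equator(cairo):.0f} km north of the Equator")
# -> Cairo: 29.8667 deg N, about 3315 km north of the Equator
[/code]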

The corresponding measurement of distance around the Earth from east to west is called longitude. The imaginary lines of latitude and longitude intersect each other, forming a grid that covers Earth. The points of latitude and longitude are called coordinates, and can be used together to locate any point on Earth.

Details

In geography, latitude is a coordinate that specifies the north–south position of a point on the surface of the Earth or another celestial body. Latitude is given as an angle that ranges from −90° at the south pole to 90° at the north pole, with 0° at the Equator. Lines of constant latitude, or parallels, run east–west as circles parallel to the equator. Latitude and longitude are used together as a coordinate pair to specify a location on the surface of the Earth.

On its own, the term "latitude" normally refers to the geodetic latitude as defined below. Briefly, the geodetic latitude of a point is the angle formed between the vector perpendicular (or normal) to the ellipsoidal surface from the point, and the plane of the equator.
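
The distinction between geodetic latitude and the angle measured to Earth's centre (geocentric latitude) can be made concrete with a small sketch; the relation tan ψ = (1 - e²) tan φ for a point on the ellipsoid surface is standard geodesy rather than something stated in the text, and the WGS84 flattening of 1/298.257223563 is assumed as the reference ellipsoid.

[code]
import math

# Contrast geodetic latitude (angle of the ellipsoid normal with the
# equatorial plane, as defined above) with geocentric latitude (angle
# of the line to Earth's centre). For a point on the ellipsoid surface,
# tan(geocentric) = (1 - e^2) * tan(geodetic). WGS84 constants assumed.

F = 1 / 298.257223563   # WGS84 flattening
E2 = F * (2 - F)        # first eccentricity squared, ~0.00669438

def geocentric_from_geodetic(phi_deg: float) -> float:
    phi = math.radians(phi_deg)
    return math.degrees(math.atan((1 - E2) * math.tan(phi)))

for phi in (0, 30, 45, 60):
    psi = geocentric_from_geodetic(phi)
    print(f"geodetic {phi:2d} deg -> geocentric {psi:7.4f} deg "
          f"(difference {phi - psi:.4f} deg)")
# The difference peaks near 45 deg at roughly 0.19 deg (about 12 arcmin).
[/code]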

Background

Two levels of abstraction are employed in the definitions of latitude and longitude. In the first step the physical surface is modeled by the geoid, a surface which approximates the mean sea level over the oceans and its continuation under the land masses. The second step is to approximate the geoid by a mathematically simpler reference surface. The simplest choice for the reference surface is a sphere, but the geoid is more accurately modeled by an ellipsoid of revolution. The definitions of latitude and longitude on such reference surfaces are detailed in the following sections. Lines of constant latitude and longitude together constitute a graticule on the reference surface. The latitude of a point on the actual surface is that of the corresponding point on the reference surface, the correspondence being along the normal to the reference surface, which passes through the point on the physical surface. Latitude and longitude together with some specification of height constitute a geographic coordinate system as defined in the specification of the ISO 19111 standard.

Since there are many different reference ellipsoids, the precise latitude of a feature on the surface is not unique: this is stressed in the ISO standard which states that "without the full specification of the coordinate reference system, coordinates (that is latitude and longitude) are ambiguous at best and meaningless at worst". This is of great importance in accurate applications, such as a Global Positioning System (GPS), but in common usage, where high accuracy is not required, the reference ellipsoid is not usually stated.

In English texts, the latitude angle, defined below, is usually denoted by the Greek lower-case letter phi. It is measured in degrees, minutes and seconds or decimal degrees, north or south of the equator. For navigational purposes positions are given in degrees and decimal minutes. For instance, The Needles lighthouse is at 50°39.734′ N 001°35.500′ W.

Determination

In celestial navigation, latitude is determined with the meridian altitude method. More precise measurement of latitude requires an understanding of the gravitational field of the Earth, either to set up theodolites or to determine GPS satellite orbits. The study of the figure of the Earth together with its gravitational field is the science of geodesy.

Additional Information

Horizontal mapping lines on Earth are lines of latitude. They are known as "parallels" of latitude, because they run parallel to the equator. One simple way to visualize this might be to think about having imaginary horizontal "hula hoops" around the earth, with the biggest hoop around the equator, and then progressively smaller ones stacked above and below it to reach the North and South Poles.

There are other named lines of latitude. They’re based on the sun’s position during Earth’s orbit, and they help us understand climate, weather, and ocean currents. The Tropic of Cancer, at roughly 23 degrees north, and the Tropic of Capricorn, at roughly 23 degrees south, are the boundaries of what we consider the tropics. The Arctic Circle and the Antarctic Circle are at roughly 66 degrees north and south, respectively. They mark the boundaries of the Arctic and Antarctic regions.

Each degree of latitude covers about 111 kilometers on the Earth’s surface. One degree of latitude can be further divided into 60 minutes, and one minute can be further divided into 60 seconds. A second of latitude covers only about 30.7 meters. Unlike longitude lines, which get closer to each other at the poles, latitude lines are parallel. No matter where you are on Earth, latitude lines are the same distance apart.

Latitude and longitude have been used in astronomy and navigation since ancient times. By calculating the angle between the horizon and a celestial object (usually the sun or the North Star), navigators could determine their approximate latitude using basic tools. The calculations were simple, so measuring latitude at sea was reliable hundreds of years before accurate longitude measurements could be calculated during a voyage. If the North Star was 60 degrees above the horizon, the observer was at 60 degrees latitude (north). This process was more complex in the southern hemisphere, where the North Star is not visible.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2300 2024-09-13 21:16:36

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,086

Re: Miscellany

2300) Baby shower

Gist

A baby shower is a celebration to mark the impending birth of a baby. It's a way for friends and family to come together and 'shower' parents-to-be with essential items and gifts. They can be flashy affairs with lots of baby shower activities and games, or they can be much more low-key and intimate.

A baby shower is a party centered on gift-giving to celebrate the delivery or expected birth of a child. It is a rite of passage that celebrates through giving gifts and spending time together.

Details

A baby shower is a party centered on gift-giving to celebrate the delivery or expected birth of a child. It is a rite of passage that celebrates through giving gifts and spending time together.

Etymology

The term shower is often assumed to mean that the expectant parent is "showered" with gifts. A related custom, called a bridal shower, may have derived its name from the custom in the 19th century for the presents to be put inside a parasol, which when opened would "shower" the bride-to-be with gifts.

Description

Traditionally, baby showers are given only for the family's first child, and only women are invited to the party, though this has changed in recent years; showers are now sometimes split up for different audiences, such as workplace or mixed-gender showers. Smaller showers, or showers in which guests are encouraged to give only diapers or similar necessities, are common for subsequent babies.

Activities at baby showers include gift-giving and playing themed games. Giving gifts is a primary activity. Baby shower games vary, sometimes including standard games such as bingo, and sometimes being pregnancy-themed, such as "guess the mother's measurements" or "guess the baby".

According to etiquette authority Miss Manners, because the party centers on gift-giving, the baby shower is typically arranged and hosted by a close friend rather than a member of the family, since it is considered improper for families to beg for gifts on behalf of their members. The pregnant mother, as well as her mother and mother-in-law, and any sisters and sisters-in-law are commonly considered too closely related to properly host a baby shower, but a more distant family member, such as a cousin, might be accepted. However, this custom varies by culture or region and in some it is expected and customary for a close female family member to host the baby shower.

Timing

Pre-birth baby showers may be held late in the pregnancy, but not usually during the last few weeks, in case of a pre-term birth.

Many cultures do not have pre-birth celebrations. When a baby shower is held after a baby's birth, an invitation to attend the shower may be combined with a baby announcement. In China, it is considered unlucky to have a baby shower before the baby is born, and gifts are usually sent after the birth, unrelated to a party. In the US, if a baby shower does not happen before the arrival of the baby, a sip-and-see party or other similar events can be organized after the birth.

Gifts

Guests bring small gifts for the expectant parent. Typical gifts related to babies include diapers, blankets, baby bottles, clothes, and toys. It is common to open gifts during the party; sometimes the host will make a game of opening gifts.

Whether and how a gift registry is used depends partly on the family's class, because wealthier families do not depend on the gifts received to care for the baby. Preparing a gift registry is a time-consuming and potentially fun activity for the parents-to-be. It may result in less personal gifts (e.g., the purchase of a store-bought item instead of a handmade one). As with gift registries for other gift-giving occasions, some guests appreciate them, and others do not.

Some families discourage gifts, saying that they want "your presence, not presents", or organizing a different activity, such as a blessing ceremony.

Social significance

In the United States, the baby shower is the only public event that recognizes a woman's transition into motherhood.

The baby shower is a family's first opportunity to gather people together to help play a part in their child's life. The new parents may wish to call on people to assist in the upbringing of their child, and help educate the child over time. People around the family, who care for them, want to be involved in the child's life, and a baby shower presents an opportunity for them to give gifts and be of help, showing their love for the family. If it happens before the birth, it allows the new family to thank everyone.

History

Baby showers are relatively new, having become popular only in the middle of the 20th century, but other celebrations and rituals associated with pregnancy and childbirth are both ancient and enduring.

Ancient Egypt

In ancient Egypt, rituals relating to the birth of a child took place after the event itself. Quite unlike modern baby showers, this involved the mother and the child being separated to "contain and eliminate the pollution of birth"; this may have included visiting local temples or shrines. After this, household rituals may have taken place, but the specifics are hard to study as these were such female-focused events.

Ancient and Modern India

In India, a pregnancy ritual has been followed since the Vedic ages: an event called Simantha, held in the 7th or 8th month. The mother-to-be is showered with dry fruits, sweets and other gifts that help the baby's growth. A musical event to please the baby's ears is the highlight of the ritual, as it was common knowledge that the baby's ears would start functioning within the womb. The ritual prays for a healthy baby and mother, as well as a happy delivery and motherhood.

Ancient Greece

The ancient Greeks also celebrated pregnancy after the birth, with a shout (ololyge) after the labor had ended, to indicate that "peace had arrived". Five to seven days later, a ceremony called Amphidromia indicated that the baby had been integrated into the household. In wealthy families, the public dekate ceremony, after ten days, marked the mother's return to society. (The ten-day period is still observed in modern-day Iran.)

Medieval Europe

Due to the likelihood a mother would die in childbirth, this time was recognized as having a great risk of spiritual danger in addition to the risk of physical danger. Priests would often visit women during labor so they could confess their sins. After the birth, usually on the same day, a baptism ceremony would take place for the baby. In this ceremony, the godparents would give gifts to the child, including a pair of silver spoons.

Renaissance Europe

Pregnancies at this time were celebrated with many other kinds of birth gifts: functional items, like wooden trays and bowls, as well as paintings, sculptures, and food. Childbirth was seen as almost mystical, and mothers-to-be were often surrounded with references to the Annunciation by way of encouragement and celebration.

Victorian Britain and North America

Superstitions sometimes led to speculation that a woman might be pregnant, such as two teaspoons being accidentally placed together on a saucer. Gifts were usually hand-made, but the grandmother would give silver, such as a spoon, mug, or porringer. In Britain, the manners of the upper class (and, later, middle class) required pregnancy to be treated with discretion: the declining of social invitations was often the only hint given. After the birth, a monthly nurse would be engaged, whose duties included regulating visitors. When the nanny took over, the mother began to resume normal domestic life, and the resumption of the weekly 'at home' afternoon tea provided an opportunity for female friends to visit. The christening, usually held when the child was between 8 and 12 weeks old, was an important social event for the family, godparents and friends.

Modern North America

The modern baby shower in America started in the late 1940s and the 1950s, as post-war women were expecting the Baby Boomers generation. As in earlier eras, when young women married and were provided with trousseaux, the shower served the function of providing the mother and her home with useful material goods.

While continuing the traditions from the 1950s, modern technology has altered the form a baby shower takes: games can include identifying baby parts on a sonogram. Moreover, although traditional baby showers were female-exclusive, mixed-gender showers have increased in frequency.

In different countries

A diaper cake is a party decoration made from baby diapers, elaborately arranged to look like a fancy tiered cake.

Baby showers and other social events to celebrate an impending or recent birth are popular around the world, but not in Western Europe. They are often women-only social gatherings.

* In Armenia, a baby shower is called "qarasunq" and is celebrated 40 days after the birth. It is a mixed party for all relatives and friends. Guests usually bring gifts for the baby or parents.
* In Australia, Canada, New Zealand, and the United States, baby showers are a common tradition.
* In Brazil, a party called "chá de bebê" (baby tea) is offered before birth.
* In Bulgaria, as a superstition, no baby gifts are given to the family before the baby's birth. However, family and friends give or send unsolicited gifts to the newborn baby, even if some babies are kept from the public for the first 40 days to prevent early infections.
* In Chinese tradition a baby shower, manyue (满月), is held one month after the baby is born.
* In Hmong culture, a baby shower is called "Puv Hli", and is held one month after the baby is born. A ceremony would be hosted by the paternal grandparents or the father to welcome the baby to the family by tying the baby's wrist with white yarn and/or strings.
* In Costa Rica, a baby shower party is called té de canastilla ("basket tea"), and multiple events are held for a single pregnancy for the family, co-workers, and friends.
* In Egypt, a baby shower is known as "Sebouh" (sebouh means "week"), and is usually celebrated one week after birth, hence its name. It is typically celebrated with a DJ, much decoration, a food and candy buffet, and activities and games.
* In Guatemala, only women attend this event. Middle-class women usually celebrate more than one baby shower (one with close friends, co-workers, family, etc.).

* In Indian tradition, baby showers are called by different names depending on the family's community.
** In northern India it is known as godbharaai (filled lap), in the Punjab region, it is also known as "reet". In western India, especially Maharashtra, the celebration is known as dohaaljewan, and in Odisha it is called saadhroshi. In West Bengal, in many places a party named "sadh" or "sadhbhokkhon" is observed on the seventh month of pregnancy. After this, the woman resides in her father's house instead of her husband's until the birth.
** In southern India, in Tamil Nadu and Andhra Pradesh it is called seemantham, valaikaapu or poochoottal. The expecting mother wears bangles and is adorned with flowers.
** In Karnataka it is called seemanta or kubasa. It is held when the woman is in her 5th, 7th, or 9th month of pregnancy.
** In coastal Karnataka, especially in Tulunadu (Tulu speaking region), the ceremony is also known as "baayake". Baayake in Tulu means desire. It is popularly considered that pregnant women crave fruits and eatables during the pregnancy period; and the ceremony was designed in the olden days to fulfill the desire or food cravings of the mother-to-be.
** Although these might be celebrated together, they are very different: seemantham is a religious ceremony, while valaikappu and poochoottal are purely social events much like Western baby showers. In a valaikappu or poochoottal, music is played and the expectant mother is decked in traditional attire with many flowers and garlands made of jasmine or mogra. A swing is decorated with flowers of her choice, which she uses to sit and swing. At times, symbolic cut-outs of moons and stars are put up. The elderly ladies from the household and community shower blessings on the expectant mother and gifts are given to her.
** In Gujarat, it is known as seemant or kholo bharyo, a religious ritual for most Gujarati Hindus during the 5th or 7th month of pregnancy, usually only for the first child. The expectant mother can only go to her father's house for delivery after her seemant. They offer special prayer and food to the goddess "Randal, the wife of the Sun".
** In Jain tradition, the baby shower ceremony is often called "Shreemant". The expectant mother can go to her father's house in the 5th month of pregnancy and has to come back before the baby shower ceremony. After the ceremony the expectant mother cannot go back to her father's house. The ceremony is only performed on a Sunday, Tuesday or Thursday of the seventh or ninth month of pregnancy. During the ceremony, one of the practices is that the younger brother-in-law of the expectant mother dips his hands in kumkuma water and slaps the expectant mother seven times on her cheeks, and then the expectant mother slaps her younger brother-in-law seven times on his cheeks.
** In Kerala it is known as pulikudi or vayattu pongala, and is practiced predominantly in the Nair community, though its popularity has spread to other Hindu sects over the years. On an auspicious day, after being massaged with homemade ayurvedic oil, the woman has a customary bath with the help of the elderly women in the family. After this, the family deity is worshipped, invoking all the paradevatas (family deities), and a concoction of herbal medicines, prepared traditionally, is given to the woman. She is dressed in new clothes and jewellery used for such occasions. A big difference between the Western concept of a baby shower and the Hindu tradition is that the Hindu ceremony is a religious ceremony to pray for the baby's well-being. In most conservative families, gifts are bought for the mother-to-be but not the baby. The baby is showered with gifts only after birth.
* In Iran, a baby shower (Persian: حمام زایمان) is also called a "sismooni party". It is celebrated 1–3 months before the baby's birth. Family and close friends give gifts intended for the baby, such as a cot, toys, and baby clothes.
* In the Islamic tradition of Aqiqah, an animal (such as a sheep) is slaughtered anytime after the birth, and the meat is distributed among relatives and the poor. The practice is considered sunnah and is not done universally.
* In Italy, a party is held when the expectant mother is three or four months pregnant. Marked by the revelation of the baby's gender to parents, friends, and relatives, this festive gathering features an array of food and music. Symbolically, colored balloons, either pink or blue, are released into the air, signifying the anticipated arrival of a baby girl or boy. Attendees express their well-wishes through the presentation of gifts to the soon-to-be parents. This tradition was recently imported to Italy, where it was not celebrated before the early 2010s.
* In Mongolia, a baby shower is called "huuhdyn ugaalga".
* In Nepal a baby shower is known as "dahi chiura khuwaune". The mother-to-be is given gifts from her elders and a meal is cooked for her according to her preferences. The pregnant mother is often invited by her relatives to eat meals with them. Pasni is a traditional celebration that often marks a baby boy's sixth month or a baby girl's fifth month, marking the transition to a diet higher in carbohydrates and allowing guests to bestow blessings, money, and other gifts.
* In Puerto Rico, a baby shower is celebrated anytime after other family members are made aware of the pregnancy, but typically during the last trimester. The grandmother, sisters, or friends of the pregnant mother organize the celebration and invite other relatives and friends. It is not common for men to attend baby showers. Guests bestow the "bendición" (blessing) along with money and other gifts.
* In Russia and the Commonwealth of Independent States, there are no baby showers, though some of the younger generation are starting to adopt the custom.
* In South Africa, a baby shower is called a stork party (named after the folk myth that a white stork delivers babies), and typically takes place during the mother's 6th month. Stork parties, usually not attended by men and often organized as a surprise for the mother, involve silliness such as dressing up, and mothers receive gifts of baby supplies.
* In Vietnam, as a superstition, no baby shower should be planned before the baby arrives. The baby shower is only organized when the baby is one month old. It is known as "Đầy tháng", which means "one full month". The party is usually organized by the baby's parents and/or the paternal grandparents. Relatives and close friends are invited. Gifts are welcomed, but guests avoid white gifts such as white clothing, towels, or cloths, as white is the colour of mourning.

Baby showers for fathers

Some baby showers are directed at fathers. These may be more oriented towards drinking beer, watching sports, fishing, or playing video games. The gifts are primarily diapers and diaper-related items. The organization of the diaper party is typically done by the friends of the father-to-be as a way of helping to prepare for the coming child. These parties may be held at local pubs/bars, a friend's house, or the soon-to-be grandfather's house. In the United Kingdom, this is called wetting the baby's head, and is generally more common than baby showers. However, with the growth of American cultural influence – accelerated through celebrities via social media sites like Instagram – baby shower decorations are becoming more common in the United Kingdom. Wetting the baby's head is traditionally when the father celebrates the birth by having a few drinks and getting drunk with a group of friends.

There has been some controversy over these, with Judith Martin calling them a "monstrous imposition", although she was referring to the attitude of demanding gifts and not necessarily the male version of a baby shower.

In Hungary, such an event is called a milking party, and is traditionally held to bless the mother-to-be with breast milk for the newborn. In practice, it is the father's last day off for some time, as he is expected to stay home afterwards to help. No comparable domestic custom, such as a baby shower, exists for mothers. Gifts for the baby are given on the first visit to his or her home, which, due to health concerns, happens at a time suitable for both sides.

Names for events

* Diaper shower refers to a small-scale baby shower, generally for subsequent children, when the parents don't need as many baby supplies.
* Grandma's shower refers to a shower at which people bring items for the grandparents to keep at their house, such as a collapsible crib and a changing pad.
* Sprinkles or mistings are small showers for a subsequent child, especially a child who is of a different gender than the previous offspring.
* A sip and see party is a celebration usually planned by the new parents after the baby's birth, so that friends and family can sip on refreshments and meet the new baby.
* An adoption shower is held to celebrate a child's adoption into a family. Such events are called welcome parties when it's an older child being adopted rather than an infant.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

