Math Is Fun Forum

  Discussion about math, puzzles, games and fun.


#2476 2025-03-01 00:03:14

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 50,097

Re: Miscellany

2375) Polyurethane foam

Gist

Polyurethane foam is used as a thermal insulator in construction; in appliances such as refrigerators; in insulated products (thermoware) such as thermos bottles and cool boxes; and in automobiles (in armrests, doors and seats). It is also used to produce panels, slab-stock, cushions, mattresses and packaging materials.

Flexible polyurethane foam is a soft, low-density foam. It is considered open-cell, meaning that the cells that make up the material are interconnected rather than sealed off from one another. Air can therefore flow freely through the foam, so it gives readily when pressure is applied.

Polyurethane Foam refers to a type of foam formed by the reaction of polyols and isocyanates, ranging from rigid insulating materials to flexible cushioning materials used in various applications such as construction, furniture, bedding, and packaging.

Summary

Aerosol polyurethane foam is a one-component, semi-rigid material that cures with the moisture in air and expands while curing; it is used as an installation, grouting and insulation material.

Flexible polyurethane foams are used mainly for the assembly of doors and windows, infilling applications, sound and heat insulation, waterproof barriers and insulation against fire. Polyurethane foam reacts rapidly with moisture in the air and expands after application. Thanks to its high adhesive power, polyurethane bonds extremely well to the surfaces it is applied to.

Where are They Used?

Door and window assembly is the most common and effective usage area for PU foams. They are also used for insulating electrical installations and hot and cold water pipes; bonding roofing tiles; sealing terraces and concrete shear-wall buildings; industrial roof insulation; cold storage houses and ice plants; ship and yacht decks; filling the voids between external thermal-insulation materials; bonding insulation materials; and filling voids in, and insulating, dry-food storages.

What are the PU Foam Types?

Polyurethane foams may be divided into two groups, depending on their application type and intended use.

1- PU Foams According to Their Application Type:

Foams with Straw: This is the most common and best-selling foam type on the market. It is used with the straw applicator supplied with each canister. It is preferred for filling broad voids because of its high expansion rate (on average 200%-250%). Akfix Maximum PU Foam products coded 805, 940, 806, 840 and 820 are preferred for straw applications.

Foams with Guns (Professional): These are used with a special application gun and mostly address professionals. They are preferred for filling relatively narrow voids because of their low expansion rate (on average 0%-60%). Akfix gun foams coded 805P, 850, 840P and 820P offer an effective solution for filling small voids.
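To make these percentages concrete, here is a back-of-envelope sketch in Python (an illustration only, not manufacturer guidance; it assumes "expansion rate" means growth over the dispensed volume):

```python
def cured_volume(dispensed_litres: float, expansion_rate: float) -> float:
    """Approximate cured volume, assuming the foam grows by
    expansion_rate times the dispensed volume (2.5 means 250%)."""
    return dispensed_litres * (1 + expansion_rate)

print(cured_volume(1.0, 2.5))  # straw foam at 250%: 3.5 L
print(cured_volume(1.0, 0.6))  # gun foam at 60%: 1.6 L
```

So a litre of straw foam may cure to roughly three times its dispensed volume, while gun foam stays much closer to the volume dispensed.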

2- PU Foams According to Their Intended Use:

Foams for Filling Voids: This is the standard and most frequently used foam type.

Adhesion Foams: These foams have an increased adhesive property, owing to very low (minimal) expansion rates compared with standard foams and a denser polymer. Heat-insulation materials such as EPS and XPS, aerated concrete, brick and other construction materials such as marble may be bonded with these products. For this group, Akfix products coded 960, 960P and 962P are preferred.

Contribution of Polyurethane Foams to Sound, Heat and Water Insulation

PU foam must be applied correctly to achieve good insulation.

Sound Insulation

PU foams are used for filling voids between sound-insulation materials and around doors and windows; preventing the passage of sound, heat and air; insulating around vent ducts, chimneys and air-conditioning units protruding from buildings; and in many similar applications.

Heat Insulation

Two-component rigid (strong) foams are preferred for heat insulation. Akfix 812 PU foam, developed for low-temperature application, and Akfix 885 PU foam, usable at temperatures down to -25 °C, ensure excellent heat insulation even under very challenging conditions.

Water Insulation

Water cannot damage foam that has been properly applied and has cured completely. One-component polyurethane foams are preferred for outdoor applications: decorative pools, bonding decorative stones to one another, and preventing water leakage from the joints of fixtures such as air-conditioning units and antennas on caravan-type vehicles. Akfix polyurethane foams provide a long-lasting, effective solution for sound, heat and water insulation.

Frequently Asked Questions:

How long does PU foam last?

The foam lasts as long as the structure it is applied to, provided it is not exposed to intense UV light. UV light deforms the foam and degrades its structure. Cured foam can be protected from harmful light by painting its surface.

Do moisture, wetness, bleeding or humidity occur where rigid spray foam is applied?

Rigid foam is applied to building roofs and has excellent heat-insulation capability. Residents on the top floor will not experience moisture, humidity or the resulting mould, provided no other point creates a thermal bridge.

Is Polyurethane Foam flammable?

There are fire-resistant foams in fire classifications B1 and B2 according to the German standard DIN 4102; these products resist fire for a certain time when exposed directly to flame or kept where a fire continues. Akfix B1 and B2 foams resist fire for up to 217 minutes. Among fire-resistant products, Akfix foams coded 820, 820P, 840 and 840P are preferred.

How is it cleaned?

The most practical way to clean wet foam is to use a foam cleaner. Foam smeared around the work area during application can be removed from building materials, skin and textiles by spraying foam cleaner on it before it dries; foam residues come off easily with Akfix 800C foam cleaner. Acetone can also be used when foam cleaner is not available. Cured foam can only be removed mechanically, for example with a utility knife.

Which Names Are They Referred By In The World?

Names commonly used in the market include: one-component PU foam, polyurethane PU foam, OCF PU foam sealant, gap-filling PU foam, window-assembly PU foam, mounting PU foam, montage PU foam, heat-insulation PU foam, sound-insulation PU foam, door-assembly PU foam, insulation foam and construction foam.

Details

Polyurethane foam is a solid polymeric foam based on polyurethane chemistry. A specialist synthetic material with highly diverse applications, polyurethane foam is primarily used for thermal insulation and as a cushioning material in mattresses, upholstered furniture and vehicle seating. Its low density and thermal conductivity, combined with its mechanical properties, make it an excellent thermal and sound insulator as well as a structural and comfort material.

Polyurethane foams are thermosetting polymers: they cannot be melted and reshaped once initially formed, because the chemical bonds between the molecules in the material are very strong and are not broken down by heating. Once cured and cooled, the material maintains its shape and properties.

Classification of polyurethane foams
Polyurethane foams are the most widely used representatives of thermoset foams. Depending on their cellular structure, they can be classified as open- or closed-cell foams. Looking at mechanical properties, there are two main types of polyurethane foam: flexible (soft) and rigid (hard). Generally speaking, flexible polyurethane foams have an open-cell structure in which the pores are interconnected, smaller in size and irregularly shaped; rigid polyurethane foams, by contrast, have a closed-cell structure in which the pores are not interconnected. The market share of the two types is roughly equal.

There are various processing technologies in the production of polyurethane foams. Depending on the properties required by the end application, the two most often used in large-scale production are moulding and slabstock (block) foaming. Next to these, other prominent types include cavity-filling foam (e.g. car fillings used for acoustic insulation) and spray foam (e.g. roof thermal insulation). Applied behind appropriate overlays, these are known as semi-flexible foams.

Flexible polyurethane foam

Flexible polyurethane foam (FPUF) is produced from the reaction of polyols and isocyanates, a process pioneered in 1937. Depending on the application the foam will be used for, a series of additives is necessary to produce high-quality PU foam products. FPUF is a versatile material that can be tailored to exhibit different properties. It allows for superior compression, load-bearing and resilience, which provide a cushioning effect. Because of these properties, its light weight and its efficient production process, it is often used in furniture, bedding, automotive seating, athletic equipment, packaging, footwear and carpets.

Flexible polyurethane foams with a high volume of open pores are highly regarded as an effective noise-absorption material and are widely used as acoustic insulation in sectors from construction to transportation. FPUF is also a very resilient material that does not deteriorate over time; its lifetime is typically linked to the lifetime of the application it is used in.

Types of Flexible Polyurethane Foams based on Manufacturing Technology

Flexible polyurethane foams can be manufactured through a continuous (slabstock) production process or a moulding process. In the continuous process, the mixed ingredients are poured onto a conveyor belt. The chemical reaction occurs instantly, causing the foam to rise within seconds and then solidify. In theory, foam blocks several kilometres in length could be produced this way. In practice, the foam blocks are typically cut at lengths of between 15 and 120 m, cured and stored for further processing.

Contrary to slabstock foam, moulded foam production is a discontinuous process. Moulded foam articles are made one at a time by injecting the foam mixture into moulds. As the foam rises and expands, it occupies the whole space in the mould. It solidifies almost instantly, and the produced part can then be removed from the mould, either mechanically or manually. This is the biggest advantage of moulded PU foams: they can be moulded into specific desired shapes, eliminating the need for cutting and reducing waste. They can also be produced with multiple zones of hardness and with reinforcements for easier assembly. This is why moulded foam technology is widely used in the production of seat cushions for the transport industries.

Based on the production process, other types of flexible polyurethane foams may include rebonded (or recycled), reticulated and auxetic PU foams.

Sustainability of Flexible Polyurethane Foams

Since the invention of polyurethane chemistry there have been constant innovations in the industry, driven by the need to decrease the toxicity of the chemical substances used in production processes. Examples include reducing emissions of Volatile Organic Compounds and using blowing agents with lower global warming potential (GWP) and ozone-depleting potential (ODP).

In the last decades, the main focus of the FPUF industry has been improving the environmental impact of its products and processes. A cradle-to-gate analysis of flexible (TDI slabstock) PU foam shows that (by far) the largest effect on the life cycle of the PU foams is due to raw materials extraction and production. Depending on the parameters, these account for about 90% of the total Greenhouse Gas (GHG) emissions.

Traditionally, nearly all raw materials used for flexible PU foam production have been of fossil origin. Today it is possible to make flexible PU foams from alternative, non-fossil sources, significantly improving their environmental footprint. These include bio-polyols, recycled polyols and CO2-based polyols.

As a thermosetting polymer, PU foam cannot simply be melted down at the end of its useful life to make new products. For PU foam-containing products, however, various recycling technologies are available and in broad use today:

* Physical (or mechanical) recycling. Physical recycling changes the physical properties of the material to a form more suitable for further processing; the chemical composition of the PU material is not changed. The most common method is called rebonding, in which flexible PU foam (production cut-offs and end-of-life foam) is transformed into so-called trim (foam flocks), which in turn can become rebonded foam used in products such as carpet underlay, gym mats, acoustic insulation, mattresses and furniture cushioning. Other types of mechanical recycling of PU foam include regrinding (powdering), compression moulding and adhesive pressing of powdered PU waste. Regrinding, for example, involves shredding PU material into a fine powder and mixing it as filler with a polyol component to make new PU foams.
* Chemical recycling (or depolymerisation). Chemical recycling methods focus on recovering monomers, which can be used to synthesise new polymers. The chemical composition of the waste PU foam is changed by breaking down and reforming the targeted bonds to recover the original raw materials: flexible polyurethane foam is broken down into its constituent chemical raw materials, which can be used again to make fresh foam. The technology has been in use at an industrial scale in Europe since 2013 for post-industrial flexible PU foam. Differentiated by the base material used to dissolve PU foam, depolymerisation technologies include hydrolysis, aminolysis, alcoholysis and glycolysis.
* Feedstock (or thermo-chemical) recovery. Feedstock recycling involves thermal processing of (often mixed) waste materials, of which PU can be one constituent, disintegrating them at a molecular level and recovering synthesis gas (syngas) and fuel gas, products which can be further used as new raw materials by the petrochemical industry. Mass-balance accounting is needed to account for the recycled content. For applications that are too difficult to dismantle or too contaminated to recycle, thermo-chemical recycling is the best option. This technology in particular allows the production of new "virgin-equivalent" raw materials, which are especially appropriate for applications that must comply with stringent requirements, e.g. in the automotive industry.

Globally today the most often used waste management methods are landfilling and energy recovery. These should only be used when recycling methods are not available or cost-effective. Energy recovery processes include combustion, incineration and thermal degradation of PU.

Rigid polyurethane foams

Rigid polyurethane foam has many desirable properties, which have enabled its increased use in various applications, some of them quite demanding. These properties include low thermal conductivity, making it useful as an insulator; low density compared with metals and many other materials; and good dimensional stability (a metal expands on heating, whereas rigid PU foam does not). Rigid PU foams also have excellent strength-to-weight ratios. As in many applications, there has been a trend to make rigid PU foam from renewable raw materials in place of the usual polyols.

They are used in vehicles, planes and buildings in structural applications. They have also been used in fire-retardant applications.

Space shuttles

Polyurethane foam was widely used to insulate the fuel tanks of the Space Shuttle. However, it requires a perfect application: any air pocket, dirt or tiny uncovered spot can cause foam to break off under the extreme conditions of liftoff, which include violent vibrations, air friction and abrupt changes in temperature and pressure. Two obstacles stood in the way of a perfect application: the limitations imposed by workers having to wear protective suits and masks, and the inability to test for cracks before launch, since such testing was done only by the naked eye. The loss of foam caused the Space Shuttle Columbia disaster. According to the Columbia accident report, NASA officials found foam loss in over 80% of the 79 missions for which they had pictures.

By 2009, researchers had created a superior polyimide foam to insulate the reusable cryogenic propellant tanks of the Space Shuttle.

Additional Information

Polyurethane is any of a class of synthetic resinous, fibrous, or elastomeric compounds belonging to the family of organic polymers made by the reaction of diisocyanates (organic compounds containing two functional groups of structure ―NCO) with other difunctional compounds such as glycols. The best known polyurethanes are flexible foams—used as upholstery material, mattresses, and the like—and rigid foams—used for such lightweight structural elements as cores for airplane wings.

Foamed polyurethanes result from the reaction of diisocyanates with organic compounds, usually polyesters, containing carboxyl groups; these reactions liberate bubbles of carbon dioxide that remain dispersed throughout the product. Use of polyethers or polyesters containing hydroxyl groups in preparing polyurethanes results in the formation of elastomeric fibres or rubbers that have outstanding resistance to attack by ozone but are vulnerable to the action of acids or alkalies.

In textiles the synthetic fibre known generically as spandex is composed of at least 85 percent polyurethane by weight. Such fibres are generally used for their highly elastic properties. Trademarked fibres in this group are Lycra, Numa, Spandelle, and Vyrene. Such fibres have, for many textile purposes, largely replaced natural and synthetic rubber fibres.

Although somewhat weak in the relaxed state, spandex fibres can be stretched about 500–610 percent beyond their original length without breaking and quickly return to their original length. The fibre, usually white with dull lustre, is readily dyed. It absorbs very little moisture. It melts at about 250° C (480° F) and yellows upon prolonged exposure to heat or light. Items made of spandex can be machine washed and dried at moderate temperatures. Use of chlorine bleach can produce yellowing. Spandex fibres are frequently covered with other fibres such as nylon.

Spandex is used in such apparel as foundation garments, support hosiery, and swimsuits. It is light in weight and cool; it is resistant to deterioration from body acids; and it is easily laundered and quick-drying.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2477 2025-03-02 00:05:58

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 50,097

Re: Miscellany

2376) Birthday

Gist

Strictly speaking, a birthday is the exact date of your birth (dd/mm/yyyy), the day you came into the world, whereas a birth anniversary is the celebration of that particular day (dd/mm) every year. In practice, however, we use the term "birthday" in place of "birth anniversary".

Summary

Annual celebrations and commemorations came about with the invention of the calendar. Not much is known about the first birthday celebrations in history, in part because they are very ancient. The earliest ones we know about were for nobles, in which the celebration played a performative social function that celebrated the noble as a leader. The use of the date of birth for such celebrations was somewhat arbitrary; other dates, such as the date of coronation or the annual festival of a patron deity, were common as well.

The tradition of celebrating everyone's birthday is fairly recent. It coincided with several socioeconomic trends in the 19th and 20th centuries that saw the rise of consumerism and increased investment in the upbringing of children—and, thus, the annual celebration of their lives through the giving of gifts.

Details

A birthday is the anniversary of the birth of a person, or figuratively of an institution. The birthdays of people are celebrated in numerous cultures, often with birthday gifts, birthday cards, a birthday party, or a rite of passage.

Many religions celebrate the birth of their founders or religious figures with special holidays (e.g. Christmas, Mawlid, Buddha's Birthday, Krishna Janmashtami, and Gurpurb).

There is a distinction between birthday and birthdate (also known as date of birth): the former, except for February 29, occurs each year (e.g. January 15), while the latter is the complete date when a person was born (e.g. January 15, 2001).
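To make the distinction concrete, here is a minimal Python sketch (the dates are arbitrary examples, and treating a February 29 birthdate as February 28 in common years is just one convention among several):

```python
from datetime import date

def birthday_in_year(birthdate: date, year: int) -> date:
    """Return the birthday observance for `year`.
    A February 29 birthdate falls back to February 28 in common years
    (one convention; some jurisdictions use March 1 instead)."""
    try:
        return birthdate.replace(year=year)
    except ValueError:  # February 29 in a non-leap year
        return date(year, 2, 28)

dob = date(2001, 1, 15)                           # birthdate: the complete date
print(birthday_in_year(dob, 2025))                # birthday: 2025-01-15
print(birthday_in_year(date(2000, 2, 29), 2025))  # leap-day case: 2025-02-28
```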

Legal conventions

In most legal systems, one becomes a legal adult on a particular birthday when they reach the age of majority (usually between 12 and 21), and reaching age-specific milestones confers particular rights and responsibilities. At certain ages, one may become eligible to leave full-time education, become subject to military conscription or to enlist in the military, to consent to sexual intercourse, to marry with parental consent, to marry without parental consent, to vote, to run for elected office, to legally purchase (or consume) alcohol and tobacco products, to purchase lottery tickets, or to obtain a driver's licence. The age of majority is when minors cease to legally be considered children and assume control over their persons, actions, and decisions, thereby terminating the legal control and responsibilities of their parents or guardians over and for them. Most countries set the age of majority at 18, though it varies by jurisdiction.

Cultural conventions

Many cultures have one or more coming of age birthdays:

* In Canada and the United States, families often mark a girl's 16th birthday with a "sweet sixteen" celebration – often represented in popular culture.
* In some Hispanic countries, as well as Brazil, the quinceañera (Spanish) or festa de quinze anos (Portuguese) celebration traditionally marks a girl's 15th birthday.
* In the Philippines, a coming-of-age party called a debut is held for young women on their 18th birthday and young men on their 21st birthday.
* In some Asian countries that follow the zodiac calendar, there is a tradition of celebrating the 60th birthday.
* In Korea, many celebrate a traditional ceremony of Baek-il (Feast for the 100th day) and Doljanchi (child's first birthday).
* In Japan, people celebrate a Coming of Age Day for all those who have turned 18.
* In British Commonwealth nations, cards from the Royal Family are sent to those celebrating their 100th and 105th birthday and every year thereafter.
* In Ghana, on their birthday, children wake up to a special treat called "oto" which is a patty made from mashed sweet potato and eggs fried in palm oil. Later they have a birthday party where they usually eat stew and rice and a dish known as "kelewele", which is fried plantain chunks.
* Jewish boys have a bar mitzvah on their 13th birthday. Jewish girls have a bat mitzvah on their 12th birthday, or sometimes on their 13th birthday in Reform and Conservative Judaism. This marks the transition where they become obligated in commandments from which they were previously exempted and are counted as part of the community.

Historically significant people's birthdays, such as national heroes or founders, are often commemorated by an official holiday marking the anniversary of their birth.

* Catholic saints are remembered by a liturgical feast on the anniversary of their "birth" into heaven, i.e. their day of death. The ancient Romans marked the anniversary of a temple dedication or other founding event as a dies natalis, a term still sometimes applied to the anniversary of an institution (such as a university).

An individual's Beddian birthday, named in tribute to firefighter Bobby Beddia, occurs during the year that their age matches the last two digits of the year they were born.
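A quick arithmetic sketch of that definition: if Y is the birth year, the Beddian birthday falls in the year Y + (Y mod 100), since that is when the person's age reaches the last two digits of Y. A minimal illustration (the birth years are arbitrary):

```python
def beddian_year(birth_year: int) -> int:
    """Year in which a person's age matches the last two digits
    of their birth year (once that year's birthday has passed)."""
    return birth_year + birth_year % 100

print(beddian_year(1953))  # 1953 + 53 = 2006, the year the person turns 53
print(beddian_year(2004))  # 2004 + 4  = 2008, the year the person turns 4
```

Note that anyone born in a year ending in 00 has their Beddian birthday in the year of their birth.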

In many cultures and jurisdictions, if a person's real birthday is unknown (for example, if they are an orphan), their birthday may be adopted or assigned to a specific day of the year, such as January 1. The birthday of Jesus is celebrated at Christmas. Racehorses are reckoned to become one year old in the year following their birth on January 1 in the Northern Hemisphere and August 1 in the Southern Hemisphere.

Traditions

In certain parts of the world, an individual's birthday is celebrated by a party featuring a specially made cake. It may be decorated with lettering and the person's age, or studded with the same number of lit candles as the age of the individual. The celebrated individual may make a silent wish and attempt to blow out the candles in one breath; if successful, superstition holds that the wish will be granted. In many cultures, the wish must be kept secret or it will not "come true".

Presents appropriate to their age are bestowed on the individual by the guests. Other birthday activities may include entertainment (sometimes by a hired professional such as a clown, magician, or musician) and a special toast or speech by the birthday celebrant. The last stanza of Patty Hill's and Mildred Hill's famous song "Good Morning to You" (unofficially titled "Happy Birthday to You") is typically sung by the guests at some point in the proceedings. In some countries, a piñata takes the place of a cake.

Name days

In some historically Roman Catholic and Eastern Orthodox countries, it is common to have a 'name day', otherwise known as a 'Saint's day'. It is celebrated in much the same way as a birthday, but is held on the official day of a saint who bears the same Christian name as the person celebrating; the difference is that a name day can be looked up in a calendar, and common name days (for example, John or Mary) are easy to remember. In pious traditions, the two were often made to coincide by giving a newborn the name of a saint celebrated on its day of confirmation, or more seldom on its birthday. Some people are given the name of the religious feast of their christening day or birthday, for example Noel or Pascal (French for Christmas and "of Easter"); as another example, Togliatti was given Palmiro as his first name because he was born on Palm Sunday.

Official birthdays

Some notables, particularly monarchs, have an official birthday on a fixed day of the year, which may not necessarily match the day of their birth, but on which celebrations are held. Examples are:

* Jesus Christ's traditional birthday is celebrated around the world as Christmas Eve or Christmas Day, on December 24 or 25, respectively. Because some Eastern churches use the Julian calendar, their December 25 falls on January 7 in the Gregorian calendar. These dates are traditional and have no connection with Jesus's actual birthday, which is not recorded in the Gospels.
* Similarly, the birthdays of the Virgin Mary and John the Baptist are liturgically celebrated on September 8 and June 24, especially in the Roman Catholic and Eastern Orthodox traditions (although for those Eastern Orthodox churches using the Julian calendar the corresponding Gregorian dates are September 21 and July 7 respectively). As with Christmas, the dates of these celebrations are traditional and probably have no connection with the actual birthdays of these individuals.
* The King's Official Birthday or Queen's Official Birthday in Australia, Fiji, Canada, New Zealand, and the United Kingdom.
* The Grand Duke's Official Birthday in Luxembourg is typically celebrated on June 23. This is different from the monarch's date of birth, April 16.
* Koninginnedag in the Kingdom of the Netherlands was traditionally celebrated on April 30. Queen Beatrix kept it on the birthday of her mother, the previous queen, to avoid the winter weather associated with her own birthday in January. The present monarch's birthday is April 27, and the holiday is now celebrated on that day, replacing the April 30 celebration of Koninginnedag.
* The previous Japanese emperor, Showa (Hirohito), had his birthday on April 29. After his death, the date was kept as a holiday, "Showa no Hi" (Showa Day). The holiday falls at the start of Golden Week, the string of holidays in late April and early May.
* Kim Il Sung and Kim Jong Il's birthdays are celebrated in North Korea as national holidays called the Day of the Sun and the Day of the Shining Star respectively.
* Washington's Birthday, commonly referred to as Presidents' Day, is a federal holiday in the United States that celebrates the birthday of George Washington. President Washington's birthday is observed on the third Monday of February each year. However, his actual birth date was either February 11 (Old Style), or February 22 (New Style).
* In India, every year, October 2, which marks the Birthday of Mahatma Gandhi, is declared a holiday. All liquor shops are closed across the country in honor of Gandhi, who did not consume liquor.
* Martin Luther King Jr. Day is a federal holiday in the United States marking the birthday of Martin Luther King Jr. It is observed on the third Monday of January each year, around the time of King's birthday, January 15.
* Mawlid is the official birthday of Muhammad and is celebrated on the 12th or 17th day of Rabi' al-awwal by adherents of Sunni and Shia Islam respectively. These are the two most commonly accepted dates of birth of Muhammad.





#2478 2025-03-03 00:03:00

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 50,097

Re: Miscellany

2377) Lexicographer

Gist

A lexicographer studies words and compiles the results into a dictionary. This is one of several words for a certain type of writer or editor. Just as a playwright writes plays and a poet writes poems, a lexicographer puts together dictionaries.

A lexicographer is a professional who writes and edits dictionaries: researching new words, establishing the meanings of existing words, and translating words and expressions, among other tasks.

Ever wonder who writes dictionaries? They're called lexicographers.

Lexicographers come up with definitions, determine parts of speech, give pronunciations, and sometimes provide example sentences. They need to do a lot of research to make sure they're defining a word correctly; dictionaries are books that people need to trust. If you love words, you might enjoy being a lexicographer.

Summary

Becoming a Lexicographer: A Comprehensive Guide

If you have a passion for language and a love for words, a career as a lexicographer might be the perfect fit for you. Lexicography is the art and science of compiling dictionaries, and it plays a crucial role in preserving and shaping language. In this comprehensive guide, we will explore the fascinating world of lexicography, from understanding its importance to finding your path in this unique and rewarding field.

Understanding Lexicography

Have you ever wondered who is responsible for creating the dictionaries you use every day? Lexicographers are linguistic experts who specialize in compiling, editing, and researching words and their meanings. They play a vital role in documenting and defining the ever-evolving English language.

Lexicography is a fascinating field that delves deep into the intricacies of language. It involves not only the compilation of words but also the exploration of their origins, meanings, and usage. Lexicographers are like detectives, uncovering the hidden stories behind the words we use.

The Role and Importance of a Lexicographer:


Lexicographers are guardians of our language, ensuring that words are accurately defined and recorded. They painstakingly research the etymology and usage of words, staying up to date with new words and phrases that enter the lexicon. Their work forms the foundation of our understanding and communication.

Lexicographers have a profound impact on society by shaping the way we communicate. They provide us with the tools to express ourselves effectively and to comprehend the thoughts and ideas of others. Without lexicographers, our language would lack structure and clarity.

The Evolution of Lexicography

Lexicography has come a long way from the days of traditional dictionaries. Modern lexicographers now harness the power of technology and the vastness of the internet to create comprehensive and dynamic resources. The advent of corpus linguistics has revolutionized the way lexicographers analyze language usage, providing valuable insights into the semantic and syntactic patterns of words.

Corpus linguistics involves the study of large collections of written and spoken texts, known as corpora. Lexicographers use these corpora to identify patterns of word usage, helping them determine the most common meanings and collocations of words. This data-driven approach ensures that dictionaries are not only descriptive but also reflective of how language is actually used.
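As a toy illustration of this data-driven approach (the mini-corpus below is invented; real lexicographic corpora run to billions of words), word frequencies and adjacent-word collocations can be counted directly:

```python
from collections import Counter
from itertools import pairwise  # requires Python 3.10+

# A tiny stand-in corpus; real corpora are tokenized far more carefully.
corpus = "the cat sat on the mat and the cat purred".split()

word_freq = Counter(corpus)          # how often each word occurs
bigrams = Counter(pairwise(corpus))  # adjacent word pairs, a crude collocation measure

print(word_freq.most_common(2))  # [('the', 3), ('cat', 2)]
print(bigrams.most_common(1))    # [(('the', 'cat'), 2)]
```

Frequency and collocation counts like these are what let lexicographers rank senses and choose representative example phrases.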

Furthermore, the internet has opened up new possibilities for lexicographers. Online dictionaries can be constantly updated, ensuring that they remain relevant in a rapidly changing world. Lexicographers can collaborate with experts from around the globe, enriching their understanding of regional variations and slang.

Educational Requirements for Lexicographers:


If you're considering building a career in lexicography, it's important to equip yourself with the necessary knowledge and skills. Lexicography is a fascinating field that involves the creation and compilation of dictionaries, so a solid educational background is essential for lexicographers.

While there is no specific degree in lexicography, a strong foundation in linguistics, English language, or a related field is highly beneficial. Many lexicographers hold degrees in linguistics, philology, or language studies. These degree programs provide a comprehensive understanding of language and its intricacies, which is crucial for lexicographers.

Studying linguistics allows aspiring lexicographers to delve deep into the structure and usage of language. It provides them with the tools to analyze and interpret various linguistic phenomena, such as phonetics, morphology, syntax, and semantics. By gaining a profound understanding of language, lexicographers can effectively create and define words in dictionaries.

Essential Skills for Lexicographers

In addition to linguistic and language studies, lexicographers must possess a range of essential skills and knowledge to excel in their profession. These skills include:

* Strong research abilities: Research skills are crucial for lexicographers as they often need to consult various sources to gather information about words and their meanings. They must be adept at conducting thorough research to ensure the accuracy and reliability of their dictionary entries.
* Attention to detail: Attention to detail is another vital skill for lexicographers. They must meticulously analyze words, their definitions, and their usage examples to ensure precision and clarity. Lexicographers must pay close attention to nuances, subtle differences in meaning, and the appropriate contexts in which words are used.
* Ability to work independently and collaboratively: Lexicographers also need to possess the ability to work both independently and collaboratively. While they may spend a significant amount of time working alone, conducting research and compiling dictionary entries, they also need to collaborate with other language experts, editors, and proofreaders to ensure the accuracy and quality of their work.

Moreover, proficiency in corpus linguistics and digital tools is becoming increasingly important for lexicographers. Corpus linguistics involves the study of large collections of texts, which can provide valuable insights into word usage and frequency. Lexicographers need to be skilled in utilizing digital tools and software to analyze corpus data and extract relevant information for their dictionaries.

In conclusion, a career in lexicography requires a strong educational foundation in linguistics, English language, or a related field. Lexicographers must possess a deep understanding of language structure and usage, strong research skills, attention to detail, the ability to work independently and collaboratively, and proficiency in corpus linguistics and digital tools. By acquiring these skills and knowledge, aspiring lexicographers can embark on a fulfilling and rewarding career in the world of dictionaries.

The Lexicographer's Toolbox

A lexicographer relies on various tools and resources to carry out their work effectively, such as:

* Dictionaries and thesauri: These references provide a wealth of information that aids in the process of word analysis and definition. By studying different types of dictionaries, such as monolingual, bilingual, and specialized dictionaries, lexicographers gain valuable insights into the structure, meaning, and usage of words.
* Corpus linguistics: Corpus linguistics involves analyzing large collections of texts to identify patterns and trends in language usage. By utilizing corpus linguistics, lexicographers can accurately define words and capture their nuances. They can observe how words are used in different contexts and determine their frequency of occurrence.
* Etymology: Etymology is the study of the origin and history of words. By understanding the etymology of a word, lexicographers can trace its roots and uncover its evolution over time. This knowledge adds depth and context to the definitions provided in dictionaries.
* Linguistic databases and software programs: These tools assist in organizing and managing the vast amount of data that lexicographers work with. They provide efficient ways to search, cross-reference, and update word entries.

Furthermore, lexicographers often collaborate with other language experts, such as linguists and subject specialists, to ensure the accuracy and comprehensiveness of their work. This collaborative approach allows for a more thorough analysis of words and their meanings.

In conclusion, the lexicographer's toolbox is filled with a variety of tools and resources that enable them to carry out their work effectively. From dictionaries and thesauri to corpus linguistics and etymology, each tool plays a crucial role in the process of word analysis and definition. With the help of these tools and the expertise of their colleagues, lexicographers strive to provide accurate and comprehensive language resources for users around the world.

The Process of Lexicography:


Lexicography is a multi-step process that entails meticulous research, analysis, and writing. It is a fascinating field that delves deep into the intricacies of language and aims to capture the essence of words and their meanings.

Let's explore the various steps involved in the process of lexicography in more detail.

Word Collection and Analysis

Lexicographers continually update and expand word databases by monitoring language use in various sources, such as literature, media, and online platforms. They immerse themselves in the vast ocean of words, carefully observing how language evolves and adapts to the ever-changing world.

By meticulously analyzing the collected data, lexicographers identify new words, word senses, and changes in usage. They unravel the intricate tapestry of language, uncovering hidden gems and linguistic nuances that shape our communication.

Every word is like a puzzle piece, waiting to be discovered and understood. Lexicographers meticulously examine each word, its origins, and its various meanings. They explore the historical context, cultural connotations, and semantic relationships that give words their unique flavor.

Definition Writing and Editing

Defining words may seem straightforward, but lexicographers face the challenge of encapsulating the full meaning of a word in a concise and clear manner. They embark on a quest to distill the essence of a word, to capture its true essence and convey it to the reader.

Lexicographers carefully craft definitions that not only provide a clear understanding of a word but also evoke a sense of its richness and depth. They strive to strike a delicate balance between precision and accessibility, ensuring that definitions are accurate, comprehensive, and reflective of the word's current usage.

Editing plays a crucial role in the lexicographic process. Lexicographers meticulously review and refine their definitions, polishing them to perfection. They scrutinize each word choice, sentence structure, and example usage, striving for clarity and coherence.

Career Paths in Lexicography

Lexicographers have diverse career opportunities in both traditional and digital domains.

Lexicography, the art and science of compiling dictionaries, offers a wide range of exciting career paths for language enthusiasts. Whether you have a passion for traditional print publications or a knack for digital innovation, the field of lexicography has something for everyone.

Opportunities in Publishing Houses

Many lexicographers find work in publishing houses, where they contribute to the creation of dictionaries, language reference materials, and educational resources. These roles involve collaborating with editors, researchers, and other linguistic experts.

Working in a publishing house allows lexicographers to immerse themselves in the world of words. They meticulously research and analyze language usage, etymology, and semantic nuances to ensure the accuracy and relevance of the content they create. Lexicographers also work closely with editors to refine definitions, examples, and usage notes, ensuring that the final product meets the needs of the target audience.

Lexicographers in publishing houses often specialize in specific subject areas, such as medical, legal, or technical terminology. This specialization allows them to develop a deep understanding of the terminology used in these fields and to create comprehensive dictionaries tailored to the needs of professionals and students.

The Digital World of Lexicography

With the rise of digital platforms and the internet, lexicographers can now contribute to online dictionaries, language-learning apps, and text analysis tools. These roles require expertise in data management and digital lexicography.

In the digital realm, lexicographers harness the power of technology to create dynamic and interactive language resources. They collaborate with software developers, user experience designers, and linguists to design user-friendly interfaces and develop innovative features. Lexicographers in the digital world also work closely with data scientists to analyze large volumes of linguistic data and extract valuable insights.

Lexicographers in digital lexicography play a crucial role in ensuring that online dictionaries and language-learning apps provide accurate and up-to-date information. They continuously update and expand the content to reflect the evolving nature of language, incorporating new words, idioms, and expressions that emerge in various contexts.

Challenges and Rewards of Lexicography

Embarking on a career in lexicography presents both challenges and rewards.

Lexicography, the art and science of compiling dictionaries, is a fascinating field that requires a deep understanding of language and its complexities. It is a profession that demands dedication, meticulousness, and a passion for words. Let's explore some of the challenges and rewards that come with being a lexicographer.

Navigating the Complexities of Language

Language is a living entity, constantly evolving and adapting to the needs and preferences of its speakers. Lexicographers must stay up to date with emerging words, slang, and changes in usage. This requires continuous learning and adapting to the ever-changing linguistic landscape.

Imagine being in the shoes of a lexicographer, delving into the depths of language to uncover new words and phrases emerging from popular culture. It's like embarking on a linguistic treasure hunt, where the reward is a comprehensive understanding of the ever-evolving lexicon.

Lexicographers must also grapple with the intricacies of regional dialects, jargon, and specialized terminology. They meticulously research and analyze these linguistic nuances to ensure that their dictionaries accurately reflect the diverse ways in which language is used.

The Satisfaction of Shaping Language Use

Lexicographers have the privilege of shaping language usage through their work. By providing accurate and comprehensive definitions, they contribute to effective communication and enable the understanding of different cultures and perspectives.

Think about it - every time you look up a word in a dictionary, you are relying on the expertise of lexicographers who have carefully curated and crafted the definitions. They play a crucial role in ensuring that words are understood in their proper context, preventing miscommunication and promoting clarity.

Lexicographers also have the opportunity to document the evolution of language. They observe how words change in meaning over time and capture these shifts in their dictionaries. This not only helps us understand the historical development of language but also provides valuable insights into the cultural and social changes that shape our world. Furthermore, lexicographers are often called upon to resolve disputes over language usage. They act as linguistic referees, providing authoritative guidance on matters of grammar, spelling, and pronunciation.

Tips for Aspiring Lexicographers

Are you considering a career in lexicography? Here are some tips to help you pave your way in this fascinating field.

Gaining Relevant Experience

Internships and volunteering opportunities with publishing houses, linguistic research institutions, or language technology companies can provide valuable hands-on experience and help you build connections within the industry.

Networking and Professional Development

Attend career events, workshops, and conferences related to lexicography and linguistics. Networking with professionals in the field can open doors to job opportunities and collaborations.

Bottom Line

Embarking on a career in lexicography requires dedication, passion, and a deep appreciation for language. By following this comprehensive guide, you'll be well on your way to becoming a skilled lexicographer and contributing to the world of words.

Details

Lexicography is the study of lexicons and the art of compiling dictionaries. It is divided into two separate academic disciplines:

* Practical lexicography is the art or craft of compiling, writing and editing dictionaries.
* Theoretical lexicography is the scholarly study of semantic, orthographic, syntagmatic and paradigmatic features of lexemes of the lexicon (vocabulary) of a language, developing theories of dictionary components and structures linking the data in dictionaries, the needs for information by users in specific types of situations, and how users may best access the data incorporated in printed and electronic dictionaries. This is sometimes referred to as "metalexicography".

There is some disagreement on the definition of lexicology, as distinct from lexicography. Some use "lexicology" as a synonym for theoretical lexicography; others use it to mean a branch of linguistics pertaining to the inventory of words in a particular language.

A person devoted to lexicography is called a lexicographer and is, according to a jest of Samuel Johnson, a "harmless drudge".

Focus

Generally, lexicography focuses on the design, compilation, use and evaluation of general dictionaries, i.e. dictionaries that provide a description of the language in general use. Such a dictionary is usually called a general dictionary or LGP dictionary (Language for General Purpose). Specialized lexicography focuses on the design, compilation, use and evaluation of specialized dictionaries, i.e. dictionaries that are devoted to a (relatively restricted) set of linguistic and factual elements of one or more specialist subject fields, e.g. legal lexicography. Such a dictionary is usually called a specialized dictionary or Language for specific purposes dictionary and following Nielsen 1994, specialized dictionaries are either multi-field, single-field or sub-field dictionaries.

It is now widely accepted that lexicography is a scholarly discipline in its own right and not a sub-branch of applied linguistics, as the chief object of study in lexicography is the dictionary.

Lexicography is the practice of creating books, computer programs, or databases that reflect lexicographical work and are intended for public use. These include dictionaries and thesauri which are widely accessible resources that present various aspects of lexicology, such as spelling, pronunciation, and meaning.

Lexicographers are tasked with defining simple words as well as figuring out how compound or complex words or words with many meanings can be clearly explained. They also make decisions regarding which words should be kept, added, or removed from a dictionary. They are responsible for arranging lexical material (usually alphabetically) to facilitate understanding and navigation.

Etymology

Coined in English in 1680, the word "lexicography" derives from the Greek λεξικογράφος (lexikographos), "lexicographer", from λεξικόν (lexikon), neuter of λεξικός (lexikos), "of or for words", from λέξις (lexis), "speech, word" (in turn from λέγω (lego), "to say, to speak"), and γράφω (grapho), "to scratch, to inscribe, to write".

Aspects

Practical lexicographic work involves several activities, and the compilation of well-crafted dictionaries requires careful consideration of all or some of the following aspects:

* profiling the intended users (i.e. linguistic and non-linguistic competences) and identifying their needs
* defining the communicative and cognitive functions of the dictionary
* selecting and organizing the components of the dictionary
* choosing the appropriate structures for presenting the data in the dictionary (i.e. frame structure, distribution structure, macro-structure, micro-structure and cross-reference structure); a sketch of a possible entry micro-structure follows this list
* selecting words and affixes for systematization as entries
* selecting collocations, phrases and examples
* choosing lemma forms for each word or part of word to be lemmatized
* defining words
* organizing definitions
* specifying pronunciations of words
* labeling definitions and pronunciations for register and dialect, where appropriate
* selecting equivalents in bi- and multi-lingual dictionaries
* translating collocations, phrases and examples in bi- and multilingual dictionaries
* designing the best way in which users can access the data in printed and electronic dictionaries
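As a rough illustration of what an entry micro-structure can hold (the field names below are a hypothetical simplification for this sketch, not a lexicographic standard):

```python
from dataclasses import dataclass, field

@dataclass
class Sense:
    definition: str
    examples: list[str] = field(default_factory=list)
    register: str = ""  # e.g. "informal", "archaic"; empty if unmarked

@dataclass
class Entry:
    lemma: str          # the lemmatized headword
    part_of_speech: str
    pronunciation: str  # e.g. an IPA transcription
    senses: list[Sense] = field(default_factory=list)
    cross_references: list[str] = field(default_factory=list)

entry = Entry(lemma="drudge", part_of_speech="noun", pronunciation="/drʌdʒ/",
              senses=[Sense("a person made to do hard, menial, or dull work",
                            examples=["a harmless drudge"])])
```

The macro-structure would then be the ordered collection of such entries, and cross_references model one form of cross-reference structure.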

One important goal of lexicography is to keep the lexicographic information costs incurred by dictionary users as low as possible. Nielsen (2008) suggests relevant aspects for lexicographers to consider when making dictionaries as they all affect the users' impression and actual use of specific dictionaries.

Theoretical lexicography concerns the same aspects as lexicography, but aims to develop principles that can improve the quality of future dictionaries, for instance in terms of access to data and lexicographic information costs. Several perspectives or branches of such academic dictionary research have been distinguished:

* 'dictionary criticism': evaluating the quality of one or more dictionaries, e.g. by means of reviews (see Nielsen 1999)
* 'dictionary history': tracing the traditions of a type of dictionary or of lexicography in a particular country or language
* 'dictionary typology': classifying the various genres of reference works, such as dictionary versus encyclopedia, monolingual versus bilingual dictionary, general versus technical or pedagogical dictionary
* 'dictionary structure': formatting the various ways in which the information is presented in a dictionary
* 'dictionary use': observing the reference acts and skills of dictionary users
* 'dictionary IT': applying computer aids to the process of dictionary compilation

One important consideration is the status of 'bilingual lexicography', or the compilation and use of the bilingual dictionary in all its aspects (see e.g. Nielsen 1994). In spite of a relatively long history of this type of dictionary, it is often said to be less developed in a number of respects than its unilingual counterpart, especially in cases where one of the languages involved is not a major language. Not all genres of reference works are available in interlingual versions, e.g. LSP, learners' and encyclopedic types, although sometimes these challenges produce new subtypes, e.g. 'semi-bilingual' or 'bilingualised' dictionaries such as Hornby's (Oxford) Advanced Learner's Dictionary English-Chinese, which have been developed by translating existing monolingual dictionaries.

Additional Information

Lexicography is the compiling, editing, or writing of a dictionary. It is distinct from lexicology, the study of the words in a given language, including their origins, evolution, meanings, usage, and contexts.

History of lexicography

The history of lexicographical practices can be traced back to about 3200 BCE, when Sumerians began compiling word lists in cuneiform writing on clay tablets to teach literacy. The history of English lexicography dates back to the expansion of Latin Christianity into England (beginning at the end of the 6th century CE), when English-speaking priests and monks needed to learn Latin to read the Bible and conduct services in the liturgical language. In 1218 John of Garland, an English-born Parisian teacher, coined the word dictionarius (Latin: “of or pertaining to words”) as a title for an elementary Latin textbook. The first examples of modern, comprehensive English dictionaries came in the 18th century. A Dictionary of the English Language, Samuel Johnson’s seminal work in precision of definition and organization, was published in 1755. It included quotations and prescriptive commentaries about word usage.

Practical and theoretical lexicography

Lexicography is divided into two fields: practical and theoretical. Practical lexicography is concerned with compiling, writing, and editing dictionaries. Practical lexicographers focus on creating user-friendly dictionaries with accurate, up-to-date, and comprehensive information. Theoretical lexicography, also called metalexicography, is concerned with dictionary research. Theoretical lexicographers focus on researching structural and semantic relationships among words in current dictionaries to improve information organization and structure in future dictionaries. They often focus their research on specific types of dictionaries or elements of a dictionary’s compilation.

Types of dictionaries

The different types of dictionaries are vast and varied. In addition to what are considered “general purpose” dictionaries and language learners’, or bilingual, dictionaries, there are specialized dictionaries, including etymological, pronunciation, and usage dictionaries. Some dictionaries focus on the vocabulary of specific fields of knowledge—e.g., biology, psychology, law, medicine, religion, literature, economics, and fine arts.

Practical lexicographical processes

Lexicographers continually track language by reading books, newspapers, industry-specific journals, online corpora, social media, and any text in which they might discover a new word or a new use for an already recorded word. When lexicographers encounter a new word or usage, they create a citation in a searchable database, noting the word’s context and source. Then they search other databases of words from numerous different sources, including everything from articles to popular literature to song lyrics to speeches. Using these databases, they determine if a word meets certain criteria for inclusion in a dictionary—such as frequent, widespread, and meaningful use. If a word meets the criteria, lexicographers draft a definition for the word and forward it to a series of editors for review. Once the word and definition are approved, they are entered into the system, reviewed by a copy editor, proofread, and added to a dictionary.
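A minimal sketch of that pipeline in Python (the record fields and thresholds are hypothetical, not those of any actual publisher):

```python
from dataclasses import dataclass

@dataclass
class Citation:
    word: str
    context: str  # the sentence in which the word was found
    source: str   # e.g. a newspaper article, song lyric, or speech

def meets_inclusion_criteria(citations: list[Citation],
                             min_uses: int = 50,
                             min_sources: int = 10) -> bool:
    """Hypothetical test for 'frequent, widespread' use: enough
    citations, drawn from enough distinct sources."""
    distinct_sources = {c.source for c in citations}
    return len(citations) >= min_uses and len(distinct_sources) >= min_sources
```

A word passing such a check would then move on to definition drafting and editorial review, as described above.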

Lexicography in the digital age

The shift to digital dictionaries has added a new dimension to the way lexicographers write definitions and structure digital dictionary entries. Web analytics allow lexicographers to see which words users look up more frequently, and the lexicographers can spend more time revising the definitions of those words. Because digital dictionaries are interactive, when lexicographers write definitions, they consider where to place explanatory hyperlinks and how to structure entries for online presentation, sometimes breaking up paragraphs into individual lines.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2479 2025-03-04 00:00:41

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 50,097

Re: Miscellany

2378) River Congo

Gist

The mighty Congo is Africa’s second-longest river after the Nile; in terms of flow, it’s second only to the Amazon. With a basin spanning most of the Democratic Republic of Congo (DRC) and parts of six neighboring countries, the river has been a vital lifeline for centuries, forming (with its tributaries) a vast inland waterway that allows access to places still inaccessible by road.

The river also nourishes immense biodiversity: It’s home to at least 700 fish species, and it supports the world’s second-largest rainforest. It also empties water and sediment into one of the largest carbon sinks in the world, the Congo Plume in the Atlantic.

The Congo is the deepest river in the world. Its headwaters are in the north-east of Zambia, between Lake Tanganyika and Lake Nyasa (Malawi), 1,760 metres above sea level; it flows into the Atlantic Ocean.

Summary:

Introduction

The Congo River flows through the heart of Africa for about 2,900 miles (4,700 kilometers). It is the second longest river in Africa, after the Nile.

The Congo Basin is the area of land that is drained by the river. The basin includes all of the Democratic Republic of the Congo. It also covers parts of the Republic of the Congo, the Central African Republic, Zambia, Angola, Cameroon, and Tanzania.

Geography

The farthest source of the Congo is a river in the highlands of northeastern Zambia. But the Congo’s main stream begins in the southeastern part of the Democratic Republic of the Congo. From there, the river makes a giant arc across central Africa. It flows to the northwest, west, and southwest before reaching the west coast. It empties into the Atlantic Ocean at the town of Banana, in the Democratic Republic of the Congo.

It is impossible to travel the entire length of the river by boat. Several waterfalls block the western end of the river’s course. Therefore, river travel begins and ends farther upstream, at the city of Kinshasa.

Plants and Animals

The Congo River basin contains the second largest tropical rainforest. Only the Amazon rainforest is larger. Savannas, or tropical grasslands, border the Congo rainforest.

Many types of birds live near the river, and many types of fish live in it. Crocodiles, water snakes, turtles, and hippopotamuses also swim in the Congo’s waters.

Economy

Fishing in the Congo River is an important activity. The river also is a source of electric power, produced by dams. In addition, the Congo River and its tributaries form a large transportation network. Many port cities and towns are located along the banks. The capital cities of Kinshasa and Brazzaville sit on opposite sides of a wide part of the river, called Malebo Pool.

Details

The Congo River, formerly also known as the Zaire River, is the second-longest river in Africa, shorter only than the Nile, as well as the third-largest river in the world by discharge volume, following the Amazon and Ganges rivers. It is the world's deepest recorded river, with measured depths of around 220 m (720 ft). The Congo–Lualaba–Luvua–Luapula–Chambeshi River system has an overall length of 4,700 km (2,900 mi), which makes it the world's ninth-longest river. The Chambeshi is a tributary of the Lualaba River, and Lualaba is the name of the Congo River upstream of Boyoma Falls, extending for 1,800 km (1,100 mi).

Measured along with the Lualaba, the main tributary, the Congo River has a total length of 4,370 km (2,720 mi). It is the only major river to cross the equator twice. The Congo Basin has a total area of about 4,000,000 km² (1,500,000 sq mi), or 13% of the entire African landmass.

Name

The name Congo/Kongo originates from the Kingdom of Kongo once located on the southern bank of the river. The kingdom in turn was named after the indigenous Bantu Kongo people, known in the 17th century as "Esikongo". South of the Kingdom of Kongo proper lay the similarly named Kakongo kingdom, mentioned in 1535. Abraham Ortelius labelled "Manicongo" as the city at the mouth of the river in his world map of 1564. The tribal names in Kongo possibly derive from a word for a public gathering or tribal assembly. The modern name of the Kongo people or Bakongo was introduced in the early 20th century.

The name Zaire is from a Portuguese adaptation of a Kikongo word, nzere ("river"), a truncation of nzadi o nzere ("river swallowing rivers"). The river was known as Zaire during the 16th and 17th centuries; Congo seems to have replaced Zaire gradually in English usage during the 18th century, and Congo is the preferred English name in 19th-century literature, although references to Zahir or Zaire as the name used by the inhabitants remained common. The Democratic Republic of the Congo and the Republic of the Congo are named after it, as was the previous Republic of the Congo which had gained independence in 1960 from the Belgian Congo. The Republic of Zaire during 1971–1997 was also named after the river's name in French and Portuguese.

The Congo's drainage basin covers 4,014,500 km² (1,550,000 sq mi), an area nearly equal to that of the European Union. The Congo's discharge at its mouth ranges from 23,000 to 75,000 m³/s (810,000 to 2,650,000 cu ft/s), with an average of 41,000 m³/s (1,400,000 cu ft/s). The river transports about 86 million tonnes of suspended sediment to the Atlantic Ocean annually, plus bedload amounting to a further 6% of that figure.

The river and its tributaries flow through the Congo rainforest, the second largest rainforest area in the world, after the Amazon rainforest in South America. The river also has the second-largest flow in the world, behind the Amazon; the second-largest drainage basin of any river, behind the Amazon; and is one of the deepest rivers in the world, at depths greater than 220 m (720 ft). Because its drainage basin includes areas both north and south of the Equator, its flow is stable, as there is always at least one part of the river experiencing a rainy season.

The sources of the Congo are in the highlands and mountains of the East African Rift, as well as Lake Tanganyika and Lake Mweru, which feed the Lualaba River, which then becomes the Congo below Boyoma Falls. The Chambeshi River in Zambia is generally taken as the source of the Congo in line with the accepted practice worldwide of using the longest tributary, as with the Nile River.

The Congo flows generally toward the northwest from Kisangani just below the Boyoma Falls, then gradually bends southwestward, passing by Mbandaka, joining with the Ubangi River and running into the Pool Malebo (Stanley Pool). Kinshasa (formerly Léopoldville) and Brazzaville are on opposite sides of the river at the Pool, where the river narrows and falls through a number of cataracts in deep canyons (collectively known as the Livingstone Falls), running by Matadi and Boma, and into the sea at Muanda.

The Lower Congo constitutes the "lower" part of the great river, that is, the section from the river mouth on the Atlantic coast to the twin capitals of Brazzaville and Kinshasa. In this section of the river there are two significant tributaries, both on the left or south side. The Kwilu River originates in the hills near the Angolan border and enters the Congo some 100 km upstream from Matadi. The other is the Inkisi River, which flows in a northerly direction from the Uíge Province in Angola to its confluence with the Congo at Zongo, some 80 km (50 mi) downstream from the twin capitals. Because of the vast number of rapids, in particular the Livingstone Falls, riverboats cannot operate continuously over this section of the river.

Additional Information - I

Congo River is a river in west-central Africa. With a length of 2,900 miles (4,700 km), it is the continent’s second longest river, after the Nile. It rises in the highlands of northeastern Zambia between Lakes Tanganyika and Nyasa (Malawi) as the Chambeshi River at an elevation of 5,760 feet (1,760 metres) above sea level and at a distance of about 430 miles (700 km) from the Indian Ocean. Its course then takes the form of a giant counterclockwise arc, flowing to the northwest, west, and southwest before draining into the Atlantic Ocean at Banana (Banane) in the Democratic Republic of the Congo. Its drainage basin, covering an area of 1,335,000 square miles (3,457,000 square km), takes in almost the entire territory of that country, as well as most of the Republic of the Congo, the Central African Republic, eastern Zambia, and northern Angola and parts of Cameroon and Tanzania.

With its many tributaries, the Congo forms the continent’s largest network of navigable waterways. Navigability, however, is limited by an insurmountable obstacle: a series of 32 cataracts over the river’s lower course, including the famous Inga Falls. These cataracts render the Congo unnavigable between the seaport of Matadi, at the head of the Congo estuary, and Malebo Pool, a lakelike expansion of the river. It was on opposite banks of Malebo Pool—which represents the point of departure of inland navigation—that the capitals of the former states of the French Congo and the Belgian Congo were founded: on the left bank Kinshasa (formerly Léopoldville), now the capital of the Democratic Republic of the Congo, and on the right bank Brazzaville, now the capital of the Republic of the Congo.

The Amazon and the Congo are the two great rivers of the world that flow out of equatorial zones where heavy rainfall occurs throughout all or almost all of the year. Upstream from Malebo Pool, the Congo basin receives an average of about 60 inches (1,500 mm) of rain a year, of which more than one-fourth is discharged into the Atlantic. The drainage basin of the Congo is, however, only about half the size of that of the Amazon, and the Congo’s rate of flow—1,450,000 cubic feet (41,000 cubic metres) per second at its mouth—is considerably less than the Amazon’s flow of more than 6,180,000 cubic feet (175,000 cubic metres) per second.
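
As a quick arithmetic check on the flow figures above (a sketch using the standard conversion 1 m³ ≈ 35.31 cu ft):

$$41{,}000\ \mathrm{m^3/s} \times 35.31\ \mathrm{ft^3/m^3} \approx 1{,}448{,}000\ \mathrm{ft^3/s}, \qquad 175{,}000\ \mathrm{m^3/s} \times 35.31\ \mathrm{ft^3/m^3} \approx 6{,}179{,}000\ \mathrm{ft^3/s},$$

which agree with the rounded values of 1,450,000 and 6,180,000 cubic feet per second quoted in the paragraph.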

While the Chambeshi River, as the remotest source, may form the Congo’s original main stream in terms of the river’s length, it is another tributary—the Lualaba, which rises near Musofi in southeastern Democratic Republic of the Congo—that carries the greatest quantity of water and thus may be considered as forming the Congo’s original main stream in terms of water volume.

When the river first became known to Europeans at the end of the 15th century, they called it the Zaire, a corruption of a word that is variously given as nzari, nzali, njali, nzaddi, and niadi and that simply means “river” in local African languages. It was only in the early years of the 18th century that the river was first called the “Rio Congo,” a name taken from the kingdom of Kongo that had been situated along the lower course of the river. During the period (1971–97) when the Democratic Republic of the Congo was called Zaire, the government also renamed the river the Zaire. Even during that time, however, the river continued to be known throughout the world as the Congo. To the literary-minded the river is evocative of the famous 1902 novella “Heart of Darkness” by Joseph Conrad. His book conjured up an atmosphere of foreboding, treachery, greed, and exploitation. Today, however, the Congo appears as the key to the economic development of the central African interior.

Additional Information - II

The Congo River (also known as Zaire River) is the largest river in Africa by water volume. Its overall length of 4,700 km (2,922 miles) makes it the second longest in Africa (after the Nile). The river and its tributaries flow through the second largest rain forest area in the world, second only to the Amazon Rainforest in South America.

The river also has the second-largest flow in the world, behind the Amazon, and the second-largest watershed of any river, again trailing the Amazon. Its watershed is a little larger than that of the Mississippi River. Because large parts of the river basin sit north and south of the equator, its flow is steady, as there is always at least one part of the basin experiencing a rainy season. The Congo gets its name from the old Kingdom of Kongo, which was at the mouth of the river. The Democratic Republic of the Congo and the Republic of the Congo, both countries sitting along the river's banks, are named after it. From 1971 to 1997, the Democratic Republic of the Congo was called Zaire, and its government called the river the Zaire River.

The sources of the Congo are in the Highlands and mountains of the East African Rift, as well as Lake Tanganyika and Lake Mweru, which feed the Lualaba River. This then becomes the Congo below Boyoma Falls. The Chambeshi River in Zambia is usually taken as the source of the Congo because of the accepted practice worldwide of using the longest tributary, as with the Nile River.

The Congo flows mostly west from Kisangani just below the falls, then slowly bends southwest, passing by Mbandaka, joining with the Ubangi River, and running into the Pool Malebo (Stanley Pool). Kinshasa (formerly Léopoldville) and Brazzaville are on opposite sides of the river at the Pool, where the river narrows and falls through a few cataracts in deep canyons (collectively known as the Livingstone Falls), running by Matadi and Boma, and into the sea at the small town of Muanda.

History of exploration

The mouth of the Congo was visited by Europeans in 1482, by the Portuguese Diogo Cão, and in 1816, by a British expedition under James Hingston Tuckey that went up the river as far as Isangila. Henry Morton Stanley was the first European to travel along the whole river.

Economic importance

Although the Livingstone Falls stop ships coming in from the sea, most of the Congo above them is navigable in sections, especially between Kinshasa and Kisangani. Railways bypass the three major falls that interrupt navigation, and much of the trade of central Africa passes along the river. Goods include copper, palm oil, sugar, coffee, and cotton. The river is also a valuable source of hydroelectric power, and the Inga Dams have been built below Pool Malebo.

In February 2005, South Africa's state-owned power company, Eskom, announced a proposal to increase the amount of electricity the Inga site can generate through improvements and the construction of a new hydroelectric dam. The project would bring the maximum output of the site to 40 GW, roughly twice that of China's Three Gorges Dam.

Geological history

In the Mesozoic Era, before continental drift opened the South Atlantic Ocean, the Congo was the upper part of a river about 12,000 km (7,500 miles) long that flowed west across the parts of Gondwanaland now called Africa and South America.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2480 2025-03-05 00:02:22

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 50,097

Re: Miscellany

2379) River Zambezi

Gist

The Zambezi (meaning “Great River” in the language of the Tonga people) includes along its course the Victoria Falls, one of the world's greatest natural wonders, and the Kariba and Cahora Bassa dams, two of Africa's largest hydroelectric projects.

The 2,574 km (1,599 mi) river rises in Zambia and flows through eastern Angola, along the north-eastern border of Namibia and the northern border of Botswana, then along the border between Zambia and Zimbabwe to Mozambique, where it crosses the country to empty into the Indian Ocean.

Summary

Zambezi River, river draining a large portion of south-central Africa. Together with its tributaries, it forms the fourth largest river basin of the continent. The river flows eastward for about 2,200 miles (3,540 kilometres) from its source on the Central African Plateau to empty into the Indian Ocean. With its tributaries, it drains an area of more than 500,000 square miles (1,300,000 square kilometres). The Zambezi (meaning “Great River” in the language of the Tonga people) includes along its course the Victoria Falls, one of the world’s greatest natural wonders, and the Kariba and Cahora Bassa dams, two of Africa’s largest hydroelectric projects. The river either crosses or forms the boundaries of six countries—Angola, Zambia, Namibia, Botswana, Zimbabwe, and Mozambique—and the use of its waters has been the subject of a series of international agreements.

Physical features:

Physiography

The Zambezi rises out of a marshy bog near Kalene Hill, Zambia, about 4,800 feet (1,460 metres) above sea level, and flows some 20 miles before entering Angola, through which it runs for more than 175 miles. In this first section of its course, the river is met by more than a dozen tributaries of varying sizes. Shortly after reentering Zambia, the river flows over the Chavuma Falls and enters a broad region of hummocky, sand-covered floodplains, the largest of which is the Barotse, or Zambezi, Plain. The region is inundated during the summer floods, when it receives fertile alluvial soils. The main tributaries intersecting the river along the plains are the Kabompo River from the east and the larger Lungué-Bungo (Lungwebungo) River from the west.

The Zambezi then enters a stretch of rapids that extends from Ngonye (Sioma) Falls south to the Katima Mulilo Rapids, after which for about 80 miles it forms the border between Zambia to the north and the eastern Caprivi Strip—an extension of Namibia—to the south. In this stretch the river meanders through the broad grasslands of the Sesheke Plain until it is joined by the Cuando (Kwando) River. Near Kazungula, Zambia, the river, after flowing past Botswana territory to the south, turns almost due east and forms the frontier between Zambia and Zimbabwe. From the Cuando confluence to the Victoria Falls, the Zambezi varies considerably in width, from open reaches with sand islands to stretches of rapids through narrow channels separated by numerous rock islands.

The Victoria Falls mark the end of the upper course of the Zambezi, as its waters tumble with a thunderous roar and an enormous cloud of spray. The area around the falls was once covered by a thick layer of lava, which as it cooled formed wide cracks, or joints, that became filled with softer sediments. As the Zambezi cut its present valley it encountered one of these joints, eroded the sediment, and created a trench, eventually forcing a gap at the lower end of the trench that quickly widened into a gorge. The force of the water also created a second gap at the upper end of the trench that gradually diverted the river until the trench itself was left dry. As the river cut backward it repeated the process, scouring eight successive waterfalls in the past half million years.

The Zambezi’s middle course extends about 600 miles from Victoria Falls to the eastern end of Lake Cahora Bassa in Mozambique. It continues to form the boundary between Zambia and Zimbabwe until it crosses the Mozambique border at Luangwa. Below the falls a gorge some 60 miles long has been formed by the trench-scouring process, through which the river descends in a series of rapids. Just upstream of Lake Kariba the river valley widens and is contained by escarpments nearly 2,000 feet high. The middle Zambezi is notable for the two man-made lakes, Kariba and Cahora Bassa, that constitute much of this stretch of the river. Between the two lakes the Zambezi trends northeast for nearly 40 miles before it turns east below the confluence with the Kafue River, the Zambezi’s largest tributary. In this section the river rushes through two rocky, narrow gorges, the first just below the Kariba Dam and the other above the confluence with the Luangwa River.


At the dam at the eastern end of Lake Cahora Bassa, the Zambezi begins its lower course, during which it descends from the Central African Plateau to the coastal plain. At first the hilly country is replaced by flat areas at the head of the Tete Basin, and the river becomes more placid. About 40 miles downstream the river has cut the Lupata Gorge through a range of hills, where it emerges onto the Mozambique Plain and occupies a broad valley that spreads out in places to a width of three to five miles. Near Vila Fontes the river receives its last great tributary, the Shire River, which drains Lake Nyasa (Malaŵi) some 210 miles to the north.

At its mouth the Zambezi splits into a wide, flat, and marshy delta obstructed by sandbars. There are two main channels, each again divided into two. The wider, eastern channel splits into the Muselo River to the north and the main mouth of the Zambezi to the south. The western channel forms both the Inhamissengo River and the smaller Melambe River. North of the main delta the Chinde River separates from the Zambezi’s main stream to form a navigable channel leading to a shallow harbour.

Hydrology

The Zambezi, according to measurements taken at Maramba (formerly Livingstone), Zambia, experiences its maximum flow in March or April. In October or November the discharge diminishes to less than 10 percent of the maximum. The annual average flow reaches about 247,000 cubic feet (7,000 cubic metres) per second. Measurements taken at Kariba Dam reflect the same seasonal pattern; the highest flood recorded there was in March 1958, when the flow reached 565,000 cubic feet per second.

Climate of the Zambezi River

The Zambezi River lies within the tropics. The upper and middle course of the river is on an upland plateau, and temperatures, modified by altitude, are relatively mild, generally between 64° and 86° F (18° and 30° C). The winter months (May to July) are cool and dry, with temperatures averaging 68° F (20° C). Between August and October there is a considerable rise in average temperatures, particularly in the river valley itself; just before the rains begin in October temperatures there become excessively hot, often reaching 104° F (40° C). The rainy season lasts from November to April. Rain falls in short, intense thundershowers—the rate sometimes reaching 6 inches (150 millimetres) per hour—with skies clearing between downpours. In these months the upper Zambezi receives nearly all its total rainfall, and this accounts for the great variation in the flow of the river throughout the year. In all, the upper and middle Zambezi valley receives 22 to 30 inches of rain per year. Studies have suggested that a microclimate in the area of Lake Kariba has created a rise in precipitation, possibly as a result of a lake breeze blocked by the escarpment that produces thunderstorms.

In the lower course of the river in Mozambique the influence of the summer monsoon increases the levels of precipitation and humidity. Temperatures are also higher—determined more by the latitude and less by altitude—as the river descends from the plateau.

Plant life

The vegetation along the upper and middle course of the Zambezi is predominantly savanna, with deciduous trees, grass, and open woodland. Mopane woodland (Colophospermum mopane) is predominant on the alluvial flats of the low-lying river valleys and is highly susceptible to fire. Grass, when present, is typically short and sparse. Forestland with species of the genus Baikiaea, found extensively on sandy interfluves between drainage channels, is economically the most important vegetation type in Zambia, for it is the source of the valuable Rhodesian teak (Baikiaea plurijuga). Destruction of the Baikiaea forest results in a regression from forest to grassland, a slow process involving intermediate stages of scrub vegetation. The river additionally has a distinct fringing vegetation, mainly riverine forest including ebony (Diospyros mespiliformis) and small shrubs and ferns (e.g., Haemanthus). In the lower course of the Zambezi, dense bush and evergreen forest, with palm trees and patches of mangrove swamp, is the typical vegetation.

Animal life

The tiger fish is one of the few species found both above and below the Victoria Falls. Pike is predominant in the upper course of the river, as are yellowfish and barbel. Bream are now common both above and below the falls. Crocodiles abound in the Zambezi, though they generally avoid stretches of fast-running water. Hippopotamuses are also found in the upper and lower stretches of the Zambezi.

Elephants are common over much of the river’s course, particularly in areas such as the Sesheke Plain and near the Luangwa confluence. Game animals include buffalo, eland, sable, roan, kudu, waterbuck, impala, duiker, bushbuck, reedbuck, bushpig, and warthog. Of the big cats, lions can be found in the Victoria Falls National Park in Zimbabwe and elsewhere along the river’s course; cheetahs, although comparatively rare, can be sighted; and leopards, rarely seen by daylight, are common, both in the plains and the river gorges. Baboons and monkeys abound throughout the region.

The people

The Lozi (Barotse), who dominate much of the upper Zambezi, have taken advantage of the seasonal flooding of the Barotse Plain for centuries and have an agricultural economy that is supplemented by animal husbandry, fishing, and trade. The main groups of the middle Zambezi include the Tonga, Shona, Chewa, and Nsenga peoples, all of whom largely practice subsistence agriculture. In Mozambique the riverine population is varied; many engage in commercial agriculture—the growing of sugarcane and cotton in particular—which was established by the Portuguese.

The economy:

Navigation

Given its numerous natural barriers—sandbars at the mouth, shallowness, and rapids and cataracts—the Zambezi is of little economic significance as a trade route. About 1,620 miles of the river, however, are navigable by shallow-draft steamers. The longest stretch of unbroken water runs from the river delta about 400 miles upstream to the Cahora Bassa Dam. Above the dam Lake Cahora Bassa is navigable to its confluence with the Luangwa River, where navigation is interrupted again to the Kariba Dam. Lake Kariba is navigable, but the river again becomes impassable from the end of the lake to the Ngonye Falls, some 250 miles upstream. It is again navigable by shallow-draft boats for the 300 miles between the Ngonye and Chavuma falls and then for another 120 miles above Chavuma.

The river has four major crossing points. The Victoria Falls Bridge, the first from the head of the river, carries rail, road, and foot traffic between Zambia and Zimbabwe. The dam wall at Kariba is heavily used by road traffic, and a road bridge at Chirundu, Zimbabwe, also connects the two countries. The fourth major crossing is the rail and road bridge between Mutarara (Dona Ana) and Vila de Sena, Mozambique. There are also a number of motor ferries crossing the river at various points.

Kariba and Cahora Bassa schemes

The Kariba Dam harnesses the Zambezi at Kariba, Zimbabwe, 300 miles below Victoria Falls. A concrete-arch dam with a maximum height of 420 feet and a crest length of 1,900 feet carries a road connecting the Zambian and Zimbabwean banks of the gorge. Six floodgates permit a discharge of some 335,000 cubic feet of water per second. Both Zambia and Zimbabwe obtain most of their electricity from the Kariba Dam. Lake Kariba covers an area of about 2,000 square miles. The flooded land was previously inhabited by about 51,000 Tonga agriculturalists, who had to be resettled. The lake stretches for 175 miles from the dam to Devil’s Gorge and is 20 miles across at its widest point. Three townships have been built around lakeshore harbours at Kariba and at Siavonga and Sinazongwe, Zambia. Tourist resorts have also been developed along the lakeshore.

Lake Cahora Bassa was formed by a dam across the Zambezi at the head of Cahora Bassa Gorge, about 80 miles northwest of Tete, Mozambique. The dam, 560 feet high and 1,050 feet wide at its crest, impounds the river for 150 miles to the Mozambique–Zambia border, providing hydroelectric power and water for crop irrigation.

Study and exploration

The first non-Africans to reach the Zambezi were Arab traders, who utilized the river’s lower reaches from the 10th century onward. They were followed in the 16th century by the Portuguese, who hoped to use the river to develop a trade in ivory, gold, and slaves. Until the 19th century, the river, then called the Zambere, was believed to flow south from a vast inland sea that was also thought to be the origin of the Nile River. Accurate mapping of the Zambezi did not take place until the Scottish missionary and explorer David Livingstone charted most of the river’s course in the 1850s. Searching for a trade route to the East African coast, he traveled from Sesheke, 150 miles above Victoria Falls, to the Indian Ocean. His map of the river remained the most accurate until the 20th century, when further surveys finally traced the Zambezi to its source.

Details

The Zambezi (also spelled Zambeze and Zambesi) is the fourth-longest river in Africa, the longest east-flowing river in Africa and the largest flowing into the Indian Ocean from Africa. Its drainage basin covers 1,390,000 km² (540,000 sq mi), slightly less than half of the Nile's. The 2,574 km (1,599 mi) river rises in Zambia and flows through eastern Angola, along the north-eastern border of Namibia and the northern border of Botswana, then along the border between Zambia and Zimbabwe to Mozambique, where it crosses the country to empty into the Indian Ocean.

The Zambezi's most noted feature is Victoria Falls. Its other falls include the Chavuma Falls at the border between Zambia and Angola and Ngonye Falls near Sioma in western Zambia.

The two main sources of hydroelectric power on the river are the Kariba Dam, which provides power to Zambia and Zimbabwe, and the Cahora Bassa Dam in Mozambique, which provides power to Mozambique and South Africa. Additionally, two smaller power stations are along the Zambezi River in Zambia, one at Victoria Falls and the other in Zengamina, near Kalene Hill in the Ikelenge District.

Course:

Origins

The river rises in a black, marshy dambo in dense, undulating miombo woodland 50 km (31 mi) north of Mwinilunga and 20 km (12 mi) south of Ikelenge in the Ikelenge District of North-Western Province, Zambia, at about 1,524 metres (5,000 ft) above sea level. The area around the source is a national monument, forest reserve, and important bird area.

Eastward of the source, the watershed between the Congo and Zambezi Basins is a well-marked belt of high ground, running nearly east–west and falling abruptly to the north and south. This distinctly cuts off the basin of the Lualaba (the main branch of the upper Congo) from the Zambezi. In the neighborhood of the source, the watershed is not as clearly defined, but the two river systems do not connect.

The region drained by the Zambezi is a vast, broken-edged plateau 900–1,200 m high, composed in the remote interior of metamorphic beds and fringed with the igneous rocks of the Victoria Falls. At Chupanga, on the lower Zambezi, thin strata of grey and yellow sandstones, with an occasional band of limestone, crop out on the bed of the river in the dry season, and these persist beyond Tete, where they are associated with extensive seams of coal. Coal is also found in the district just below Victoria Falls. Gold-bearing rocks occur in several places.

Upper Zambezi

The river flows to the southwest into Angola for about 240 km (150 mi), then is joined by sizeable tributaries such as the Luena and the Chifumage flowing from highlands to the north-west. It turns south and develops a floodplain, with extreme width variation between the dry and rainy seasons. It enters dense evergreen Cryptosepalum dry forest, though on its western side, Western Zambezian grasslands also occur. Where it re-enters Zambia, it is nearly 400 m (1,300 ft) wide in the rainy season and flows rapidly, with rapids ending in the Chavuma Falls, where the river flows through a rocky fissure. The river drops about 400 m (1,300 ft) in elevation from its source at 1,500 m (4,900 ft) to the Chavuma Falls at 1,100 m (3,600 ft), over a distance of about 400 km (250 mi). From this point to the Victoria Falls, the level of the basin is very uniform, dropping only by another 180 m (590 ft) across a distance of around 800 km (500 mi).

The first of its large tributaries to enter the Zambezi is the Kabompo River in the North-Western Province of Zambia. The savanna through which the river flows gives way to a wide floodplain, studded with Borassus fan palms. A little farther south is the confluence with the Lungwebungu River. This is the beginning of the Barotse Floodplain, the most notable feature of the upper Zambezi, but this northern part does not flood so much and includes islands of higher land in the middle.

About 30 km below the confluence of the Lungwebungu, the country becomes very flat, and the typical Barotse Floodplain landscape unfolds, with the flood reaching a width of 25 km in the rainy season. For more than 200 km downstream, the annual flood cycle dominates the natural environment and human life, society, and culture. About 80 km further down, the Luanginga, which with its tributaries drains a large area to the west, joins the Zambezi. A short distance higher up on the east, the main stream is joined in the rainy season by overflow of the Luampa/Luena system.

A short distance downstream of the confluence with the Luanginga is Lealui, one of the capitals of the Lozi people, who populate the Zambian region of Barotseland in the Western Province. The chief of the Lozi maintains one of his two compounds at Lealui; the other is at Limulunga, which is on high ground and serves as the capital during the rainy season. The annual move from Lealui to Limulunga is a major event, celebrated as one of Zambia's best-known festivals, the Kuomboka.

After Lealui, the river turns south-southeast. From the east, it continues to receive numerous small streams, but on the west, it is without major tributaries for 240 km. Before this, the Ngonye Falls and subsequent rapids interrupt navigation. South of Ngonye Falls, the river briefly borders Namibia's Caprivi Strip. Below the junction of the Cuando River and the Zambezi, the river bends almost due east. Here, the river is broad and shallow and flows slowly, but as it flows eastward towards the border of the great central plateau of Africa, it reaches a chasm into which the Victoria Falls plunge.

Middle Zambezi

The Victoria Falls are considered the boundary between the upper and middle Zambezi. Below them, the river continues to flow due east for about 200 km (120 mi), cutting through perpendicular walls of basalt 20 to 60 m (66 to 197 ft) apart in hills 200 to 250 m (660 to 820 ft) high. The river flows swiftly through the Batoka Gorge, the current being continually interrupted by reefs. It has been described as one of the world's most spectacular whitewater trips, a tremendous challenge for kayakers and rafters alike. Beyond the gorge are a succession of rapids that end 240 km (150 mi) below Victoria Falls. Over this distance, the river drops 250 m (820 ft).

At this point, the river enters Lake Kariba, created in 1959 following the completion of the Kariba Dam. The lake is one of the largest man-made lakes in the world, and the hydroelectric power-generating facilities at the dam provide electricity to much of Zambia and Zimbabwe.

The Luangwa and Kafue rivers are the two largest left-hand tributaries of the Zambezi. The Kafue joins the main river in a quiet, deep stream about 180 m (590 ft) wide. From this point, the northward bend of the Zambezi is checked, and the stream continues due east. At the confluence of the Luangwa (15°37' S), it enters Mozambique.

The middle Zambezi ends where the river enters Lake Cahora Bassa, formerly the site of dangerous rapids known as Kebrabassa; the lake was created in 1974 by the construction of the Cahora Bassa Dam.

Lower Zambezi

The lower Zambezi's 650 kilometres (400 mi) from Cahora Bassa to the Indian Ocean is navigable, although the river is shallow in many places during the dry season. This shallowness arises as the river enters a broad valley and spreads out over a large area. Only at one point, the Lupata Gorge, 320 kilometres (200 mi) from its mouth, is the river confined between high hills. Here, it is scarcely 200 metres (660 ft) wide. Elsewhere it is from 5 to 8 kilometres (3 to 5 mi) wide, flowing gently in many streams. The river bed is sandy, and the banks are low and reed-fringed. At places, however, and especially in the rainy season, the streams unite into one broad, fast-flowing river.

About 160 kilometres (99 mi) from the sea, the Zambezi receives the drainage of Lake Malawi through the Shire River. On approaching the Indian Ocean, the river splits up into a delta. Each of the primary distributaries, Kongone, Luabo, and Timbwe, is obstructed by a sand bar. A more northerly branch, called the Chinde mouth, has a minimum depth at low water of 2 metres (6 ft 7 in) at the entrance and 4 metres (13 ft) further in, and is the branch used for navigation. About 100 kilometres (62 mi) further north is a river called the Quelimane, after the town at its mouth. This stream, which is silting up, receives the overflow of the Zambezi in the rainy season.

Additional Information

The Zambezi is the fourth longest river in Africa, after the Nile, Congo, and Niger Rivers. It is the longest east-flowing river in Africa.

It flows through six countries on its journey from its source in north-western Zambia to the Indian Ocean, an amazing 2,700 km.

This river evokes mystery and excitement with few rivers in the world remaining as pristine or as little explored.

The source of the mighty Zambezi River lies at about 1,500 m (4,900 ft) above sea level in the Mwinilunga District, very close to the border where Zambia, Angola and the Congo meet.

From there it flows through Zambia, Angola, Namibia and Botswana, then back along the border of Zambia and Zimbabwe, finally discharging into the Indian Ocean at its delta in Mozambique. The area of its catchment basin is 1,390,000 square km, about half that of the Nile.

The power of the Zambezi River has been harnessed at two points along its journey, the first being the Kariba Dam between Zambia and Zimbabwe and the second the Cahora Bassa Dam in Mozambique. Both these dams are sources of hydroelectric power and supply a large portion of power to Zambia, Zimbabwe and South Africa.

Recently, worrying reports have appeared in the press that the Zambezi Seaway Scheme, a project to open up the Zambezi to enable the transportation of goods and minerals from the hinterland, is back on the cards.

For years there has also been talk of another hydroelectric dam to be built in the Batoka Gorge just below Victoria Falls; of major concern is that these plans are very much alive again.

The river's beauty has attracted tourists from all over the world, and it provides great opportunities for game viewing and various water sports. Hippopotamuses, crocodiles, elephants and lions are some examples of the wildlife you will find along various parts of the Zambezi River.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2481 2025-03-06 00:02:46

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 50,097

Re: Miscellany

2380) Pulmonologist

Gist

A pulmonologist is a physician who specializes in the respiratory system. From the windpipe to the lungs, if your complaint involves the lungs or any part of the respiratory system, a pulmonologist is the doc you want to solve the problem. Pulmonology is a medical field of study within internal medicine.

A pulmonologist is a physician who specializes in lung conditions. They diagnose and treat diseases of the respiratory system, including your airways, alveoli (air sacs in your lungs) and blood vessels.

Pulmonology: A branch of medicine that specializes in diagnosing and treating diseases of the lungs and other parts of the respiratory system. These diseases include asthma, emphysema, tuberculosis, and pneumonia.

Summary

A pulmonologist is a healthcare provider that specializes in diagnosing and treating conditions that affect your respiratory system, including your airways and lungs. You might see a pulmonologist if you have a chronic condition that affects your breathing or if you have symptoms like chronic cough, shortness of breath or wheezing.

Overview:

What is a pulmonologist?

A pulmonologist is a physician who specializes in lung conditions. They diagnose and treat diseases of the respiratory system, including your airways, alveoli (air sacs in your lungs) and blood vessels.

You might hear people call pulmonologists lung doctors, lung specialists or chest doctors.

What does a pulmonologist do?

A pulmonologist can diagnose and treat respiratory system diseases. They might have certain areas that they specialize in, like critical care, asthma or sleep medicine. They might also specialize in treating certain age groups, like kids younger than 18 (pediatric pulmonologists) or people over 65 (geriatric pulmonologists).

What conditions do pulmonologists treat?

Pulmonologists treat various respiratory conditions and illnesses, including:

* Asbestosis.
* Aspergillosis.
* Asthma.
* Bronchiectasis.
* Bronchitis.
* Chronic beryllium disease (berylliosis).
* Chronic obstructive pulmonary disease (COPD).
* Coal workers’ pneumoconiosis (black lung disease).
* Cystic fibrosis.
* Emphysema.
* Interstitial lung disease.
* Long COVID.
* Lung cancer.
* Pulmonary hypertension.
* Sarcoidosis.
* Silicosis.
* Sleep apnea.
* Tuberculosis.

Why would you need to see a pulmonologist?

If you have a respiratory condition that requires specialized testing, your primary care provider might refer you to a pulmonologist. Symptoms you might see a pulmonologist for include:

* A cough that doesn’t improve over time (chronic cough).
* Shortness of breath (dyspnea).
* Asthma attacks.
* Chest pain or tightness.
* Wheezing.
* Sleep apnea symptoms, like daytime tiredness or snoring.

What will a pulmonologist do on the first visit?

If it’s your first appointment with a pulmonologist, they’ll take a detailed medical history and do a physical examination. During this time, you can talk with your healthcare provider about the reasons you’re there and explain the details of your symptoms.

You might find it helpful to prepare notes in advance about things like:

* How long you’ve had symptoms.
* If you’ve noticed anything that triggers your symptoms (like respiratory illnesses, stress or seasonal changes).
* Anything you’ve noticed that makes your symptoms better or worse.
* Whether you smoke or vape, or if you used to.
* Whether your job, hobbies or living conditions could’ve exposed you to allergens or lung irritants (like secondhand smoke, chemicals, grains, livestock or birds).
* Whether anyone in your family has a respiratory condition.
* Any questions you have.

Before the end of the appointment, your provider might:

* Order tests.
* Schedule a follow-up visit.
* Recommend or prescribe treatments.
* Refer you to another provider.

What tests does a pulmonologist run?

Your pulmonologist might order some tests to help with diagnosis and treatment. These might include:

* Blood tests.
* Imaging tests like chest X-rays or CT scans (computed tomography scans).
* Pulmonary function tests.
* Spirometry.
* Bronchoscopy.
* Sleep studies.

You may have to repeat these tests in the future or have additional testing to confirm results.

Details

A pulmonologist is a doctor who diagnoses and treats diseases of the respiratory system -- the lungs and other organs that help you breathe.

For some relatively short-lasting illnesses that affect your lungs, like the flu or pneumonia, you might be able to get all the care you need from your regular doctor. But if your cough, shortness of breath, or other symptoms don't get better, you might need to see a pulmonologist.

What is pulmonology?

Internal medicine is the type of medical care that deals with adult health, and pulmonology is one of its many fields. Pulmonologists focus on the respiratory system and diseases that affect it. The respiratory system includes your:

* Mouth and nose
* Sinuses
* Throat (pharynx)
* Voice box (larynx)
* Windpipe (trachea)
* Bronchial tubes
* Lungs and things inside them like bronchioles and alveoli
* Diaphragm

What Conditions Do Pulmonologists Treat?

A pulmonologist can treat many kinds of lung problems. These include:

* Asthma, a disease that inflames and narrows your airways and makes it hard to breathe
* Chronic obstructive pulmonary disease (COPD), a group of lung diseases that includes emphysema and chronic bronchitis
* Cystic fibrosis, a disease caused by changes in your genes that makes sticky mucus build up in your lungs
* Emphysema, which damages the air sacs in your lungs
* Interstitial lung disease, a group of conditions that scar and stiffen your lungs
* Lung cancer, a type of cancer that starts in the lungs
* Obstructive sleep apnea, which causes repeated pauses in your breathing while you sleep
* Pulmonary hypertension, or high blood pressure in the arteries of your lungs
* Tuberculosis, a bacterial infection of the lungs
* Bronchiectasis, which damages your airways so they widen and become flabby or scarred
* Bronchitis, which is when your airways are inflamed, with a cough and extra mucus. It can lead to an infection.
* Pneumonia, an infection that makes the air sacs (alveoli) in your lungs inflamed and filled with pus
* COVID-19 pneumonia, which can cause severe breathing problems and respiratory failure

What Kind of Training Do Pulmonologists Have?

A pulmonologist's training starts with a medical school degree. Then, they do an internal medicine residency at a hospital for 3 years to get more experience. After their residency, doctors can get certified in internal medicine by the American Board of Internal Medicine.

That's followed by years of specialized training as a fellow in pulmonary medicine. Finally, they must pass specialty exams to become board-certified in pulmonology. Some doctors get even more training in interventional pulmonology, pulmonary hypertension, and lung transplantation. Others might specialize in younger or older patients.

How Do Pulmonologists Diagnose Lung Diseases?

Pulmonologists use tests to figure out what kind of lung problem you have. They might ask you to get:

* Blood tests. They check levels of oxygen and other things in your blood.
* Bronchoscopy. It uses a thin, flexible tube with a camera on the end to see inside your lungs and airways.
* X-rays. They use low doses of radiation to make images of your lungs and other things in your chest.
* CT scan. It's a powerful X-ray that makes detailed pictures of the inside of your chest.
* Spirometry. This tests how well your lungs work by measuring how hard you can breathe air in and out.

What Kinds of Procedures Do Pulmonologists Do?

Pulmonologists can do special procedures such as:

* Pulmonary hygiene. This clears fluid and mucus from your lungs.
* Airway ablation. This opens blocked air passages or eases difficult breathing.
* Biopsy. This takes tissue samples to diagnose disease.
* Bronchoscopy. This looks inside your lungs and airways to diagnose disease.

Why See a Pulmonologist

You might see a pulmonologist if you have symptoms such as:

* A cough that is severe or that lasts more than 3 weeks
* Chest pain
* Wheezing
* Dizziness
* Trouble breathing
* Severe tiredness
* Asthma that’s hard to control
* Bronchitis or a cold that keeps coming back.

Additional Information

Pulmonology is a medical specialty that deals with diseases involving the respiratory tract. It is also known as respirology, respiratory medicine, or chest medicine in some countries and areas.

Pulmonology is considered a branch of internal medicine, and is related to intensive care medicine. Pulmonology often involves managing patients who need life support and mechanical ventilation. Pulmonologists are specially trained in diseases and conditions of the chest, particularly pneumonia, asthma, tuberculosis, emphysema, and complicated chest infections.

Pulmonology/respirology departments work especially closely with certain other specialties: cardiothoracic surgery departments and cardiology departments.

History of pulmonology

One of the first major discoveries relevant to the field of pulmonology was the discovery of pulmonary circulation. Originally, it was thought that blood reaching the right side of the heart passed through small 'pores' in the septum into the left side to be oxygenated, as theorized by Galen; however, the discovery of pulmonary circulation disproved this theory, which had been accepted since the 2nd century. Thirteenth-century anatomist and physiologist Ibn Al-Nafis accurately theorized that there was no 'direct' passage between the two sides (ventricles) of the heart. He believed that the blood must have passed through the pulmonary artery, through the lungs, and back into the heart to be pumped around the body. This is believed by many to be the first scientific description of pulmonary circulation.

Although pulmonary medicine only began to evolve as a medical specialty in the 1950s, William Welch and William Osler founded the 'parent' organization of the American Thoracic Society, the National Association for the Study and Prevention of Tuberculosis. The care, treatment, and study of tuberculosis of the lung is recognised as a discipline in its own right, phthisiology. When the specialty did begin to evolve, several discoveries were being made linking the respiratory system and the measurement of arterial blood gases, attracting more and more physicians and researchers to the developing field.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2482 2025-03-07 00:03:09

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 50,097

Re: Miscellany

2381) Silvering

Gist

Silvering is the process of making mirrors by coating glass with silver, discovered by the German chemist Justus von Liebig in 1835. In the process, silver–ammonia compounds are reduced chemically to metallic silver, which is deposited on a suitably shaped glass surface.

Summary

Silvering is the process of making mirrors by coating glass with silver, discovered by the German chemist Justus von Liebig in 1835. In the process, silver–ammonia compounds are reduced chemically to metallic silver, which is deposited on a suitably shaped glass surface. Modern processes may utilize silver solutions and reducer solutions—consisting of invert sugar, Rochelle salt, or formaldehyde—that meet in a spray above clean glass traveling on a conveyor; as the spray falls on the glass surface, metallic silver is deposited.

Silvering of partial reflectors, for optical and physics research applications, must be done by a process slow enough so that the amount of reflectivity can be controlled. Partial reflectors transmit and reflect only portions of the incident light. Special mirrors for such instruments as reflecting telescopes are usually silvered by evaporation of silver onto a surface from an electrically heated filament in high vacuum.

Details

Silvering is the chemical process of coating a non-conductive substrate such as glass with a reflective substance, to produce a mirror. While the metal is often silver, the term is used for the application of any reflective metal.

Process

Most common household mirrors are "back-silvered" or "second-surface", meaning that the light reaches the reflective layer after passing through the glass. A protective layer of paint is usually applied to protect the back side of the reflective surface. This arrangement protects the fragile reflective layer from corrosion, scratches, and other damage. However, the glass layer may absorb some of the light and cause distortions and optical aberrations due to refraction at the front surface, as well as multiple additional reflections on it, giving rise to "ghost images" (although some optical mirrors, such as Mangin mirrors, take advantage of this).

Therefore, precision optical mirrors normally are "front-silvered" or "first-surface", meaning that the reflective layer is on the surface towards the incoming light. The substrate normally provides only physical support, and need not be transparent. A hard, protective, transparent overcoat may be applied to prevent oxidation of the reflective layer and scratching of the metal. Front-coated mirrors achieve reflectivities of 90–95% when new.

History

Ptolemaic Egypt had manufactured small glass mirrors backed by lead, tin, or antimony. In the early 10th century, the Persian scientist al-Razi described ways of silvering and gilding in a book on alchemy, but this was not done for the purpose of making mirrors.

Tin-coated mirrors were first made in Europe in the 15th century. The thin tinfoil used to silver mirrors was known as "tain". When glass mirrors first gained widespread usage in Europe during the 16th century, most were silvered with an amalgam of tin and mercury.

In 1835 German chemist Justus von Liebig developed a process for depositing silver on the rear surface of a piece of glass; this technique gained wide acceptance after Liebig improved it in 1856. The process was further refined and made easier by the chemist Tony Petitjean (1856). This reaction is a variation of the Tollens' reagent for aldehydes. A diamminesilver(I) solution is mixed with a sugar and sprayed onto the glass surface. The sugar is oxidized by silver(I), which is itself reduced to silver(0), i.e. elemental silver, and deposited onto the glass.
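
For reference, the deposition step can be sketched as a balanced Tollens-type redox, with a generic aldehyde group RCHO standing in for the reducing sugar (the R group is schematic, not a specific compound):

$$\mathrm{RCHO} + 2\,[\mathrm{Ag(NH_3)_2}]^{+} + 3\,\mathrm{OH}^{-} \longrightarrow \mathrm{RCOO}^{-} + 2\,\mathrm{Ag}\!\downarrow + 4\,\mathrm{NH_3} + 2\,\mathrm{H_2O}$$

The elemental silver produced precipitates onto the glass as the reflective layer.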

In 1856–1857 Karl August von Steinheil and Léon Foucault introduced the process of depositing an ultra-thin layer of silver on the front surface of a piece of glass, making the first optical-quality first-surface glass mirrors, replacing the use of speculum metal mirrors in reflecting telescopes. These techniques soon became standard for technical equipment.

An aluminum vacuum-deposition process invented in 1930 by Caltech physicist and astronomer John Strong led to most reflecting telescopes shifting to aluminum. Nevertheless, some modern telescopes use silver, such as the Kepler Space Telescope. The Kepler mirror's silver was deposited using ion-assisted evaporation.

Modern silvering processes:

General processes

Silvering aims to produce a non-crystalline coating of amorphous metal (metallic glass), with no visible artifacts from grain boundaries. The most common methods in current use are electroplating, chemical "wet process" deposition, and vacuum deposition.

Electroplating of a substrate of glass or other non-conductive material requires the deposition of a thin layer of conductive but transparent material, such as carbon. This layer tends to reduce the adhesion between the metal and the substrate. Chemical deposition can result in better adhesion, directly or by pre-treatment of the surface.

Vacuum deposition can produce very uniform coating with very precisely controlled thickness.

Metals:

Silver

The reflective layer on a second surface mirror such as a household mirror is often actual silver. A modern "wet" process for silver coating treats the glass with tin(II) chloride to improve the bonding between silver and glass. An activator is applied after the silver has been deposited to harden the tin and silver coatings. A layer of copper may be added for long-term durability.

Silver would be ideal for telescope mirrors and other demanding optical applications, since it has the best initial front-surface reflectivity in the visible spectrum. However, it quickly oxidizes and absorbs atmospheric sulfur to create a dark, low-reflectivity tarnish.

Aluminum

The "silvering" on precision optical instruments such as telescopes is usually aluminum. Although aluminum also oxidizes quickly, the thin aluminum oxide (sapphire) layer is transparent, and so the high-reflectivity underlying aluminum stays visible.

In modern aluminum silvering, a sheet of glass is placed in a vacuum chamber with electrically heated nichrome coils that can evaporate aluminum. In a vacuum, the hot aluminum atoms travel in straight lines. When they hit the surface of the mirror, they cool and stick.
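
To see why the atoms travel in straight lines, consider the mean free path of the residual gas. The following is a rough order-of-magnitude sketch; the chamber pressure and effective molecular diameter are assumed illustrative values, not the specifications of any particular coating chamber.

```python
import math

# Sketch: mean free path of residual gas in a coating chamber,
# lambda = kT / (sqrt(2) * pi * d^2 * p).

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # chamber temperature, K
d = 3e-10           # assumed effective molecular diameter, m
p = 1e-4            # assumed chamber pressure, Pa (high vacuum)

mfp = k_B * T / (math.sqrt(2) * math.pi * d**2 * p)
print(f"mean free path: {mfp:.0f} m")  # roughly 100 m
```

Since this distance is far larger than any practical chamber, an evaporated aluminum atom almost never collides with a gas molecule before it reaches the glass.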

Some mirror makers evaporate a layer of quartz or beryllia on the mirror; others expose it to pure oxygen or air in an oven so that it will form a tough, clear layer of aluminum oxide.

Tin

The first tin-coated glass mirrors were produced by applying a tin-mercury amalgam to the glass and heating the piece to evaporate the mercury.

Gold

The "silvering" on infrared instruments is usually gold. It has the best reflectivity in the infrared spectrum, and has high resistance to oxidation and corrosion. Conversely, a thin gold coating is used to create optical filters which block infrared (by mirroring it back towards the source) while passing visible light.


#2483 2025-03-08 00:03:15

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 50,097

Re: Miscellany

2383) Tissue paper

Gist

Hygienic tissue paper is commonly used for personal care as facial tissue (paper handkerchiefs), napkins, bathroom tissue and household towels. Paper has been used for hygiene purposes for centuries, but tissue paper as we know it today was not produced in the United States before the mid-1940s.

Compared to cloth handkerchiefs and towels, tissue papers are more hygienic: they are disposable items, thrown away after use. This prevents the transfer of nasal fluids from one person to another and stops the spread of infectious diseases.

Details

Tissue paper, or simply tissue, is a lightweight paper or light crêpe paper. Tissue can be made from recycled paper pulp on a paper machine.

Tissue paper is very versatile, and different kinds are made to best serve particular purposes, including hygienic tissue paper, facial tissues, paper towels and packing material, among other (sometimes creative) uses.

The use of tissue paper is common in developed nations (around 21 million tonnes in North America and 6 million in Europe) and is growing due to urbanization. As a result, the industry has often been scrutinized for deforestation. However, more companies are presently using more recycled fibres in tissue paper.

Properties

The key properties of tissues are absorbency, basis weight, thickness, bulk (specific volume), brightness, stretch, appearance and comfort.

Production

Tissue paper is produced on a paper machine that has a single large steam-heated drying cylinder (Yankee dryer) fitted with a hot air hood. The raw material is paper pulp. The Yankee cylinder is sprayed with adhesives to make the paper stick. Creping is done by the Yankee's doctor blade, which scrapes the dry paper off the cylinder surface. The crinkle (crêping) is controlled by the strength of the adhesive, the geometry of the doctor blade, the speed difference between the Yankee and the final section of the paper machine, and the characteristics of the paper pulp.

The applications requiring the highest water absorbency are produced with a through-air drying (TAD) process. These papers contain high amounts of NBSK (northern bleached softwood kraft) and CTMP (chemithermomechanical pulp). This gives a bulky paper with high wet tensile strength and good water-holding capacity. The TAD process uses about twice the energy of conventional paper drying.

The properties are controlled by pulp quality, crêping and additives (both in base paper and as coating). The wet strength is often an important parameter for tissue.

Applications:

Hygienic tissue paper

Hygienic tissue paper is commonly used for personal care as facial tissue (paper handkerchiefs), napkins, bathroom tissue and household towels. Paper has been used for hygiene purposes for centuries, but tissue paper as we know it today was not produced in the United States before the mid-1940s. In Western Europe, large-scale industrial production started in the beginning of the 1960s.

Facial tissues

Facial tissue (paper handkerchiefs) refers to a class of soft, absorbent, disposable paper that is suitable for use on the face. The term is commonly used to refer to the type of facial tissue, usually sold in boxes, that is designed to facilitate the expulsion of nasal mucus although it may refer to other types of facial tissues including napkins and wipes.

The first tissue handkerchiefs were introduced in the 1920s. They have been refined over the years, especially for softness and strength, but their basic design has remained constant. Today each person in Western Europe uses about 200 tissue handkerchiefs a year, with a variety of 'alternative' functions including the treatment of minor wounds, the cleaning of face and hands and the cleaning of spectacles.

The importance of paper tissues in minimising the spread of infection was highlighted amid fears over a swine flu epidemic. In the UK, for example, the Government ran a campaign called "Catch it, Bin it, Kill it", which encouraged people to cover their mouth with a paper tissue when coughing or sneezing.

Use of tissue papers grew further in the wake of heightened hygiene concerns during the coronavirus pandemic.

Paper towels

Paper towels are the second largest application for tissue paper in the consumer sector. This type of paper usually has a basis weight of 20 to 24 g/m². Normally such paper towels are two-ply. This kind of tissue can be made from 100% chemical pulp, 100% recycled fibre, or a combination of the two. Normally, some long-fibre chemical pulp is included to improve strength.
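
As a quick worked example of what these figures mean (a sketch only; the sheet dimensions are assumed for illustration, not an industry standard), the mass of a single towel sheet follows directly from the basis weight:

```python
# Sketch: mass of a two-ply paper towel sheet from its basis weight.
# Basis weight is the mass per unit area of a single ply (g/m^2).

basis_weight = 22.0             # g/m^2, mid-range of the 20-24 cited above
plies = 2                       # two-ply, as described
width_m, height_m = 0.28, 0.28  # assumed sheet size: 28 cm x 28 cm

mass_g = basis_weight * plies * (width_m * height_m)
print(f"approximate sheet mass: {mass_g:.2f} g")  # about 3.45 g
```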

Wrapping tissue

Wrapping tissue is a type of thin, translucent tissue paper used for wrapping/packing various articles and cushioning fragile items.

Custom-printed wrapping tissue is becoming a popular trend for boutique retail businesses, and various on-demand custom printing services are available online. Sustainably printed custom wrapping tissue is printed on FSC-certified, acid-free paper using only soy-based inks.

Toilet paper

Rolls of toilet paper have been available since the end of the 19th century. Today, more than 20 billion rolls of toilet tissue are used each year in Western Europe. Toilet paper brands include Andrex (United Kingdom), Charmin (United States) and Quilton (Australia), among many others.

Table napkins

Table napkins can be made of tissue paper. These are made from one to four plies and in a variety of qualities, sizes, folds, colours and patterns depending on intended use and prevailing fashions. The composition of the raw material varies widely, from deinked to chemical pulp, depending on quality.

Acoustic disrupter

In the late 1970s and early 1980s, a sound recording engineer named Bob Clearmountain was said to have hung tissue paper over the tweeters of his pair of Yamaha NS-10 speakers to tame their over-bright treble.

The phenomenon became the subject of hot debate and an investigation into the sonic effects of many different types of tissue paper. The authors of a study for Studio Sound magazine suggested that had the speakers' grilles been used in studios, they would have had the same effect on the treble output as the improvised tissue paper filter. Another tissue study found inconsistent results with different paper, but said that tissue paper generally demonstrated an undesirable effect known as "comb filtering", where the high frequencies are reflected back into the tweeter instead of being absorbed. The author derided the tissue practice as "aberrant behavior", saying that engineers usually fear comb filtering and its associated cancellation effects, suggesting that more controllable and less random electronic filtering would be preferable.
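
To make the comb-filtering effect concrete, here is a minimal sketch; the delay and reflection strength are assumed illustrative values, not measurements from the studies above. Summing a signal with a delayed, attenuated copy of itself produces a magnitude response with regularly spaced notches, the "teeth" of the comb:

```python
import numpy as np

# Sketch: comb filtering when a signal is summed with a delayed,
# attenuated reflection of itself: H(f) = 1 + r * exp(-2j*pi*f*tau).

tau = 0.25e-3  # assumed delay, 0.25 ms (~8.6 cm of extra path at 343 m/s)
r = 0.5        # assumed reflection strength

freqs = np.linspace(500, 10_000, 6)            # sample frequencies, Hz
H = 1 + r * np.exp(-2j * np.pi * freqs * tau)  # complex response

for f, mag in zip(freqs, np.abs(H)):
    print(f"{f:7.0f} Hz: |H| = {mag:.2f}")

# Cancellation notches fall where the reflection arrives exactly out
# of phase: f = (2k + 1) / (2 * tau), here 2 kHz, 6 kHz, 10 kHz, ...
notches = [(2 * k + 1) / (2 * tau) for k in range(3)]
print("first notches:", [f"{n:.0f} Hz" for n in notches])
```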

Road repair

Tissue paper, in the form of standard single-ply toilet paper, is commonly used in road repair to protect crack sealants. The sealants require upwards of 40 minutes to cure enough to not stick onto passing traffic. The application of toilet paper removes the stickiness and keeps the tar in place, allowing the road to be reopened immediately and increasing road repair crew productivity. The paper breaks down and disappears in the following days. The use has been credited to Minnesota Department of Transportation employee Fred Muellerleile, who came up with the idea in 1970 after initially trying standard office paper, which worked, but did not disintegrate easily.

Packing industry

Apart from the above, a range of speciality tissues is also manufactured for use in the packing industry. These are used for wrapping and packing various items, cushioning fragile items, stuffing shoes and bags to keep their shape intact, or interleaving garments while packing and folding to keep them wrinkle-free and safe. Such tissue is generally printed with the manufacturer's brand name or logo to enhance the look and aesthetic appeal of the product. It is a type of thin, translucent paper, generally in the grammage range of 17 to 40 gsm, that can be rough or glossy, hard or soft, depending upon the nature of use.

Origami

The use of double-tissue, triple-tissue, tissue-foil and methyl-cellulose-coated tissue papers is gaining popularity. Due to its low grammage, tissue paper treated with methyl cellulose (MC) can be folded into intricate models; the inexpensive paper provides excellent paper memory paired with strength when MC-treated. Origami models sometimes require papers that are both thin and highly malleable, for which tissue-foil is considered a prime choice.

The industry

Consumption of tissue in North America is three times greater than in Europe. Out of the world's estimated production of 21 million tonnes (21,000,000 long tons; 23,000,000 short tons) of tissue, Europe produces approximately 6 million tonnes (5,900,000 long tons; 6,600,000 short tons).

The European tissue market is worth approximately 10 billion euros annually and is growing at a rate of around 3%. The European market represents around 23% of the global market, and tissue accounts for 10% of the total paper and board market. According to market research, Germany is one of the top tissue-consuming countries in Western Europe, while Sweden leads Western Europe in per-capita consumption of tissue paper.

In Europe, the industry is represented by the European Tissue Symposium (ETS), a trade association. The members of ETS represent the majority of tissue paper producers throughout Europe and about 90% of total European tissue production. ETS was founded in 1971 and has been based in Brussels since 1992.

In the U.S., the tissue industry is represented by the American Forest & Paper Association (AF&PA).

Tissue paper production and consumption is predicted to continue to grow because of factors like urbanization, increasing disposable incomes and consumer spending. In 2015, the global market for tissue paper was growing at per annum rates between 8–9% (China, currently 40% of global market) and 2–3% (Europe). During the COVID-19 pandemic, tissue demand for homes increased dramatically as people spent more time in their homes, while commercial demand for the product decreased.

Companies

The largest tissue producing companies by capacity – some of them also global players – in 2015 are (in descending order):

* Essity
* Kimberly-Clark
* Georgia-Pacific
* Asia Pulp & Paper (APP)/Sinar Mas
* Procter & Gamble
* Sofidel Group
* CMPC
* WEPA Hygieneprodukte
* Metsä Group
* Cascades

Sustainability

The paper industry in general has long been accused of responsibility for global deforestation through legal and illegal logging. The World Wide Fund for Nature (WWF, formerly the World Wildlife Fund) has urged Asia Pulp & Paper (APP), "one of the world's most notorious deforesters", especially in Sumatran rain forests, to become an environmentally responsible company; in 2012, the WWF launched a campaign to remove a brand of toilet paper known to be made from APP fibre from grocery store shelves. According to the Worldwatch Institute, world per-capita consumption of toilet paper was 3.8 kilograms in 2005. The WWF estimates that "every day, about 270,000 trees are flushed down the drain or end up as garbage all over the world", of which about 10% is attributable to toilet paper alone.

Meanwhile, the paper tissue industry, along with the rest of the paper manufacturing sector, has worked to minimise its impact on the environment. Recovered fibres now represent some 46.5% of the paper industry's raw materials. The industry relies heavily on biofuels (about 50% of its primary energy). Its specific primary energy consumption has decreased by 16% and the specific electricity consumption has decreased by 11%, due to measures such as improved process technology and investment in combined heat and power (CHP). Specific carbon dioxide emissions from fossil fuels decreased by 25% due to process-related measures and the increased use of low-carbon and biomass fuels. Once consumed, most forest-based paper products start a new life as recycled material or biofuel.

EDANA, the trade body for the non-woven absorbent hygiene products industry (which includes products such as household wipes for use in the home) has reported annually on the industry's environmental performance since 2005. Less than 1% of all commercial wood production ends up as wood pulp in absorbent hygiene products. The industry contributes less than 0.5% of all solid waste and around 2% of municipal solid waste (MSW) compared with paper and board, garden waste and food waste which each comprise between 18 and 20 percent of MSW.

There has been a great deal of interest, in particular, in the use of recovered fibres to manufacture new tissue paper products. However, whether this is actually better for the environment than using new fibres is open to question. A life-cycle assessment study indicated that neither fibre type can be considered environmentally preferable. In this study both new fibre and recovered fibre offer environmental benefits and shortcomings.

Total environmental impacts vary case by case, depending on for example the location of the tissue paper mill, availability of fibres close to the mill, energy options and waste utilization possibilities. There are opportunities to minimise environmental impacts when using each fibre type.

When using recovered fibres, it is beneficial to:

* Source fibres from integrated deinking operations to eliminate the need for thermal drying of fibre or long distance transport of wet pulp,
* Manage deinked sludge in order to maximise beneficial applications and minimise waste burden on society; and
* Select the recovered paper according to the end-product requirements and in a way that allows the most efficient recycling process.

When using new fibres, it is beneficial to:

* Manage the raw material sources to maintain legal, sustainable forestry practices by implementing processes such as forest certification systems and chain of custody standards; and
* Consider opportunities to introduce new and more renewable energy sources and increase the use of biomass fuels to reduce emissions of carbon dioxide.

When using either fibre type, it is beneficial to:

* Improve energy efficiency in tissue manufacturing;
* Examine opportunities for changing to alternative, non-fossil-based sources of energy for tissue manufacturing operations;
* Deliver products that maximise functionality and optimize consumption; and
* Investigate opportunities for alternative product disposal systems that minimize the environmental impact of used products.

The Confederation of European Paper Industries (CEPI) has published reports focusing on the industry's environmental credentials. In 2002, it noted that "a little over 60% of the pulp and paper produced in Europe comes from mills certified under one of the internationally recognised eco-management schemes". There are a number of 'eco-labels' designed to help consumers identify paper tissue products which meet such environmental standards. Eco-labelling entered mainstream environmental policy-making in the late seventies, first with national schemes such as the German Blue Angel programme, followed by the Nordic Swan (1989). In 1992 a European eco-labelling regulation, known as the EU Flower, was also adopted. Its stated objective is to support sustainable development, balancing environmental, social and economic criteria.

In 2019, the NRDC and Stand.earth released a report grading various brands of toilet paper, paper towels, and facial tissue; the report criticized major brands for lacking recycled material.

Types of eco-labels

There are three types of eco-labels, each defined by ISO (International Organization for Standardization).

Type I (ISO 14024): The criteria are set by third parties (not the manufacturer). These labels are in theory based on life-cycle impacts and are typically pass/fail. The one with European application is the EU Flower.

Type II (ISO 14021): These are based on the manufacturer's or retailer's own declarations. Well known among these are claims of "100% recycled" in relation to tissue/paper.

Type III (ISO 14025): These claims give quantitative details of the impact of the product based on its life cycle. Sometimes known as EPDs (Environmental Product Declarations), these labels are based on an independent review of the life cycle of the product. The data supplied by the manufacturing companies are also independently reviewed.

The best-known example in the paper industry is the Paper Profile. A Paper Profile meets the Type III requirements when the verifier's logo is included on the document.

An example of an organization that sets standards is the Forest Stewardship Council.


#2484 2025-03-10 00:01:58

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 50,097

Re: Miscellany

2384) Bagpipe

Gist

A bagpipe is a wind instrument consisting of two or more single- or double-reed pipes, the reeds being set in motion by wind fed by arm pressure on an animal-skin (or rubberized-cloth) bag.

Summary

A bagpipe is a wind instrument consisting of two or more single- or double-reed pipes, the reeds being set in motion by wind fed by arm pressure on an animal-skin (or rubberized-cloth) bag. The pipes are held in wooden sockets (stocks) tied into the bag, which is inflated either by the mouth (through a blowpipe with a leather nonreturn valve) or by bellows strapped to the body. Melodies are played on the finger holes of the melody pipe, or chanter, while the remaining pipes, or drones, sound single notes tuned against the chanter by means of extendable joints. The sound is continuous; to articulate the melody and to reiterate notes the piper employs gracing—i.e., rapidly interpolated notes outside the melody, giving an effect of detached notes.

Bagpipes were alluded to in Europe as early as the 9th century; earlier evidence is scarce but includes four Latin and Greek references of about 100 ce and, possibly, an Alexandrian terra-cotta of about 100 bce (at Berlin). In the earliest ones the bag is typically a bladder or a whole sheepskin or goatskin, minus the hindquarters; later, two pieces of skin were cut to shape and sewn together. Bagpipes have always been folk instruments, but after the 15th century some were used for court music, and others have survived as military instruments.

For the chanter, two single-reed cane pipes are placed parallel, one pipe often sounding a drone or other accompaniment to the other pipe. Most have cowhorn bells, being bag versions of hornpipes; they are found in North Africa, the Arabian Peninsula, the Aegean, the Caucasus, and among the Mari of Russia. Other double chanters in eastern Europe (Serbia, Hungary, Ukraine, and elsewhere) are made of a single piece of wood with two cylindrical bores (as in cane pipes) and single reeds of cane or elder. There is also a separate bass drone tuned, like most bass drones, two octaves below the chanter keynote. The Bulgarian gaida and the Czecho-Polish dudy (koza) have a single chanter, and in the dudy, the chanter and drone each carry a huge cowhorn bell.

In western European bagpipes the chanter typically is conically bored and sounded by a double reed; drones are cylindrical with single reeds, as in bagpipes found elsewhere. The Scottish Highland bagpipe has two tenor drones and a bass drone, tuned an octave apart; its scale preserves traditional intervals foreign to European classical music. It was once, like other bagpipes, a pastoral and festive instrument; its military use with drums dates from the 18th century. The Scottish Lowland bagpipe, played from about 1750 to about 1850, was bellows-blown, with three drones in one stock, and had a softer sound. Akin to this were the two-droned bagpipes played up to the 18th century in Germany, the Netherlands, Ireland, and England. The modern two-droned Irish war pipe is a modified Highland bagpipe revived about 1905.

The cornemuse of central France is distinguished by a tenor drone held in the chanter stock beside the chanter. Often bellows-blown and without bass drone, it is characteristically played with the hurdy-gurdy. The Italian zampogna is unique, with two chanters—one for each hand—arranged for playing in harmony, often to accompany a species of bombarde (especially at Christmas); the chanters and two drones are held in one stock, and all have double reeds.

The bellows-blown musette, fashionable in French society under Louis XIV, had one, later two, cylindrical chanters (the second extending the range upward) and four tunable drones bored in a single cylinder. Partly offshoots of the musette are the British small pipes (c. 1700), of which the Northumbrian small pipe is played today. Its cylindrical chanter, with seven keys, is closed at the bottom, so that when all holes are closed it is silent (thus allowing true articulation and staccato). The four single-reed drones are in one stock and are used three at a time.

A complex instrument of similar date is the bellows-blown Irish union pipe. Its chanter is stopped on the knee both for staccato and to jump the reed to the higher octave, giving this bagpipe a melodic compass of two octaves (in contrast to the more common compass of nine tones). The three drones are held in one stock with three accompanying pipes, or regulators. These resemble the chanter in bore and reeds but are stopped below and have four or five keys that are struck with the edge of the player’s right hand to sound simple chords.

Details

Bagpipes are a woodwind instrument using enclosed reeds fed from a constant reservoir of air in the form of a bag. The Great Highland bagpipes are well known, but people have played bagpipes for centuries throughout large parts of Europe, Northern Africa, Western Asia, around the Persian Gulf and northern parts of South Asia.

The term bagpipe is equally correct in the singular or the plural, though pipers usually refer to the bagpipes as "the pipes", "a set of pipes" or "a stand of pipes".

Construction

A set of bagpipes minimally consists of an air supply, a bag, a chanter, and usually at least one drone. Many bagpipes have more than one drone (and, sometimes, more than one chanter) in various combinations, held in place in stocks—sockets that fasten the various pipes to the bag.

Air supply

The most common method of supplying air to the bag is by blowing into a blowpipe or blowstick. In some pipes the player must cover the tip of the blowpipe with the tongue while inhaling, in order to prevent unwanted deflation of the bag, but most blowpipes have a non-return valve that eliminates this need. In recent times, various devices have been developed to help create a clean airflow to the pipes and to collect condensation.

The use of a bellows to supply air is an innovation dating from the 16th or 17th century. In these pipes, sometimes called "cauld wind pipes", air is not heated or moistened by the player's breathing, so bellows-driven bagpipes can use more refined or delicate reeds. Such pipes include the Irish uilleann pipes; the border or Lowland pipes, Scottish smallpipes, Northumbrian smallpipes and pastoral pipes in Britain; the musette de cour, the musette bechonnet and the cabrette in France; and the dudy, koziol bialy, and koziol czarny in Poland.

Bag

The bag is an airtight reservoir that holds air and regulates its flow via arm pressure, allowing the player to maintain continuous, even sound. The player keeps the bag inflated by blowing air into it through a blowpipe or by pumping air into it with a bellows. Materials used for bags vary widely, but the most common are the skins of local animals such as goats, dogs, sheep, and cows. More recently, bags made of synthetic materials including Gore-Tex have become much more common. Some synthetic bags have zips that allow the player to fit a more effective moisture trap to the inside of the bag. However, synthetic bags still carry a risk of colonisation by fungal spores, and the associated danger of lung infection if they are not kept clean, even if they otherwise require less cleaning than do bags made from natural substances.

Bags cut from larger materials are usually saddle-stitched with an extra strip folded over the seam and stitched (for skin bags) or glued (for synthetic bags) to reduce leaks. Holes are then cut to accommodate the stocks. In the case of bags made from largely intact animal skins, the stocks are typically tied into the points where the limbs and the head joined the body of the whole animal, a construction technique common in Central Europe. Different regions have different ways of treating the hide. The simplest methods involve just the use of salt, while more complex treatments involve milk, flour, and the removal of fur. The hide is normally turned inside out so that the fur is on the inside of the bag, as this helps to reduce the effect of moisture buildup within the bag.

Chanter

The chanter is the melody pipe, played with two hands. All bagpipes have at least one chanter; some pipes have two chanters, particularly those in North Africa, in the Balkans, and in Southwest Asia. A chanter can be bored internally so that the inside walls are parallel (or "cylindrical") for its full length, or it can be bored in a conical shape. Popular woods include boxwood, cornel, plum or other fruit wood.

The chanter is usually open-ended, so there is no easy way for the player to stop the pipe from sounding. Thus most bagpipes share a constant legato sound with no rests in the music. Primarily because of this inability to stop playing, technical movements are made to break up notes and to create the illusion of articulation and accents. Because of their importance, these embellishments (or "ornaments") are often highly technical systems specific to each bagpipe, and take many years of study to master. A few bagpipes (such as the musette de cour, the uilleann pipes, the Northumbrian smallpipes, the piva and the left chanter of the surdelina) have closed ends or stop the end on the player's leg, so that when the player "closes" (covers all the holes), the chanter becomes silent.

A practice chanter is a chanter without bag or drones and has a much quieter reed, allowing a player to practice the instrument quietly and with no variables other than playing the chanter.

The term chanter is derived from the Latin cantare, or "to sing", much like the modern French verb meaning "to sing", chanter.

A distinctive feature of the gaida's chanter (which it shares with a number of other Eastern European bagpipes) is the "flea-hole" (also known as a mumbler or voicer, marmorka) which is covered by the index finger of the left hand. The flea-hole is smaller than the rest and usually consists of a small tube that is made out of metal or a chicken or duck feather. Uncovering the flea-hole raises any note played by a half step, and it is used in creating the musical ornamentation that gives Balkan music its unique character.

Some types of gaida can have a double bored chanter, such as the Serbian three-voiced gajde. It has eight fingerholes: the top four are covered by the thumb and the first three fingers of the left hand, then the four fingers of the right hand cover the remaining four holes.

Chanter reed

The note from the chanter is produced by a reed installed at its top. The reed may be a single (a reed with one vibrating tongue) or double reed (of two pieces that vibrate against each other). Double reeds are used with both conical- and parallel-bored chanters while single reeds are generally (although not exclusively) limited to parallel-bored chanters. In general, double-reed chanters are found in pipes of Western Europe while single-reed chanters appear in most other regions.

Reeds are made from cane (Arundo donax or Phragmites), bamboo, or elder. A more modern variant combines a body made of cotton phenolic (Hgw2082) with a clarinet reed cut to size to fit the body. This type of reed produces a louder sound and is less sensitive to humidity and temperature changes.

Drone

Most bagpipes have at least one drone, a pipe that generally is not fingered but rather produces a constant harmonizing note throughout play (usually the tonic note of the chanter). Exceptions are generally those pipes that have a double-chanter instead. A drone is most commonly a cylindrically bored tube with a single reed, although drones with double reeds exist. The drone is generally designed in two or more parts with a sliding joint so that the pitch of the drone can be adjusted.

Depending on the type of pipes, the drones may lie over the shoulder, across the arm opposite the bag, or may run parallel to the chanter. Some drones have a tuning screw, which effectively alters the length of the drone by opening a hole, allowing the drone to be tuned to two or more distinct pitches. The tuning screw may also shut off the drone altogether. In most types of pipes with one drone, it is pitched two octaves below the tonic of the chanter. Additional drones often add the octave below and then a drone consonant with the fifth of the chanter.
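
These interval relationships are easy to express numerically. The sketch below assumes a hypothetical chanter tonic of 466 Hz (roughly the B-flat region; real chanters vary) and shows where the drones described above would sit, along with the half-step ratio relevant to ornaments such as the gaida's flea-hole:

```python
# Sketch: drone pitches relative to an assumed chanter tonic.
# TONIC_HZ is a hypothetical example value, not a specification.

TONIC_HZ = 466.0  # assumed chanter tonic

drones = {
    "bass drone, two octaves below the tonic": TONIC_HZ / 4,
    "drone one octave below the tonic": TONIC_HZ / 2,
    # One plausible placement of the 'fifth' drone: a perfect fifth
    # (3:2) above the tonic, dropped an octave.
    "drone on the fifth (taken an octave down)": TONIC_HZ * 3 / 4,
}

for name, freq in drones.items():
    print(f"{name}: {freq:.1f} Hz")

# A half-step raise (e.g. the flea-hole) corresponds to one
# equal-tempered semitone, a frequency ratio of 2**(1/12) ~ 1.0595.
print(f"tonic raised a half step: {TONIC_HZ * 2 ** (1 / 12):.1f} Hz")
```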


#2485 2025-03-11 00:27:28

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 50,097

Re: Miscellany

2385) Grass

Gist

Grass plants develop fruits called grains, which feed much of the world; their green leaves and stems, though not digestible for humans, are the main food source for grazing animals. Grasses can also be used for building materials, medicines, and biomass fuels.

What is grass?

Grass is any of a large family (Gramineae or Poaceae) of monocotyledonous plants having narrow leaves, hollow stems, and clusters of very small, usually wind-pollinated flowers. Grasses include many varieties of plants grown for food, fodder, and ground cover. Wheat, maize, sugar cane, and bamboo are grasses.


Summary

Grass is any of many low, green, nonwoody plants belonging to the grass family (Poaceae), the sedge family (Cyperaceae), and the rush family (Juncaceae). There are many grasslike members of other flowering plant families, but only the approximately 10,000 species in the family Poaceae are true grasses.

They are economically the most important of all flowering plants because of their nutritious grains and soil-forming function, and they have the most-widespread distribution and the largest number of individuals. Grasses provide forage for grazing animals, shelter for wildlife, construction materials, furniture, utensils, and food for humans. Some species are grown as garden ornamentals, cultivated as turf for lawns and recreational areas, or used as cover plants for erosion control. Most grasses have round stems that are hollow between the joints, bladelike leaves, and extensively branching fibrous root systems.

Details

Poaceae,  also called Gramineae, is a large and nearly ubiquitous family of monocotyledonous flowering plants commonly known as grasses. It includes the cereal grasses, bamboos, the grasses of natural grassland and species cultivated in lawns and pasture. The latter are commonly referred to collectively as grass.

With around 780 genera and around 12,000 species, the Poaceae is the fifth-largest plant family, following the Asteraceae, Orchidaceae, Fabaceae and Rubiaceae.

The Poaceae are the most economically important plant family, providing staple foods from domesticated cereal crops such as maize, wheat, rice, oats, barley, and millet for people, and feed for meat-producing animals. Through direct human consumption they provide just over one-half (51%) of all dietary energy; rice provides 20%, wheat 20%, maize (corn) 5.5%, and other grains 6%. Some members of the Poaceae are used as building materials (bamboo, thatch, and straw); others can provide a source of biofuel, primarily via the conversion of maize to ethanol.

Grasses have stems that are hollow except at the nodes and narrow alternate leaves borne in two ranks. The lower part of each leaf encloses the stem, forming a leaf-sheath. The leaf grows from the base of the blade, an adaptation allowing it to cope with frequent grazing.

Grasslands such as savannah and prairie where grasses are dominant are estimated to constitute 40.5% of the land area of the Earth, excluding Greenland and Antarctica. Grasses are also an important part of the vegetation in many other habitats, including wetlands, forests and tundra.

Though they are commonly called "grasses", groups such as the seagrasses, rushes and sedges fall outside this family. The rushes and sedges are related to the Poaceae, being members of the order Poales, but the seagrasses are members of the order Alismatales. However, all of them belong to the monocot group of plants.

Description

Grasses may be annual or perennial herbs, generally with the following characteristics: The stems of grasses, called culms, are usually cylindrical (more rarely flattened, but not 3-angled) and are hollow, plugged at the nodes, where the leaves are attached. Grass leaves are nearly always alternate and distichous (in one plane), and have parallel veins. Each leaf is differentiated into a lower sheath hugging the stem and a blade with entire (i.e., smooth) margins. The leaf blades of many grasses are hardened with silica phytoliths, which discourage grazing animals; some, such as sword grass, are sharp enough to cut human skin. A membranous appendage or fringe of hairs called the ligule lies at the junction between sheath and blade, preventing water or insects from penetrating into the sheath.

Flowers of Poaceae are characteristically arranged in spikelets, each having one or more florets. The spikelets are further grouped into panicles or spikes. The part of the spikelet that bears the florets is called the rachilla. A spikelet consists of two (or sometimes fewer) bracts at the base, called glumes, followed by one or more florets.  A floret consists of the flower surrounded by two bracts, one external—the lemma—and one internal—the palea. The flowers are usually hermaphroditic—maize being an important exception—and mainly anemophilous or wind-pollinated, although insects occasionally play a role. The perianth is reduced to two scales, called lodicules,  that expand and contract to spread the lemma and palea; these are generally interpreted to be modified sepals. The fruit of grasses is a caryopsis, in which the seed coat is fused to the fruit wall. A tiller is a leafy shoot other than the first shoot produced from the seed.

Growth and development

Grass blades grow at the base of the blade and not from elongated stem tips. This low growth point evolved in response to grazing animals and allows grasses to be grazed or mown regularly without severe damage to the plant.

Three general classifications of growth habit are present in grasses: bunch-type (also called caespitose), stoloniferous, and rhizomatous. The success of the grasses lies in part in their morphology and growth processes and in part in their physiological diversity. There are both C3 and C4 grasses, referring to the photosynthetic pathway for carbon fixation. The C4 grasses have a photosynthetic pathway, linked to specialized Kranz leaf anatomy, that allows for increased water-use efficiency, rendering them better adapted to hot, arid environments.

The C3 grasses are referred to as "cool-season" grasses, while the C4 plants are considered "warm-season" grasses.

* Annual cool-season – wheat, rye, annual bluegrass (annual meadowgrass, Poa annua), and oat
* Perennial cool-season – orchardgrass (cocksfoot, Dactylis glomerata), fescue (Festuca spp.), Kentucky bluegrass and perennial ryegrass (Lolium perenne)
* Annual warm-season – maize, sudangrass, and pearl millet
* Perennial warm-season – big bluestem, Indiangrass, Bermudagrass and switchgrass.

Although the C4 species are all in the PACMAD clade, it seems that various forms of C4 have arisen some twenty or more times, in various subfamilies or genera. In the Aristida genus for example, one species (A. longifolia) is C3 but the approximately 300 other species are C4. As another example, the whole tribe of Andropogoneae, which includes maize, sorghum, sugar cane, "Job's tears", and bluestem grasses, is C4. Around 46 percent of grass species are C4 plants.

Distribution

The grass family is one of the most widely distributed and abundant groups of plants on Earth. Grasses are found on every continent, including Antarctica. The Antarctic hair grass, Deschampsia antarctica, is one of only two flowering plant species native to the western Antarctic Peninsula.

Additional Information

Grass is a type of plant with narrow leaves growing from the base. Grasses became common in the mid-Cretaceous period. There are about 12,000 species today.

A common kind of grass is used to cover the ground in places such as lawns and parks. Grass is usually green. Because grasses are wind-pollinated rather than insect-pollinated, they do not need colourful flowers to attract insects; their green colour comes from the chlorophyll used in photosynthesis.

Grasslands such as savannah and prairie are where grasses are dominant. They cover 40.5% of the land area of the Earth, but not Greenland and Antarctica.

Grasses are monocotyledonous herbaceous plants. They include the true grasses of the family Poaceae (also called the Gramineae), which are what ordinary people call grass, together with the grass-like sedges (Cyperaceae) and rushes (Juncaceae). These three families are not very closely related, though all of them belong to clades in the order Poales; their similarities are adaptations to a similar life-style.

With about 780 genera and about 12,000 species, the Poaceae is the fifth-largest plant family. Only the Asteraceae, Orchidaceae, Fabaceae and Rubiaceae have more species.

The true grasses include cereals, bamboo and the grasses of lawns (turf) and grassland. Uses for graminoids include food (as grain, shoots or rhizomes), drink (beer, whisky), pasture for livestock, thatch, paper, fuel, clothing, insulation, construction, basket weaving and many others.

Many grasses are short, but some grasses can grow tall, such as bamboo. Plants from the grass family can grow in many places and make grasslands, including areas that are very dry or cold. There are several other plants that look similar to grass and are referred to as such but are not members of the grass family. These plants include rushes, reeds, papyrus and water chestnut. Seagrass is a monocot in the order Alismatales.

Grasses are an important food for many animals, such as deer, buffalo, cattle, mice, grasshoppers, caterpillars and many other grazers. Unlike other plants, grasses grow from the bottom, so when animals eat grass, they usually do not destroy the part that grows. This is part of the reason why the plants are so successful.

Without grass, more soil might wash away into rivers (erosion).

Evolution of grass

Grasses include some of the most versatile plant life-forms. They became widespread toward the end of the Cretaceous. Fossilized dinosaur dung (coprolites) has been found containing grass phytoliths (silica stones that form inside grass leaves). Grasses have adapted to conditions in lush rain forests, dry deserts, cold mountains and even intertidal habitats, and are now the most widespread plant type. Grass is a valuable source of food and energy for many animals.

Grass and people

Lawn grass is often planted on sports fields and in the area around a building. Sometimes chemicals and water are used to help lawns to grow.

People have used grasses for a long time. People eat parts of grasses. Corn, wheat, barley, oats, rice and millet are cereals, common grains whose seeds are used for food and to make alcohol such as beer.

Sugar comes from sugar cane, which is also a plant in the grass family. People have grown grasses as food for farm animals for about 4,000 years. People use bamboo to build houses, fences, furniture and other things. Grass plants can also be used as fuel, to cover roofs, and to weave baskets.


#2486 2025-03-11 20:44:33

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 50,097

Re: Miscellany

2386) Archipelago

Gist

An archipelago is an area that contains a chain or group of islands scattered in lakes, rivers, or the ocean. West of British Columbia, Canada, and south of the Yukon Territory, the southeastern coastline of Alaska trails off into the islands of the Alexander Archipelago.

The largest archipelago in the world was formed by glacial retreat. The Malay Archipelago, between the Pacific and Indian Oceans, contains more than 25,000 islands in Southeast Asia. The thousands of islands of Indonesia and Malaysia are a part of the Malay Archipelago.

Summary

The word “archipelago” comes from the medieval Italian word archi, meaning chief or principal, and the Greek word pelagus, meaning gulf, pool, or pond.

Most archipelagos are formed when volcanoes erupt from the ocean floor; these are called oceanic islands. The islands of the Hawaiian archipelago, for example, were formed by a series of volcanic eruptions that began more than 80 million years ago and continues today.

Archipelagos can also form as a result of erosion, sedimentary deposits, rising sea level, and other geographic processes. The Florida Keys are an example of a coral cay archipelago, which form when ocean currents transport sediments that gradually build up on the reef surface.

Continental fragments are archipelagos that have separated from a continental land mass due to the Earth’s tectonic movements. The Farallon Islands off the coast of California are an example of continental fragments.

Continental archipelagos, such as British Columbia’s Inside Passage, are islands that form close to the coast of a continent.

Details

An archipelago, sometimes called an island group or island chain, is a chain, cluster, or collection of islands. An archipelago may be in an ocean, a sea, or a smaller body of water. Example archipelagos include the Aegean Islands (the origin of the term), the Canadian Arctic Archipelago, the Stockholm Archipelago, the Malay Archipelago (which includes the Indonesian and Philippine Archipelagos), the Lucayan (Bahamian) Archipelago, the Japanese archipelago, and the Hawaiian archipelago.

Etymology

The word archipelago is derived from the Italian arcipelago, used as a proper name for the Aegean Sea, itself perhaps a deformation of the Greek Αιγαίον. Later, usage shifted to refer to the Aegean Islands (since the sea has a large number of islands). The erudite paretymology deriving the word from Ancient Greek arkhi- ("chief") and pélagos ("sea"), proposed by Buondelmonti, can still be found here and there.

Geographic types

Archipelagos may be found isolated in large amounts of water or neighboring a large land mass. For example, Scotland has more than 700 islands surrounding its mainland, which form an archipelago.

Depending on their geological origin, islands forming archipelagos can be referred to as oceanic islands, continental fragments, or continental islands.

Oceanic islands

Oceanic islands are formed by volcanoes erupting from the ocean floor. The Hawaiian Islands and Galapagos Islands in the Pacific, and Mascarene Islands in the south Indian Ocean are examples.

Continental fragments

Continental fragments are islands that were once part of a continent and became separated from it by the Earth's tectonic movements. The fragments may also be formed by moving glaciers that cut out land, which then fills with water. The Farallon Islands off the coast of California are examples of continental fragments.

Continental Islands

Continental islands are islands that were once part of a continent and still sit on the continental shelf, which is the edge of a continent that lies under the ocean. The islands of the Inside Passage off the coast of British Columbia and the Canadian Arctic Archipelago are examples.

Artificial archipelagos

Artificial archipelagos have been created in various countries for different purposes. Palm Islands and The World Islands in Dubai were or are being created for leisure and tourism purposes. Marker Wadden in the Netherlands is being built as a conservation area for birds and other wildlife.

Superlatives

The largest archipelago in the world by number of islands is the Archipelago Sea, which is part of Finland. There are approximately 40,000 islands, mostly uninhabited.

The largest archipelagic state in the world by area, and by population, is Indonesia.

Additional Information

An archipelago is a group of islands closely scattered in a body of water. Usually, this body of water is the ocean, but it can also be a lake or river.

Most archipelagoes are made of oceanic islands. This means the islands were formed by volcanoes erupting from the ocean floor. An archipelago made up of oceanic islands is called an island arc.

Many island arcs were formed over a single “hot spot.” The Earth’s crust shifted while the hot spot stayed put, creating a line of islands that show exactly the direction the crust moved.

The Hawaiian Islands continue to form this way, with a hot spot remaining relatively stable while the Pacific tectonic plate moves northwest. There are 137 Hawaiian islands, reefs and atolls, stretching from Kure and Midway in the west to the "Big Island" of Hawaii in the east. The Big Island is still being formed by the active volcanoes Mauna Loa and Kilauea. The island arc will grow as Loihi, a seamount southeast of the Big Island, eventually punctures the ocean surface as Hawaii's youngest island.
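
The chain invites a back-of-the-envelope check on how fast the plate moves. In the sketch below, the plate speed and island distance are assumed round figures for illustration (the Pacific plate moves on the order of 8 cm per year, and Kauai lies several hundred kilometres northwest of the hot spot):

```python
# Sketch: dating an island in a hot-spot chain from plate motion.
# Both inputs are assumed round figures, not measured values.

plate_speed_cm_per_yr = 8.0  # assumed Pacific plate speed over the hot spot
distance_km = 500.0          # assumed distance of Kauai from the hot spot

age_years = (distance_km * 1e5) / plate_speed_cm_per_yr  # km -> cm first
print(f"approximate age: {age_years / 1e6:.1f} million years")  # ~6.3
```

The result, a few million years, matches the order of magnitude geologists report for Kauai, whose rocks are dated to roughly five million years.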

Japan is another island arc. The Japanese archipelago consists of four large islands, from Hokkaido, in the far north, through Honshu, Shikoku, and Kyushu in the far south. Japan also includes more than 3,000 smaller islands. In several places in the Japanese archipelago, volcanoes are still active.

Volcanoes do not form all archipelagoes. Many archipelagoes are continental islands formed only after the last ice age. As glaciers retreated, sea levels rose and low-lying valleys were flooded. Coastal mountain ranges became archipelagoes just off the mainland.

The largest archipelago in the world was formed by glacial retreat. The Malay Archipelago, between the Pacific and Indian Oceans, contains more than 25,000 islands in Southeast Asia. The thousands of islands of Indonesia and Malaysia are a part of the Malay Archipelago. At least some of these islands—and the straits that separate them—were part of mainland Asia during the last ice age.

Finland's Archipelago Sea, part of the Baltic Sea, also emerged after the last ice age. There are more than 50,000 islands in the Archipelago Sea, although many of them measure less than half a hectare (about 1.2 acres). Some of the islands are close enough to be connected by bridges.

Islands of the Archipelago Sea were never coastal mountaintops, however. They were formed by post-glacial rebound. In this process, land that was squashed by the weight of heavy glaciers during the Ice Age slowly regains its shape, like a sponge. Because post-glacial rebound is still occurring, islands continue to rise from the Archipelago Sea.
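
The timescale is easy to estimate (a sketch; the uplift rate below is an assumed illustrative figure, though rates of a few millimetres per year are typical of the region):

```python
# Sketch: how long post-glacial rebound takes to expose a new island.

uplift_mm_per_yr = 5.0  # assumed uplift rate, mm per year
shoal_depth_m = 1.0     # a shoal one metre below the surface

years = shoal_depth_m * 1000.0 / uplift_mm_per_yr
print(f"the shoal surfaces in roughly {years:.0f} years")  # about 200
```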


#2487 2025-03-12 00:12:54

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 50,097

Re: Miscellany

2387) Juggler/Juggling

Gist

A juggler (from Latin joculare, "to jest") is an entertainer who specializes in balancing and in feats of dexterity in tossing and catching items such as balls, plates, and knives.

Summary

A juggler (from Latin joculare, "to jest") is an entertainer who specializes in balancing and in feats of dexterity in tossing and catching items such as balls, plates, and knives. Its French linguistic equivalent, jongleur, signifies much more than just juggling, though some of the jongleurs may have turned to juggling when their original role fell out of fashion.

Juggling was a highly developed art long before the medieval period, according to evidence found in ancient Egyptian, Greek, and Roman sculpture, coins, and manuscripts. Comparison with these ancient records reveals that, although juggling has advanced in technical perfection, the underlying principles are still the same. In an early manuscript, for example, a bear is shown standing on its hind legs and juggling with three knives. (A similar feat is performed in the modern Russian circus with the bear lying in a small cradle and juggling a flaming torch with its hind legs).

In the 17th and 18th centuries the juggler found a living in the fairs, but it was not until the 19th century that jugglers came into their own in the circus and in the music hall. These new fields provided a unique training ground for fresh talent and before long had produced such outstanding artists as Severus Scheffer, Kara, Paul Cinquevalli, and Enrico Rastelli (who could juggle with 10 balls, an almost miraculous accomplishment in the juggling world). Juggling large numbers of balls remains a popular activity, as do a variety of specialties, such as juggling blindfolded, on horseback, on a perch or high wire, or, as done by Rudy Horn, on a unicycle.

Details

Juggling is a physical skill, performed by a juggler, involving the manipulation of objects for recreation, entertainment, art or sport. The most recognizable form of juggling is toss juggling. Juggling can be the manipulation of one object or many objects at the same time, most often using one or two hands but other body parts as well, like feet or head. Jugglers often refer to the objects they juggle as props. The most common props are balls, clubs, or rings. Some jugglers use more dramatic objects such as knives, fire torches or chainsaws. The term juggling can also commonly refer to other prop-based manipulation skills, such as diabolo, plate spinning, devil sticks, poi, cigar boxes, contact juggling, hooping, yo-yo, hat manipulation and kick-ups.

Etymology

The words juggling and juggler derive from the Middle English jogelen ("to entertain by performing tricks"), which in turn is from the Old French jangler. There is also the Late Latin form joculare of Latin joculari, meaning "to jest". Although the etymology of the terms juggler and juggling in the sense of manipulating objects for entertainment originates as far back as the 11th century, the current sense of to juggle, meaning "to continually toss objects in the air and catch them", originates from the late 19th century.

From the 12th to the 17th century, juggling and juggler were the terms most consistently used to describe acts of magic, though some have called the term juggling a lexicographical nightmare, stating that it is one of the least understood relating to magic. In the 21st century, the term juggling usually refers to toss juggling, where objects are continuously thrown into the air and caught again, repeating in a rhythmical pattern.

According to James Ernest in his book Contact Juggling, most people will describe juggling as "throwing and catching things"; however, a juggler might describe the act as "a visually complex or physically challenging feat using one or more objects". David Levinson and Karen Christensen describe juggling as "the sport of tossing and catching or manipulating objects [...] keeping them in constant motion". "Juggling, like music, combines abstract patterns and mind-body coordination in a pleasing way."

Origins and history:

Ancient to 20th century

The earliest record of juggling is suggested in a panel from tomb 15 at Beni Hasan, the tomb of an unknown Egyptian prince dating to about 1994-1781 B.C., which shows female dancers and acrobats throwing balls. Juggling has been recorded in many early cultures including Egyptian, Nabataean, Chinese, Indian, Greek, Roman, Norse, Aztec (Mexico) and Polynesian civilizations.

Juggling in ancient China was an art performed by some warriors. One such warrior was Xiong Yiliao, whose juggling of nine balls in front of troops on a battlefield reportedly caused the opposing troops to flee without fighting, resulting in a complete victory.

In Europe, juggling was an acceptable diversion until the decline of the Roman Empire, after which the activity fell into disgrace. Throughout the Middle Ages, most histories were written by religious clerics who frowned upon the type of performers who juggled, called gleemen, accusing them of base morals or even practicing witchcraft. Jugglers in this era would only perform in marketplaces, streets, fairs, or drinking houses. They would perform short, humorous and bawdy acts and pass a hat or bag among the audience for tips. Some kings' and noblemen's bards, fools, or jesters would have been able to juggle or perform acrobatics, though their main skills would have been oral (poetry, music, comedy and storytelling).

In 1768, Philip Astley opened the first modern circus. A few years later, he employed jugglers to perform acts along with the horse and clown acts. Since then, jugglers have been associated with circuses.

In the early 19th century, troupes from Asia, such as the famous "Indian Jugglers" referred to by William Hazlitt, arrived to tour Britain, Europe and parts of America.

In the 19th century, variety and music hall theatres became more popular, and jugglers were in demand to fill time between music acts, performing in front of the curtain while sets were changed. Performers started specializing in juggling, separating it from other kinds of performance such as sword swallowing and magic. The Gentleman Juggler style was established by German jugglers such as Salerno and Kara. Rubber processing developed, and jugglers started using rubber balls. Previously, juggling balls were made from balls of twine, stuffed leather bags, wooden spheres, or various metals. Solid or inflatable rubber balls meant that bounce juggling was possible. Inflated rubber balls made ball spinning easier and more readily accessible. Soon in North America, vaudeville theatres employed jugglers, often hiring European performers.

20th century

In the early to mid-20th century, variety and vaudeville shows decreased in popularity due to competition from motion picture theatres, radio and television, and juggling suffered as a result. Music and comedy transferred very easily to radio, but juggling could not. In the early years of TV, when variety-style programming was popular, jugglers were often featured; but developing a new act for each new show, week after week, was more difficult for jugglers than other types of entertainers; comedians and musicians can pay others to write their material, but jugglers cannot get other people to learn new skills on their behalf.

The International Jugglers' Association, founded in 1947, began as an association for professional vaudeville jugglers, but restrictions for membership were eventually changed, and non-performers were permitted to join and attend the annual conventions. The IJA continues to hold an annual convention each summer and runs a number of other programs dedicated to advancing the art of juggling worldwide.

World Juggling Day was created as an annual day of recognition for the hobby, with the intent to teach people how to juggle, to promote juggling and to get jugglers together and celebrate. It is held on the Saturday in June closest to the 17th, the founding date of the International Jugglers' Association.

Most cities and large towns now have juggling clubs. These are often based within, or connected to, universities and colleges. There are also community circus groups that teach young people and put on shows. The Juggling Edge maintains a searchable database of most juggling clubs.

Since the 1980s, a juggling culture has developed. The scene revolves around local clubs and organizations, special events, shows, magazines, web sites, internet forums and, possibly most importantly, juggling conventions. In recent years, there has also been a growing focus on juggling competitions. Juggling today has evolved and branched out to the point where it is synonymous with all prop manipulation. The wide variety of the juggling scene can be seen at any juggling convention.

Juggling conventions or festivals form the backbone of the juggling scene. The focus of most of these conventions is the main space used for open juggling. There will also be more formal workshops in which expert jugglers will work with small groups on specific skills and techniques. Most juggling conventions also include a main show (open to the general public), competitions, and juggling games.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2488 2025-03-13 00:04:38

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 50,097

Re: Miscellany

2388) Electrician

Gist

An electrician is a tradesperson specializing in electrical wiring of buildings, transmission lines, stationary machines, and related equipment. Electricians may be employed in the installation of new electrical components or the maintenance and repair of existing electrical infrastructure.

Summary

Electricians work with electrical power. They install, test, and maintain wiring, lighting systems, and fixtures in homes and businesses.

Electricians work indoors and outdoors. They work in new construction and in existing buildings. Electricians should have a basic understanding of mathematics, including algebra, and good problem-solving skills. Experienced electricians can work with architects and help design electrical systems for new buildings. Electricians also work with alternative energy sources that turn wind or sunlight into electricity.

It takes commitment to gain the proper training to become an electrician. Electricians need a license to work in most states. Most electricians earn their licenses by participating in an apprenticeship program, in which a student is taught the trade through hands-on experience under a master. This program lasts about 4–5 years. In addition to working on projects during this time, the apprentice must also take hundreds of hours of technical courses.

Details

An electrician is a tradesperson specializing in electrical wiring of buildings, transmission lines, stationary machines, and related equipment. Electricians may be employed in the installation of new electrical components or the maintenance and repair of existing electrical infrastructure. Electricians may also specialize in wiring ships, airplanes, and other mobile platforms, as well as data and cable lines.

Terminology

Electricians were originally people who demonstrated or studied the principles of electricity, often electrostatic generators of one form or another.

In the United States, electricians are divided into two primary categories: linepersons, who work on electric utility company distribution systems at higher voltages, and wiremen, who work with the lower voltages utilized inside buildings. Wiremen are generally trained in one of five primary specialties: commercial, residential, light industrial, industrial, and low-voltage wiring, more commonly known as Voice-Data-Video, or VDV. Other sub-specialties such as control wiring and fire-alarm work may be performed by specialists trained in the devices being installed, or by inside wiremen.

Electricians are trained to one of three levels: Apprentice, Journeyperson, and Master Electrician. In the US and Canada, apprentices work and receive a reduced compensation while learning their trade. They generally take several hundred hours of classroom instruction and are contracted to follow apprenticeship standards for a period of between three and six years, during which time they are paid as a percentage of the Journeyperson's pay. Journeymen are electricians who have completed their Apprenticeship and who have been found by the local, State, or National licensing body to be competent in the electrical trade. Master Electricians have performed well in the trade for a period of time, often seven to ten years, and have passed an exam to demonstrate superior knowledge of the National Electrical Code, or NEC.

Service electricians are tasked to respond to requests for isolated repairs and upgrades. They are skilled in troubleshooting wiring problems, installing wiring in existing buildings, and making repairs. Construction electricians primarily focus on larger projects, such as installing an all-new electrical system for an entire building, or upgrading an entire floor of an office building as part of a remodeling process. Other specialty areas are marine electricians, research electricians and hospital electricians. "Electrician" is also used as the name of a role in stagecraft, where electricians are tasked primarily with hanging, focusing, and operating stage lighting. In this context, the Master Electrician is the show's chief electrician. Although theater electricians routinely perform electrical work on stage lighting instruments and equipment, they are not part of the electrical trade and have a different set of skills and qualifications from the electricians who work on building wiring.

In the film industry and on a television crew the head electrician is referred to as a Gaffer.

Electrical contractors are businesses that employ electricians to design, install, and maintain electrical systems. Contractors are responsible for generating bids for new jobs, hiring tradespeople for the job, providing material to electricians in a timely manner, and communicating with architects, electrical and building engineers, and the customer to plan and complete the finished product.

Training and regulation of trade

Many jurisdictions have regulatory restrictions concerning electrical work for safety reasons due to the many hazards of working with electricity. Such requirements may be testing, registration or licensing. Licensing requirements vary between jurisdictions.

Australia

An electrician's license entitles the holder to carry out all types of electrical installation work in Australia without supervision. However, to contract, or offer to contract, to carry out electrical installation work, a licensed electrician must also be registered as an electrical contractor. Under Australian law, electrical work that involves fixed wiring is strictly regulated and must almost always be performed by a licensed electrician or electrical contractor. A local electrician can handle a range of work including air conditioning, light fittings and installation, safety switches, smoke alarm installation, inspection and certification and testing and tagging of electrical appliances.

To provide data, structured cabling systems, home automation & theatre, LAN, WAN and VPN data solutions or phone points, an installer must be licensed as a Telecommunications Cable Provider under a scheme controlled by the Australian Communications and Media Authority.

Electrical licensing in Australia is regulated by the individual states. In Western Australia, the Department of Commerce tracks licensees and allows the public to search for individually named licensed electricians.

Currently in Victoria the apprenticeship lasts for four years; during three of those years the apprentice attends trade school, either in a block release of one week each month or one day each week. At the end of the apprenticeship the apprentice is required to pass three examinations, one of which is theory based and the other two practically based. Upon successful completion of these exams, provided all other components of the apprenticeship are satisfactory, the apprentice is granted an A Class licence on application to Energy Safe Victoria (ESV).

An A Class electrician may perform work unsupervised but is unable to work for profit or gain without having the further qualifications necessary to become a Registered Electrical Contractor (REC) or being in the employment of a person holding REC status. However, some exemptions do exist.

In most cases a certificate of electrical safety must be submitted to the relevant body after any electrical works are performed.

Safety equipment used and worn by electricians in Australia (including insulated rubber gloves and mats) needs to be tested regularly to ensure it is still protecting the worker. Because of the high risk involved in this trade, this testing needs to be performed regularly, and regulations vary according to state. Industry best practice follows the Queensland Electrical Safety Act 2002, which requires six-monthly testing.

Canada

Training of electricians follows an apprenticeship model, taking four or five years to progress to fully qualified journeyperson level. Typical apprenticeship programs consist of 80–90% hands-on work under the supervision of journeypersons and 10–20% classroom training. Training and licensing of electricians is regulated by each province; however, professional licenses are valid throughout Canada under the Agreement on Internal Trade. An endorsement under the Red Seal Program provides additional competency assurance to industry standards. To become licensed electricians, individuals need 9,000 hours of practical, on-the-job training. They also need to attend school for four terms and pass a provincial exam. This training enables them to become journeyperson electricians. Furthermore, in British Columbia, an individual can go a step beyond that and become an "FSR", or field safety representative. This credential gives the ability to become a licensed electrical contractor and to pull permits. Notwithstanding this, some Canadian provinces (e.g., Alberta) only grant permit-pulling privileges to current Master Electricians, that is, journeypersons who have been engaged in the industry for three years and have passed the Master's examination. The various levels of field safety representative are A, B and C; the only difference between the classes is that each is able to do increasingly higher voltage and current work.

United Kingdom

The two qualification awarding organisations are City and Guilds and EAL. Electrical competence is required at Level 3 to practice as a 'qualified electrician' in the UK. Once qualified and demonstrating the required level of competence an Electrician can apply to register for a Joint Industry Board Electrotechnical Certification Scheme card in order to work on building sites or other controlled areas.

Although partly covered during Level 3 training, more in-depth knowledge and qualifications can be obtained covering subjects such as Design and Verification or Testing and Inspection, among others. These additional qualifications can be listed on the reverse of the JIB card. Beyond this level is additional training and qualifications such as EV charger installations, or training and working in specialist areas such as street furniture or within industry.

The Electricity at Work Regulations are a statutory document that covers the use and proper maintenance of electrical equipment and installations within businesses and other organisations such as charities. Parts of the Building Regulations cover the legal requirements of the installation of electrical technical equipment, with Part P outlining most of the regulations covering dwellings.

Information regarding design, selection, installation and testing of electrical installations is provided in the non-statutory publication 'Requirements for Electrical Installations, IET Wiring Regulations, Eighteenth Edition, BS 7671:2018', otherwise known as the Wiring Regulations or 'Regs'. Amendments are published on an ad hoc basis when minor changes occur. The first major update of the 18th Edition was published during February 2020, mainly covering electric vehicle charger installations, although an addendum was published during December 2019 correcting some minor mistakes and adding some small changes. The IET also publishes a series of 'Guidance Notes' in book form that provide further in-depth knowledge.

With the exception of the work covered by Part P of the Building Regulations, such as installing consumer units, new circuits or work in bathrooms, there are no laws that prevent anyone from carrying out some basic electrical work in the UK.

In British English, an electrician is colloquially known as a "spark".

United States

Although many electricians work for private contractors, many electricians get their start in the military.

The United States does not offer nationwide licensing, and electrical licenses are issued by individual states. There are variations in licensing requirements; however, all states recognize three basic skill categories: apprentice, journeyperson, and master electrician. Journeyperson electricians can work unsupervised provided that they work according to a master's direction. Generally, states do not offer journeyperson permits, and journeyperson electricians and other apprentices can only work under permits issued to a master electrician. Apprentices may not work without direct supervision.

Before electricians can work unsupervised, they are usually required to serve an apprenticeship lasting three to five years under the general supervision of a master electrician and usually the direct supervision of a journeyperson electrician. Schooling in electrical theory and electrical building codes is required to complete the apprenticeship program. Many apprenticeship programs provide a salary to the apprentice during training. A journeyperson electrician is a classification of licensing granted to those who have met the experience requirements for on the job training (usually 4,000 to 6,000 hours) and classroom hours (about 144 hours). Requirements include completion of two to six years of apprenticeship training and passing a licensing exam.
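
As a rough back-of-envelope check (a sketch of my own, not from any licensing body), the on-the-job hour requirements quoted above translate into calendar years at a full-time pace of about 2,000 work hours per year:

    def years_of_ojt(required_hours, hours_per_year=2000.0):
        # Convert required on-the-job training hours to full-time years.
        return required_hours / hours_per_year

    print(years_of_ojt(4000))  # 2.0 years at the low end of the quoted range
    print(years_of_ojt(6000))  # 3.0 years at the high end

This is consistent with the three-to-five-year apprenticeships described above once classroom hours and scheduling are added.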

Reciprocity

An electrician's license is valid for work in the state where the license was issued. In addition, many states recognize licenses from other states, sometimes called interstate reciprocity participation, although there can be conditions imposed. For example, California reciprocates with Arizona, Nevada, and Utah on the condition that licenses are in good standing and have been held at the other state for five years. Nevada reciprocates with Arizona, California, and Utah. Maine reciprocates with New Hampshire and Vermont at the master level, and the state reciprocates with New Hampshire, North Dakota, Idaho, Oregon, Vermont, and Wyoming at the journeyperson level. Colorado maintains a journeyperson alliance with Alaska, Arkansas, the Dakotas, Idaho, Iowa, Minnesota, Montana, Nebraska, New Hampshire, New Mexico, Oklahoma, Utah, and Wyoming.
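
The reciprocity relationships above are essentially a lookup table. A minimal Python sketch follows; the dictionary simply transcribes the paragraph (at the journeyperson level where stated), conditions such as California's five-year good-standing rule are omitted, and the names are illustrative only:

    # Issuing state -> states whose licenses it recognizes, per the
    # paragraph above; attached conditions are omitted for brevity.
    RECIPROCITY = {
        "California": {"Arizona", "Nevada", "Utah"},
        "Nevada": {"Arizona", "California", "Utah"},
        "Maine": {"New Hampshire", "North Dakota", "Idaho", "Oregon",
                  "Vermont", "Wyoming"},  # journeyperson level
        "Colorado": {"Alaska", "Arkansas", "North Dakota", "South Dakota",
                     "Idaho", "Iowa", "Minnesota", "Montana", "Nebraska",
                     "New Hampshire", "New Mexico", "Oklahoma", "Utah",
                     "Wyoming"},
    }

    def recognizes(working_state, license_state):
        # True if working_state recognizes a license issued by license_state.
        return license_state in RECIPROCITY.get(working_state, set())

    print(recognizes("Nevada", "Utah"))   # True
    print(recognizes("Nevada", "Texas"))  # False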

Tools

Electricians use a range of hand and power tools and instruments.

Two of the tools commonly used by electricians are the fish tape, used to pull conductors through conduits or sometimes through hollow walls, and the conduit bender, used to make accurate bends and offsets in electrical conduit.
Some of the more common tools are:

* Conduit Bender: Bender used to bend various types of Electrical Conduit. These come in many variations including hand, electrical, and hydraulic powered.
* Lineperson's Pliers: Heavy-duty pliers for general use in cutting, bending, crimping and pulling wire.
* Diagonal Pliers (also known as side cutters or Dikes): Pliers consisting of cutting blades for use on smaller gauge wires, but sometimes also used as a gripping tool for removal of nails and staples.
* Needle-Nose Pliers: Pliers with a long, tapered gripping nose of various size, with or without cutters, generally smaller and for finer work (including very small tools used in electronics wiring).
* Wire Strippers: Plier-like tool available in many sizes and designs featuring special blades to cut and strip wire insulation while leaving the conductor wire intact and without nicks. Some wire strippers include cable strippers among their multiple functions, for removing the outer cable jacket.
* Cable Cutters: Highly leveraged pliers for cutting larger cable.
* Armored Cable Cutters: Commonly referred to by the trademark 'Roto-Split', this is a tool used to cut the metal sleeve on MC (metal-clad) cable.
* Multimeter: An instrument for electrical measurement with multiple functions. It is available as analog or digital display. Common features include: voltage, resistance, and current. Some models offer additional functions.
* Unibit or Step-Bit: A metal-cutting drill bit with stepped-diameter cutting edges to enable convenient drilling holes in preset increments in stamped/rolled metal up to about 1.6mm (1/16 inch) thick. Commonly used to create custom knock-outs in a breaker panel or junction box.
* Cord, Rope or Fish Tape. Used to manipulate cables and wires through cavities. The fishing tool is pushed, dropped, or shot into the installed raceway, stud-bay or joist-bay of a finished wall or in a floor or ceiling. Then the wire or cable is attached and pulled back.
* Crimping Tools: Used to apply terminals or splices. These may be hand or hydraulic powered. Some hand tools have ratchets to ensure proper pressure. Hydraulic units achieve cold welding, even for aluminum cable.
* Insulation Resistance Tester: Commonly referred to as a Megger, these testers apply several hundred to several thousand volts to cables and equipment to determine the insulation resistance value (see the worked sketch after this list).
* Knockout Punch: For punching holes into boxes, panels, switchgear, etc. for inserting cable & pipe connectors.
* GFI/GFCI Testers: Used to test the functionality of Ground-Fault Interrupting receptacles.
* Voltmeter: An electrician's tool used to measure electrical potential difference between two points in an electric circuit.
* Other general-use tools include screwdrivers, hammers, reciprocating saws, drywall saws, flashlights, chisels, tongue and groove pliers (commonly referred to as 'Channellock' pliers, after a well-known manufacturer of this tool) and drills.
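
Since the insulation-resistance figure a megger reports is just Ohm's law applied to the test voltage and the tiny leakage current it measures, here is a minimal illustrative sketch (the values and function name are my own, not a standard):

    def insulation_resistance_megohms(test_voltage_v, leakage_current_a):
        # Ohm's law, R = V / I, converted from ohms to megohms.
        return test_voltage_v / leakage_current_a / 1e6

    # Hypothetical example: a 500 V test driving 0.25 microamps of leakage.
    print(insulation_resistance_megohms(500.0, 0.25e-6))  # 2000.0 megohms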

Safety

In addition to the workplace hazards generally faced by industrial workers, electricians are also particularly exposed to injury by electricity. An electrician may experience electric shock due to direct contact with energized circuit conductors or due to stray voltage caused by faults in a system. An electric arc exposes eyes and skin to hazardous amounts of heat and light. Faulty switchgear may cause an arc flash incident with a resultant blast. Electricians are trained to work safely and take many measures to minimize the danger of injury. Lockout and tagout procedures are used to make sure that circuits are proven to be de-energized before work is done. Limits of approach to energized equipment protect against arc flash exposure; specially designed flash-resistant clothing provides additional protection; grounding (earthing) clamps and chains are used on line conductors to provide a visible assurance that a conductor is de-energized. Personal protective equipment provides electrical insulation as well as protection from mechanical impact; gloves have insulating rubber liners, and work boots and hard hats are specially rated to provide protection from shock. If a system cannot be de-energized, insulated tools are used; even high-voltage transmission lines can be repaired while energized, when necessary.

Electrical workers, which includes electricians, accounted for 34% of total electrocutions of construction trades workers in the United States between 1992 and 2003.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2489 2025-03-14 00:07:35

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 50,097

Re: Miscellany

2389) Baker

Gist

A baker is a tradesperson who bakes and sometimes sells breads and other products made of flour by using an oven or other concentrated heat source. The place where a baker works is called a bakery.

A person whose duty is to bake a cake is commonly referred to as a baker or pastry chef. However, in some cases, a person who bakes cakes as a hobby or for personal enjoyment may simply be called a home baker or cake maker.

Summary

A baker is a tradesperson who bakes and sometimes sells breads and other products made of flour by using an oven or other concentrated heat source. The place where a baker works is called a bakery.

History:

Ancient history

Since grains have been a staple food for millennia, the activity of baking is a very old one. Control of yeast, however, is relatively recent. By the fifth and sixth centuries BCE, the ancient Greeks used enclosed ovens heated by wood fires; communities usually baked bread in a large communal oven. Greeks baked dozens and possibly hundreds of types of bread; Athenaeus described seventy-two varieties.

In ancient Rome several centuries later, the first mass production of breads occurred, and "the baking profession can be said to have started at that time." Ancient Roman bakers used honey and oil in their products, creating pastries rather than breads. In ancient Rome, bakers (Latin, pistor) were sometimes slaves, who were (like other slave-artisans) sometimes manumitted. Large households in Rome normally had their own bakers. During those times, most people baked their own bread, but bakeries (pistrina) were popular throughout the towns.

The Gauls are credited with discovering that the addition of beer froth to bread dough made well-leavened bread, marking the use of controlled yeast for bread dough.

Medieval Europe

In Medieval Europe, baking ovens were often separated from other buildings (and sometimes located outside city walls) to mitigate the risk of fire. Because bread was an important staple food, bakers' production factors (such as bolting yields, ingredients, and loaf sizes) were heavily regulated. For example, Henry III of England promulgated the Assize of Bread and Ale in 1267, subjecting all commercial bakers and brewers to various fees in order to practice their trade and imposing various regulations, such as inspection and verification of weights and measures, quality control, and price controls. Soon after the enactment of the Assize, "baking became a very stable industry, and was executed much more professionally than brewing, resulting in towns and villages having fewer bakers than brewers." Because ovens were expensive capital investments and required careful operation, specialized bakeries opened.

Bakers were often part of the guild system, which was well-established by the sixteenth century: master bakers instructed apprentices and were assisted by journeymen. In Amsterdam in 1694, for example, the cake-bakers, pie-bakers, and rusk-bakers separated from an earlier Bread Bakers Guild and formed their own guild, regulating the trade. A fraternity of bakers in London existed as early as 1155, according to records of payments to the Exchequer; the Worshipful Company of Bakers was formed by charters dated 1486, 1569, and 1685. The guild still exists today, with mostly ceremonial and charitable functions. Five bakers have served as lord mayor of London.

A group of bakers is called a "tabernacle".

Ming dynasty China

In Ming dynasty China, bakers were divided into different social statuses according to their customers. Bakers were among the thousands of servants who served in the Ming Palace, including recruited cooks, imperial eunuchs, and trained serving-women (Shangshiju). Bakers often joined the occupation through apprenticeship, or by being born into a family of bakers.

In addition to the secular aspect of baking, Ming bakers also were responsible for providing pastries for use in various rituals, festivals and ceremonies, such as zongzi. In "Shi Fu Meets a Friend at Tanque" buns were provided for the construction ceremony.

Within bakeries, a traditional patriarchal hierarchy prevailed. In a family-owned bakery, the eldest male figure (usually the father) occupied the highest position of the hierarchy. For example, in Feng Menglong's story, when Mr. Bo went out looking for the family's lost silver, his wife was ordered to take care of the bakery.

Ming fiction and art record examples of various bakers; for example, in Feng Menglong's story, the Bo couple owns a bakery selling cakes and snacks, while in Water Margin, the character Wu Dalang does not have a settled store and sells pancakes from a shoulder pole along the street. The Ming-era painter Qiu Ying's work Along the River During the Qingming Festival shows food stores alongside the street and peddlers who are selling food along the streets.

The Ming work Ming Dai Tong Su Ri Yong Lei Shu, which records techniques and items needed in Ming daily life, devotes a full chapter to culinary skills, including the preparation of pancakes and other types of cakes.

The work The Plum in the Golden Vase mentions baozi (steam bun).

Columbian Exchange

The Columbian Exchange, which began in 1492, had a profound influence on the baking occupation. Access to sugar greatly increased as a result of new cultivation in the Caribbean, and ingredients such as cocoa and chocolate became available in the Old World. In the eighteenth century, processors learned how to refine sugar from sugar beets, allowing Europeans to grow sugar locally. These developments led to an increase in the sophistication of baking and pastries, and the development of new products such as puff pastries and Danish dough.

18th century to present

Two important books on bread-baking were published in the 1770s: Paul-Jacques Malouin published L'art du meunier, du boulanger et du vermicellier (The Art of the Miller, the Bread-Baker, and the Pasta-Maker) in 1775, and Antoine-Augustin Parmentier published Le parfait boulanger (The Perfect Bread-Baker) in 1778.

A study of the English city of Manchester from 1824–85, during the Industrial Revolution, determined that "baker and shopkeeper" was the third-most common occupation, with 178 male bakers, 19 female bakers, and 8 bakers of unknown gender in the city at that time. This occupation was less common than cloth manufacturer and tavern/public house worker, but more common than cotton spinner, merchant, calico printer, or grocer.

In 1895, the New York State Assembly passed a reformist "bakeshop law" which included protections for bakery workers; the law "banned employees from sleeping in the bakeries; specified the drainage, plumbing and maintenance necessary to keep the bakeries sanitary (cats were specifically allowed to stay on the premises—presumably to deal with the rats); limited the daily and weekly maximum of hours worked; and established an inspectorate to make sure these conditions were met." The legislation was soon replicated in other states. Joseph Lochner, a bakery owner in Utica, New York, was subsequently convicted of violating the law for forcing his employees to work more than sixty hours a week. He appealed his case to the U.S. Supreme Court, which decided, in the highly influential case of Lochner v. New York (1905), over a dissent from Justice Oliver Wendell Holmes, that the labor law violated a constitutional right to "freedom of contract". This case marked the beginning of a "pro-employer, laissez-faire" era, later known as the Lochner era, which "would cast a long shadow over American law, society, and politics" until the late 1930s, when Lochner was repudiated. Frustrated with the rapid deterioration of working conditions, bakery workers in New York went on strike in August 1905.

Details

Baking, process of cooking by dry heat, especially in some kind of oven. It is probably the oldest cooking method. Bakery products, which include bread, rolls, cookies, pies, pastries, and muffins, are usually prepared from flour or meal derived from some form of grain. Bread, already a common staple in prehistoric times, provides many nutrients in the human diet.

History

The earliest processing of cereal grains probably involved parching or dry roasting of collected grain seeds. Flavour, texture, and digestibility were later improved by cooking whole or broken grains with water, forming gruel or porridge. It was a short step to the baking of a layer of viscous gruel on a hot stone, producing primitive flat bread. More sophisticated versions of flat bread include the Mexican tortilla, made of processed corn, and the chapati of India, usually made of wheat.

Baking techniques improved with the development of an enclosed baking utensil and then of ovens, making possible thicker baked cakes or loaves. The phenomenon of fermentation, with the resultant lightening of the loaf structure and development of appealing flavours, was probably first observed when doughs or gruels, held for several hours before baking, exhibited spoilage caused by yeasts. Some of the effects of the microbiologically induced changes were regarded as desirable, and a gradual acquisition of control over the process led to traditional methods for making leavened bread loaves. Early baked products were made of mixed seeds with a predominance of barley, but wheat flour, because of its superior response to fermentation, eventually became the preferred cereal among the various cultural groups sufficiently advanced in culinary techniques to make leavened bread.

Brewing and baking were closely connected in early civilizations. Fermentation of a thick gruel resulted in a dough suitable for baking; a thinner mash produced a kind of beer. Both techniques required knowledge of the “mysteries” of fermentation and a supply of grain. Increasing knowledge and experience taught the artisans in the baking and brewing trades that barley was best suited to brewing, while wheat was best for baking.

By 2600 bce the Egyptians, credited with the first intentional use of leavening, were making bread by methods similar in principle to those of today. They maintained stocks of sour dough, a crude culture of desirable fermentation organisms, and used portions of this material to inoculate fresh doughs. With doughs made by mixing flour, water, salt, and leaven, the Egyptian baking industry eventually developed more than 50 varieties of bread, varying the shape and using such flavouring materials as poppy seed, sesame, and camphor. Samples found in tombs are flatter and coarser than modern bread.

The Egyptians developed the first ovens. The earliest known examples are cylindrical vessels made of baked Nile clay, tapered at the top to give a cone shape and divided inside by a horizontal shelflike partition. The lower section is the firebox, the upper section is the baking chamber. The pieces of dough were placed in the baking chamber through a hole provided in the top.

In the first two or three centuries after the founding of Rome, baking remained a domestic skill with few changes in equipment or processing methods. According to Pliny the Elder, there were no bakers in Rome until the middle of the 2nd century bce. As well-to-do families increased, women wishing to avoid frequent and tedious bread making began to patronize professional bakers, usually freed slaves. Loaves molded by hand into a spheroidal shape, generally weighing about a pound, were baked in a beehive-shaped oven fired by wood. Panis artopticius was a variety cooked on a spit, panis testuatis in an earthen vessel.

Although Roman professional bakers introduced technological improvements, many were of minor importance, and some were essentially reintroductions of earlier developments. The first mechanical dough mixer, attributed to Marcus Vergilius (sometimes spelled Virgilius) Eurysaces, a freed slave of Greek origin, consisted of a large stone basin in which wooden paddles, powered by a horse or donkey walking in circles, kneaded the dough mixture of flour, leaven, and water.

Guilds formed by the miller-bakers of Rome became institutionalized. During the 2nd century ce, under the Flavians, they were organized into a “college” with work rules and regulations prescribed by government officials. The trade eventually became obligatory and hereditary, and the baker became a kind of civil servant with limited freedom of action.

During the early Middle Ages, baking technology advances of preceding centuries disappeared, and bakers reverted to mechanical devices used by the ancient Egyptians and to more backward practices. But in the later Middle Ages the institution of guilds was revived and expanded. Several years of apprenticeship were necessary before an applicant was admitted to the guild; often an intermediate status as journeyman intervened between apprenticeship and full membership (master). The rise of the bakers’ guilds reflected significant advances in technique. A 13th-century French writer named 20 varieties of bread varying in shape, flavourings, preparation method, and quality of the meal used. Guild regulations strictly governed size and quality. But outside the cities bread was usually baked in the home. In medieval England rye was the main ingredient of bread consumed by the poor; it was frequently diluted with meal made from other cereals or leguminous seeds. Not until about 1865 did the cost of white bread in England drop below brown bread.

At that time improvements in baking technology began to accelerate rapidly, owing to the higher level of technology generally. Ingredients of greater purity and improved functional qualities were developed, along with equipment reducing the need for individual skill and eliminating hand manipulation of bread doughs. Automation of mixing, transferring, shaping, fermentation, and baking processes began to replace batch processing with continuous operations. The enrichment of bread and other bakery foods with vitamins and minerals was a major accomplishment of the mid-20th-century baking industry.

Ingredients

Flour, water, and leavening agents are the ingredients primarily responsible for the characteristic appearance, texture, and flavour of most bakery products. Eggs, milk, salt, shortening, and sugar are effective in modifying these qualities, and various minor ingredients may also be used.

Flour

Wheat flour is unique among cereal flours in that, when mixed with water in the correct proportions, its protein component forms an elastic network capable of holding gas and developing a firm spongy structure when baked. The proteinaceous substances contributing these properties are known collectively as gluten. The suitability of a flour for a given purpose is determined by the type and amount of its gluten content. Those characteristics are controlled by the genetic constitution and growing conditions of the wheat from which the flour was milled, as well as the milling treatment applied.

Low-protein, soft-wheat flour is appropriate for cakes, pie crusts, cookies (sweet biscuits), and other products not requiring great expansion and elastic structure. High-protein, hard-wheat flour is adapted to bread, hard rolls, soda crackers, and Danish pastry, all requiring elastic dough and often expanded to low densities by the leavening action.

Leavening agents

Pie doughs and similar products are usually unleavened, but most bakery products are leavened, or aerated, by gas bubbles developed naturally or folded in. Leavening may result from yeast or bacterial fermentation, from chemical reactions, or from the distribution in the batter of atmospheric or injected gases.

Yeast

All commercial breads, except salt-rising types and some rye bread, are leavened with bakers’ yeast, composed of living cells of the yeast strain Saccharomyces cerevisiae. A typical yeast addition level might be 2 percent of the dough weight. Bakeries receive yeast in the form of compressed cakes containing about 70 percent water or as dry granules containing about 8 percent water. Dry yeast, more resistant to storage deterioration than compressed yeast, requires rehydration before it is added to the other ingredients. “Cream” yeast, a commercial variety of bakers’ yeast made into a fluid by the addition of extra water, is more convenient to dispense and mix than compressed yeast, but it also has a shorter storage life and requires additional equipment for handling.
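
As a worked illustration of the moisture figures just quoted (my own arithmetic, not an industry rule): compressed yeast at about 70 percent water is roughly 30 percent solids, and dry yeast at about 8 percent water is roughly 92 percent solids, so an equal-solids substitution looks like the sketch below. Real substitutions also account for differences in cell viability, which this ignores.

    COMPRESSED_SOLIDS = 0.30  # compressed yeast is about 70% water
    DRY_SOLIDS = 0.92         # dry yeast is about 8% water

    def dry_equivalent_g(compressed_g):
        # Grams of dry yeast carrying the same yeast solids as the
        # given weight of compressed yeast.
        return compressed_g * COMPRESSED_SOLIDS / DRY_SOLIDS

    # A 2% addition to a 10 kg dough is 200 g of compressed yeast:
    print(round(dry_equivalent_g(200.0), 1))  # about 65.2 g of dry yeast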

Bakers’ yeast performs its leavening function by fermenting such sugars as glucose, fructose, maltose, and sucrose. It cannot use lactose, the predominant sugar of milk, or certain other carbohydrates. The principal products of fermentation are carbon dioxide, the leavening agent, and ethanol, an important component of the aroma of freshly baked bread. Other yeast activity products also flavour the baked product and change the dough’s physical properties.

The rate at which gas is evolved by yeast during the various stages of dough preparation is important to the success of bread manufacture. Gas production is partially governed by the rate at which fermentable carbohydrates become available to the yeast. The sugars naturally present in the flour and the initial stock of added sugar are rapidly exhausted. A relatively quiescent period follows, during which the yeast cells become adapted to the use of maltose, a sugar constantly being produced in the dough by the action of diastatic enzymes on starch. The rate of yeast activity is also governed by temperature and osmotic pressure, the latter primarily a function of the water content and salt concentration.

Baking soda

Layer cakes, cookies (sweet biscuits), biscuits, and many other bakery products are leavened by carbon dioxide from added sodium bicarbonate (baking soda). Added without offsetting amounts of an acidic substance, sodium bicarbonate tends to make dough alkaline, causing flavour deterioration and discoloration and slowing carbon dioxide release. Addition of an acid-reacting substance promotes vigorous gas evolution and maintains dough acidity within a favourable range.

Carbon dioxide produced from sodium bicarbonate is initially in dissolved or combined form. The rate of gas release affects the size of the bubbles produced in the dough, consequently influencing the grain, volume, and texture of the finished product. Much research has been devoted to the development of leavening acids capable of maintaining the rate of gas release within the desired range. Acids such as acetic, from vinegar, or lactic, from sour milk, usually act too quickly; satisfactory compounds include cream of tartar (potassium acid tartrate), sodium aluminum sulfate (alum), sodium acid pyrophosphate, and various forms of calcium phosphate.

Baking powder

Instead of adding soda and leavening acids separately, most commercial bakeries and domestic bakers use baking powder, a mixture of soda and acids in appropriate amounts and with such added diluents as starch, simplifying measuring and improving stability. The end products of baking-powder reaction are carbon dioxide and some blandly flavoured harmless salts. All baking powders meeting basic standards have virtually identical amounts of available carbon dioxide, differing only in reaction time. Most commercial baking powders are of the double-acting type, giving off a small amount of available carbon dioxide during the mixing and makeup stages, then remaining relatively inert until baking raises the batter temperature. This type of action eliminates excessive loss of leavening gas, which may occur in batter left in an unbaked condition for long periods.
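
The claim that standard baking powders carry virtually identical amounts of available carbon dioxide can be made concrete with basic stoichiometry (my illustration, not from the text): with an offsetting acid, each mole of sodium bicarbonate (about 84 g/mol) releases one mole of carbon dioxide (about 44 g/mol).

    M_NAHCO3 = 84.0  # g/mol, sodium bicarbonate
    M_CO2 = 44.0     # g/mol, carbon dioxide

    def co2_released_g(soda_g):
        # Grams of CO2 from full acid neutralization of the given soda weight.
        return soda_g * M_CO2 / M_NAHCO3

    print(round(co2_released_g(10.0), 2))  # about 5.24 g of CO2 from 10 g of soda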

Entrapped air and vapour

Angel food cakes, sponge cakes, and similar products are customarily prepared without either yeast or chemical leaveners. Instead, they are leavened by air entrapped in the product through vigorous beating. This method requires a readily foaming ingredient capable of retaining the air bubbles, such as egg whites. To produce a cake of fine and uniform internal structure, the pockets of air folded in during beating are rapidly subdivided into small bubbles with such mixing utensils as wire whips, or whisks.

The vaporization of volatile fluids (e.g., ethanol) under the influence of oven heat can have a leavening effect. Water-vapour pressure, too low to be significant at normal temperatures, exerts substantial pressure on the interior walls of bubbles already formed by other means as the interior of the loaf or cake approaches the boiling point. The expansion of such puff pastry as used for napoleons (rich desserts of puff pastry layers and whipped cream or custard) and vol-au-vents (puff pastry shells filled with meat, fowl, fish, or other mixtures) is entirely due to water-vapour pressure.

Shortening

Fats and oils are essential ingredients in nearly all bakery products. Shortenings have a tenderizing effect in the finished product and often aid in the manipulation of doughs. In addition to modifying the mouth feel or texture, they often add flavour of their own and tend to round off harsh notes in some of the spice flavours.

The common fats used in bakery products are lard, beef fats, and hydrogenated vegetable oils. Butter is used in some premium and specialty products as a texturizer and to add flavour, but its high cost precludes extensive use. Cottonseed oil and soybean oil are the most common processed vegetable oils used. Corn, peanut, and coconut oils are used to a limited extent; fats occurring in other ingredients, such as egg yolks, chocolate, and nut butters, can have a shortening effect if the ingredients are present in sufficient quantity.

Breads and rolls often contain only 1 or 2 percent shortening; cakes will have 10 to 20 percent; Danish pastries prepared according to the authentic formula may have about 30 percent; pie crusts may contain even more. High usage levels require those shortenings that melt above room temperature; butter and liquid shortenings, with their lower melting point, tend to leak from the product.
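
A minimal sketch turning the quoted shortening levels into weights, assuming (as with the yeast figure earlier) that the percentages are taken against total dough weight; the table below simply transcribes the paragraph:

    TYPICAL_SHORTENING_PCT = {
        "bread/rolls": (1, 2),     # percent of dough weight
        "cake": (10, 20),
        "danish pastry": (30, 30), # "about 30 percent"
    }

    def shortening_range_g(product, dough_g):
        # Low and high shortening weights for a given dough weight.
        lo, hi = TYPICAL_SHORTENING_PCT[product]
        return dough_g * lo / 100.0, dough_g * hi / 100.0

    print(shortening_range_g("cake", 1000.0))  # (100.0, 200.0) grams for 1 kg of batter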

Commercial shortenings may include antioxidants, to retard rancidity, and emulsifiers, to improve the shortening effect. Colours and flavours simulating butter may also be added. Margarines, emulsions of fat, water, milk solids, and salt, are popular bakery ingredients.

Fats of any kind have a destructive effect on meringues and other protein-based foams; small traces of oil left on the mixing utensils can deflate an angel food cake to unacceptably high density.

Liquids

Water is the liquid most commonly added to doughs. Milk is usually added to commercial preparations in dried form, and any moisture added in the form of eggs and butter is usually minimal. Water is not merely a diluent or inert constituent; it affects every aspect of the finished product, and careful adjustment of the amount of liquid is essential to make the dough or batter adaptable to the processing method. If dough is too wet, it will stick to equipment and have poor response to shaping and transfer operations; if too dry, it will not shape or leaven properly.

Water hydrates gluten, permitting it to aggregate in the form of a spongy cellular network, the structural basis of most bakery products. It provides a medium in which yeast can metabolize sugars to form carbon dioxide and flavouring components and allows diffusion of nutrients and metabolites throughout the mass. Water is an indispensable component of the baking-powder reaction, and it allows starch to gelatinize during baking and prevents interior browning of bakery products.

Water impurities affect dough properties. Water preferred for baking is usually of medium hardness (50 to 100 parts per million) with a neutral pH (degree of acidity), or slightly acid (low pH). Water that is too soft can result in sticky doughs, while very hard water may retard dough expansion by toughening the gluten (calcium ions, particularly, promote cross-linking of gluten protein molecules). Water sufficiently alkaline to raise the dough pH may have a deleterious effect on fermentation and on flour enzymes. Although strongly flavoured contaminants may affect the acceptability of the finished product, chlorides and fluorides at concentrations usually found in water supplies have little influence on bread doughs.
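
As a small illustration only (the hardness range is read straight from the paragraph, while the exact pH window is my assumption for "neutral or slightly acid"), a dough-water check might look like:

    def water_ok(hardness_ppm, ph):
        # Medium hardness (50-100 ppm) and neutral-to-slightly-acid pH.
        return 50.0 <= hardness_ppm <= 100.0 and 6.0 <= ph <= 7.0

    print(water_ok(75.0, 6.8))   # True
    print(water_ok(200.0, 8.2))  # False: too hard and too alkaline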

Eggs

The differences between yolks and whites must be recognized in considering the effect of eggs on bakery products. Yolks contain about 50 percent solids, of which 60 percent or more is strongly emulsified fat, and are used in bakery foods for their effect on colour, flavour, and texture. Egg whites, containing only about 12 percent solids, primarily protein, and no fat, are important primarily for their texturizing function and give foams of low density and good stability when beaten. When present in substantial amounts, they tend to promote small, uniform cell size and relatively large volume. Meringues and angel food cakes are dependent on egg white foams for their basic structure. Although fats and oils greatly diminish its foaming power, the white still contributes to the structure of layer cakes and similar confections containing substantial amounts of both shortening and egg products.

Egg products are available to bakers in frozen or dried form. Few commercial bakers break fresh eggs for ingredients, because of labour costs, unstable market conditions, and sanitary considerations. Many bakers use dried egg products because of their greater convenience and superior storage stability over frozen eggs. Processed and stored correctly, dried egg products are the functional equivalent of the fresh material, although flavour of the baked goods may be affected adversely at very high usage levels.

Sweeteners

Normal wheat flour contains about 1 percent sugars. Most are fermentable compounds, such as sucrose, maltose, glucose, and fructose. Additional maltose is formed during fermentation by the action of amylolytic enzymes (from malt and flour) on the starch. Glucose and sucrose are the sugars most frequently added to doughs and batters. The action of yeast rapidly converts the sucrose to fructose and glucose (i.e., invert sugar). Invert sugar can also be added.

Sweetening power is an important property of added sugars, but sugars also provide fermentables for yeast activity. Crust colour development is related to the amount of reducing sugars present, and a dough in which the sugars have been thoroughly depleted by yeast will produce a pale crust.

Doughs with high concentrations of dissolved substances retard fermentation because of the effect on yeast of the high osmotic pressure (low water activity) of the aqueous phase. Sugars constitute the bulk of dissolved materials in most doughs. For this reason, sweet yeast-leavened goods develop gas and expand more slowly than bread doughs.

Yeast-leavened products:

Breads and rolls

Most of the bakery foods consumed throughout the world are breads and rolls made from yeast-leavened doughs. The yeast-fermentation process leads to the development of desirable flavour and texture, and such products are nutritionally superior to products of the equivalent chemically leavened doughs, since yeast cells themselves add a wide assortment of vitamins and good quality protein.

White bread

Satisfactory white bread can be made from flour, water, salt, and yeast. (A “sourdough” addition may be substituted for commercial yeast.) Yeast-raised breads based on this simple mixture include Italian-style bread and French or Vienna breads. Such breads have a hard crust, are relatively light in colour, with a coarse and tough crumb, and flavour that is excellent in the fresh bread but deteriorates in a few hours. In the United States, commercially produced breads of this type are often modified by the addition of dough improvers, yeast foods, mold inhibitors, vitamins, minerals, and small quantities of enriching materials such as milk solids or shortening. Formulas may vary greatly from one bakery to another and between different sections of the country. The standard low-density, soft-crust bread and rolls constituting the major proportion of breads and rolls sold in the United States contain greater quantities of enriching ingredients than the lean breads described above.

Whole wheat bread

Whole wheat bread, using a meal made substantially from the entire wheat kernel instead of flour, is a dense, rather tough, dark product. Breads sold as wheat or part-whole-wheat products contain a mixture of whole grain meal with sufficient white flour to produce satisfactory dough expansion.

Rye bread

Bread made from crushed or ground whole rye kernels, without any wheat flour, such as pumpernickel, is dark, tough, and coarse-textured. Rye flour with the bran removed, when mixed with wheat flour, allows production of a bread with better texture and colour. In darker bread it is customary to add caramel colour to the dough. Most rye bread is flavoured with caraway seeds.

Potato bread

Potato bread, another variety that can be leavened with a primary ferment, was formerly made with a sourdough utilizing the action of wild yeasts on a potato mash and producing the typical potato-bread flavour. It is now commonly prepared from a white bread formula to which potato flour is added.

Sweet breads:

Ingredients

Sweet goods made from mixtures similar to bread doughs include “raised” doughnuts, Danish pastries, and coffee cakes. Richer in shortening, milk, and sugar than bread doughs, sweet doughs often contain whole eggs, egg yolks, egg whites, or corresponding dried products. The enriching ingredients alter the taste, produce flakier texture, and improve nutritional quality. Spices such as nutmeg, mace, cinnamon, coriander, and ginger are frequently used for sweet-dough products; other common adjuncts include vanilla, nuts and nut pastes, peels or oils of lemon or orange, raisins, candied fruit pieces, jams, and jellies.

Danish dough

Although various portion-size sweet goods are often called “Danish pastry,” the name originally referred only to products made by a special roll-in procedure, in which yeast-leavened dough sheets are interleaved with layers of butter and the layers are reduced in thickness, then folded and resheeted to obtain many thin layers of alternating shortening and dough. Danish doughs ordinarily receive little fermentation. Before the fat is rolled in, there is a period of 20 to 30 minutes in the refrigerator, allowing gas and flavour to develop. Proof time, fermentation of the piece in its final shape, is usually only 20 to 30 minutes, at lower temperatures. When properly made, these doughs yield flaky baked products, rich in shortening, with glossy crusts.

Dough preparation

The process most commonly employed in preparing dough for white bread and many specialty breads is known as the sponge-and-dough method, in which the ingredients are mixed in two distinct stages. Another conventional dough-preparation procedure, used commonly in preparing sweet doughs but rarely regular bread doughs, is the straight-dough method, in which all the ingredients are mixed in one step before fermentation. In a less conventional method, known as the “no-time” method, the fermentation step is eliminated entirely. These processes are described below.

The sponge-and-dough method

The sponge-and-dough mixing method consists of two distinct stages. In the first stage, the mixture, called the sponge, usually contains one-half to three-fourths of the flour, all of the yeast, yeast foods, and malt, and enough water to make a stiff dough. Shortening may be added at this stage, although it is usually added later, and one-half to three-fourths of the salt may be added to control fermentation. The sponge is customarily mixed in a large, horizontal dough mixer, processing about one ton per batch, and usually constructed with heat-exchange jackets, allowing temperature control. The objectives of mixing are a nearly homogeneous blend of the ingredients and “developing” of the dough by formation of the gluten into elongated and interlaced fibres that will form the basic structure of the loaf. Because intense shearing actions must be avoided, the usual dough mixer has several horizontal bars, oriented parallel to the body of the mixer, rotating slowly at 35 to 75 revolutions per minute, stretching and kneading the dough by their action. A typical mixing cycle would be about 12 minutes.

The mixed sponge is dumped into a trough, a shallow rectangular metal tank on wheels, and placed in an area of controlled temperature and humidity (e.g., 27 °C [80 °F] and 75 percent relative humidity), where it is fermented until it begins to decline in volume. The time required for this process, called the drop or break, depends on such variables as temperature, type of flour, amount of yeast, absorption, and amount of malt, which are frequently adjusted to produce a drop in about three to five hours.

At the second, or dough, stage, the sponge is returned to the mixer, and the remaining ingredients are added. The dough is developed to an optimum consistency, then either returned to the fermentation room or allowed “floor time” for further maturation.

Advantages of the sponge-and-dough method include: (1) a saving in the amount of yeast (about 20 percent less is required than for a straight dough), (2) greater volume and more-desirable texture and grain, and (3) greater flexibility allowed in operations because, in contrast to straight doughs (which must be taken up when ready), sponges can be held for later processing without marked deterioration of the final product.

The sponge method, however, involves extra handling of the dough, additional weighing and measuring, and a second mixing and thus has the disadvantage of increasing labour, equipment, and power costs.
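
To make the two-stage split concrete, here is a schematic sketch (the quantities, the 70 percent flour fraction, and the stiff-sponge hydration are illustrative assumptions, not production values): a chosen fraction of the flour, all of the yeast, and enough water for a stiff dough go into the sponge; the remainder is held back for the dough stage.

    def split_sponge_and_dough(flour_g, water_g, yeast_g,
                               sponge_flour_frac=0.7, sponge_hydration=0.60):
        # Stage 1 (sponge): part of the flour, all of the yeast, stiff hydration.
        sponge_flour = flour_g * sponge_flour_frac
        sponge_water = sponge_flour * sponge_hydration
        sponge = {"flour": sponge_flour, "water": sponge_water, "yeast": yeast_g}
        # Stage 2 (dough): the remaining flour and water, added after the drop.
        remainder = {"flour": flour_g - sponge_flour,
                     "water": water_g - sponge_water}
        return sponge, remainder

    sponge, remainder = split_sponge_and_dough(10000.0, 6200.0, 200.0)
    print(sponge)     # {'flour': 7000.0, 'water': 4200.0, 'yeast': 200.0}
    print(remainder)  # {'flour': 3000.0, 'water': 2000.0}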

The straight-dough method

Two of the many possible variations in the straight-dough process are the remixed straight-dough process, in which a small portion of the water is withheld and added at a second mix, and the no-punch method, which involves extremely vigorous mixing. The straight-dough method is rarely used for white breads because it is not sufficiently adaptable to allow compensation for fluctuations in ingredient properties.

“No-time” methods

One set of procedures intended to eliminate the traditional bulk fermentation step is the group of “no-time” methods. Popular in the United Kingdom and Australia, these processes generally require an extremely energy-intensive mixing step, sometimes performed in a partially vacuumized chamber. Rather high additions of chemical oxidants, reducing agents, and other dough modifiers are almost always required in order to produce the desired physical properties. “No-time” is actually a misnomer, since there are always small amounts of floor time (periods when the dough is awaiting further processing) during which maturing actions lead to improvements in the dough’s physical properties. Even so, the flavour of the bread cannot be expected to match that of a traditionally processed loaf.

Makeup

After the mass of dough has completed fermentation (and has been remixed if the sponge-and-dough process is employed), it is processed by a series of devices loosely classified as makeup equipment. In the manufacture of pan bread, makeup equipment includes the divider, the rounder, the intermediate proofer, the molder, and the panner.

Dividing

The filled trough containing remixed dough is moved to the divider area or to the floor above the divider. The dough is dropped into the divider hopper, and the divider cuts it into loaf-size pieces. Two methods are employed. In the volumetric method, the dough is forced into pockets of a known volume. The pocket contents are cut off from the main dough mass and then ejected onto a conveyor leading to the rounder. Because the dough’s density is kept roughly constant, pieces of equal volume are also of roughly equal weight. In the weight-based method, a cylindrical rope of dough is continuously extruded through an orifice at a fixed rate and is cut off by a knife-edged rotor at fixed intervals. Since the dough is of consistent density, the cut pieces are of uniform weight. Like the pocket-cut pieces, the cylindrical pieces are conveyed to the rounder.
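
As a purely illustrative sketch of the arithmetic behind the two dividing methods (Python; the density, volume, and rate figures below are invented for illustration, not taken from the text):

    # Hypothetical figures for illustration; real doughs and dividers vary.
    DOUGH_DENSITY_KG_PER_L = 1.10  # assumed roughly constant after fermentation

    def volumetric_piece_weight(pocket_volume_l):
        """Volumetric method: piece weight follows from pocket volume."""
        return DOUGH_DENSITY_KG_PER_L * pocket_volume_l

    def extruded_piece_weight(extrusion_rate_kg_per_s, cut_interval_s):
        """Weight-based method: a rope extruded at a fixed rate, cut at fixed intervals."""
        return extrusion_rate_kg_per_s * cut_interval_s

    print(volumetric_piece_weight(0.5))       # ~0.55 kg per pocket
    print(extruded_piece_weight(0.25, 2.2))   # ~0.55 kg per cut

Either way, the piece weight is the product of two controlled quantities, which is why keeping the dough's density consistent matters more than the cutting mechanism itself.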

Rounding

Dough pieces leaving the divider are irregular in shape, with sticky cut surfaces from which the gas can readily diffuse. Their gluten structure is somewhat disoriented and unsuitable for molding. The rounder closes these cut surfaces, giving each dough piece a smooth and dry exterior; forms a relatively thick and continuous skin around the dough piece, reorienting the gluten structure; and shapes the dough into a ball for easier handling in subsequent steps. It performs these functions by rolling the well-floured dough piece around the surface of a drum or cone, moving it upward or downward along this surface by means of a spiral track. As a result of this action, the surface is dried both by the even distribution of dusting flour and by dehydration resulting from exposure to air; the gas cells near the surface of the ball are collapsed, forming a thick layer inhibiting the diffusion of gases from the dough; and the dough piece assumes an approximately spherical shape.

Intermediate proofing

Dough leaving the rounder is almost completely degassed. It lacks extensibility, tears easily, has rubbery consistency, and has poor molding properties. To restore a flexible, pliable structure, the dough piece must be allowed to rest while fermentation proceeds. This is accomplished by letting the dough ball travel through an enclosed cabinet, the intermediate proofer, for several minutes. Physical changes, other than gas accumulation, occurring during this period are not yet understood, but there are apparently alterations in the molecular structure of the dough rendering it more responsive to subsequent operations. Upon leaving the intermediate proofer, the dough is more pliable and elastic, its volume is increased by gas accumulation, and its skin is firmer and drier.

Most intermediate proofers are the overhead type, in which the principal part of the cabinet is raised above the floor, allowing space for other makeup machinery beneath it. Interior humidity and temperature depend on humidity accumulating from the loaves and on ambient temperatures.

Molding

The molder receives pieces of dough from the intermediate proofer and shapes them into cylinders ready to be placed in the pans. There are several types of molders, but all have four functions in common: sheeting, curling, rolling, and sealing. The dough as it comes from the intermediate proofer is a flattened spheroid; the first function of the molder is to flatten it into a thick sheet, usually by means of two or more consecutive pairs of rollers, each succeeding pair set more closely together than the preceding pair. The sheeted dough is curled into a loose cylinder by a special set of rolls or by a pair of canvas belts. The spiral of dough in the cylinder is not adherent upon leaving the curling section, and the next operation of the molder is to seal the dough piece, allowing it to expand without separating into layers. The conventional molder rolls the dough cylinder between a large drum and a smooth-surfaced semicircular compression board. Clearance between the drum and board is gradually reduced, and the dough, constantly in contact with both surfaces, becomes transversely compressed.

Panning

An automatic panning device is an integral part of most modern molders. As empty pans, carried on a conveyor, pass the end of the machine, the loaves are transferred from the molder and positioned in the pans by a compressed air-operated device. Before the filled pans are taken to the oven, the dough undergoes another fermentation, or pan-proofing, for about 20 minutes at temperatures of 40 to 50 °C (100 to 120 °F).

Continuous bread making

Many steps in conventional dough preparation and makeup have been fully automated, but none of the processes is truly continuous. In continuous systems, the dough is handled without interruption from the time the ingredients are mixed until it is deposited in the pan. The initial fermentation process is still essentially a batch procedure, but in the continuous bread-making line the traditional sponge is replaced by a liquid pre-ferment, called the broth or brew. The brew consists of a mixture of water, yeast, sugar, and portions of the flour and other ingredients, fermented for a few hours before being mixed into the dough.

After the brew has finished fermenting, it is fed along with the dry ingredients into a mixing device, which mixes all ingredients into a homogeneous mass. The batterlike material passes through a dough pump regulating the flow and delivering the mixture to a developing apparatus, where kneading work is applied. The developer is the key equipment in the continuous line. Processing about 50 kilograms (110 pounds) each 90 seconds, it changes the batter from a fluid mass having no organized structure, little extensibility, and inadequate gas retention to a smooth, elastic, film-forming dough. The dough then moves out of the developer into a metering device that constantly extrudes the dough and intermittently severs a loaf-size piece, which falls into a pan passing beneath.

Although ingredients are generally the same as those used in batch processes, closer control and more rigid specifications are necessary in continuous processing in order to assure the satisfactory operation of each unit. Changes in conditions cannot readily be made to compensate for changes occurring in ingredient properties. Oxidizers, such as bromate and iodate, are added routinely to compensate for the smaller amount of oxygen brought into the dough during mixing.

The use of fermented brews has been widely accepted in plants practicing traditional dough preparation and makeup. The handling of a fermentation mixture through pumps, pipes, valves, and tanks greatly increases efficiency and control in both batch-type and continuous systems.

Baking and depanning:

Ovens

The output of all bread-making systems, batch or continuous, is usually keyed to the oven, probably the most critical equipment in the bakery. Most modern commercial bakeries use either the tunnel oven, consisting of a metal belt passing through a connected series of baking chambers open only at the ends, or the tray oven, with a rigid baking platform carried on chain belts. Other types include the peel oven, having a fixed hearth of stone or brick on which the loaves are placed with a wooden paddle or peel; the reel oven, with shelves rotating on a central axle in Ferris wheel fashion; the rotating hearth oven; and the draw plate oven.

Advances in high-capacity baking equipment include a chamber oven with a conveyor that carries pan assemblies (called straps) along a roughly spiral path through an insulated baking chamber. The straps are automatically added to the conveyor before it enters the oven and then automatically removed and the bread dumped at the conveyor’s exit point. Although the conveyor is of a complex design, the oven as a whole is considerably simpler than most other high-capacity baking equipment and can be operated with very little labour. As a further increase in efficiency, the conveyor can also be designed to carry filled pans in a continuous path through a pan-proofing enclosure and then through the oven.

In small to medium-size retail bakeries, baking may be done in a rack oven. This consists of a chamber, perhaps two to three metres high, that is heated by electric elements or gas burners. The rack consists of a steel framework having casters at the bottom and supporting a vertical array of shelves. Bread pans containing unbaked dough pieces are placed on the shelves before the rack is pushed mechanically or manually into the oven. While baking is taking place, the rack may remain stationary or be slowly rotated.

Most ovens are heated by gas burned within the chamber, although oil or electricity may be used. Burners are sometimes isolated from the main chamber, heat transfer then occurring through induced currents of air. Baking reactions in the oven are both physical and chemical in nature. Physical reactions include film formation, gas expansion, reduction of gas solubility, and alcohol evaporation. Chemical reactions include yeast fermentation, carbon dioxide formation, starch gelatinization, gluten coagulation, sugar caramelization, and browning.

Depanners

Automatic depanners, removing the loaves from the pans, either invert the pans, jarring them to dislodge the bread, or pick the loaves out of the pans by means of suction cups attached to belts.

Chemically leavened products

Many bakery products depend on the evolution of gas from added chemical reactants as their leavening source. Items produced by this system include layer cakes, cookies, muffins, biscuits, corn bread, and some doughnuts.

The gluten proteins of the flour serve as the basic structural element in chemically leavened foods, just as they do in bread. The relatively smaller amounts of flour, the weaker (less-extensible) protein in the soft-wheat flours customarily employed, and the lower protein content of the flour, however, result in a softer, crumblier texture. In most chemically leavened foods, the protein content of the flour, inadequate in quantity and quality to support the amount of expansion required in bread, produces a product of higher density.

Prepared mixes and doughs

Prepared dry mixes, available for home use and for small and medium-size commercial bakeries, vary in complexity from self-rising flour, consisting only of salt, leavening ingredients, and flour, to elaborate cake mixes. Mixes offer the consumer ingredients measured with greater accuracy than possible with kitchen utensils and special ingredients designed for functional compatibility.

Prepared doughs for such products as biscuits and other quick breads, packaged in cans of fibre and foil laminates, are available in refrigerated form. These products carry the mix concept two steps further; the dough or batter is premixed and shaped. Unlike ordinary canned products, refrigerated doughs are not sterile but contain microbes from normal ingredient contamination. Spoilage is retarded by low storage temperature, low oxygen tension, and the high osmotic pressure of the aqueous phase.

Many boutique cookie bakeries and muffin shops that operate in shopping malls and similar locations generally use frozen batters shipped from a central plant. These batters are thawed a day or so before use, and a measured amount is scooped from the container and placed on a baking pan immediately before insertion into the oven. In this way, freshly baked cookies or muffins can be prepared in many varieties with a small amount of unskilled labour and a minimum of specialized equipment. In some cases, a central commissary supplies fully baked but frozen products, which are simply thawed (and sometimes iced and decorated) before sale.

Dough and batter formulas:

Hot breads, such as biscuits, muffins, pancakes, and scones, constitute a large and important class of chemically leavened bakery foods. They consist of flour, baking powder, salt, and liquid, with varying amounts of eggs, milk, sugar, and shortening. Other variations include the addition of fruits such as raisins, condiments such as peppers, and adjuncts such as cheese. In corn breads a considerable proportion of the flour is replaced by cornmeal. Mixing and forming methods, and the baking conditions applied, also affect product appearance, texture, and flavour. For example, a batter suitable for making corn bread might also be used to make muffins or pancakes, and each kind of finished product would vary not only in appearance but also in flavour and texture. Recipes for hot breads usually contain not more than about 15 percent shortening and 5 percent sugar. Eggs, when used, are customarily whole eggs. Milk is often used both for flavour and for its texturizing and crust-coloration properties.

Cakes

There are traditional rules for assuring “formula balance,” or the correct proportioning of ingredients, in layer cakes. For every 10 parts of flour, yellow layer cakes should contain 10 to 16 parts sugar by weight, and white layer cakes should contain 11 to 16 parts sugar. Shortening should range from 3 to 7 parts for each 10 parts of flour. The weight of liquid whole eggs should equal or exceed that of the shortening in the mixture. Total water, including the moisture in eggs and milk, should exceed the amount of sugar by 2½ to 3½ parts. Baking-powder weight should equal 3 to 6 percent of flour weight; salt should equal 3 to 4 percent of flour weight. If the amount of sugar in a formula is increased, the egg content should be increased by an equal amount, and more shortening should be added when the percentage of eggs is increased. Additional water is rarely added when the formula contains dry milk, but if the formula water is not sufficient to equal the reconstitution water for the milk, about 1 percent of water is added for each additional percent of milk solids.
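
As a sketch of how these balance rules could be checked mechanically (Python; the limits are the ones stated above, while the function name and the sample formula are invented for illustration):

    # Checks a yellow-layer-cake formula against the traditional balance rules.
    # All quantities are parts by weight; limits scale per 10 parts of flour.

    def check_yellow_layer_formula(flour, sugar, shortening, eggs,
                                   total_water, baking_powder, salt):
        per10 = flour / 10.0
        problems = []
        if not 10 * per10 <= sugar <= 16 * per10:
            problems.append("sugar outside 10-16 parts per 10 of flour")
        if not 3 * per10 <= shortening <= 7 * per10:
            problems.append("shortening outside 3-7 parts per 10 of flour")
        if eggs < shortening:
            problems.append("liquid whole eggs should at least equal shortening")
        if not 2.5 * per10 <= total_water - sugar <= 3.5 * per10:
            problems.append("total water should exceed sugar by 2.5-3.5 parts")
        if not 0.03 * flour <= baking_powder <= 0.06 * flour:
            problems.append("baking powder outside 3-6% of flour weight")
        if not 0.03 * flour <= salt <= 0.04 * flour:
            problems.append("salt outside 3-4% of flour weight")
        return problems or ["formula balanced"]

    print(check_yellow_layer_formula(flour=10, sugar=12, shortening=5, eggs=5,
                                     total_water=15, baking_powder=0.4, salt=0.35))
    # -> ['formula balanced']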

Common cake varieties include white cake, similar in formula to yellow cake, except that the white cake uses egg whites instead of whole eggs; devil’s food cake, differing from chocolate cake chiefly in that the devil’s food batter is adjusted to an alkaline level with sodium bicarbonate; chiffon cakes, deriving their unique texture from the effect of liquid shortening on the foam structure; and gingerbread, similar to yellow cake but containing large amounts of molasses and spices.

Cookies

Recipes for cookies (called biscuits or sweet biscuits in some countries) are probably more variable than those for any other type of bakery product. Some layer-cake batters can be used for soft drop cookies, but most cookie formulas contain considerably less water than cake recipes, and cookies are baked to a lower moisture content than any normal cake. With the exception of soft types, the moisture content of cookies will be below 5 percent after baking, resulting in crisp texture and good storage stability.

Cookies are generally high in shortening and sugar. Milk and eggs are not common ingredients in commercial cookies but may be used in home recipes. Sugar granule size has a pronounced effect on cookie texture, influencing spread and expansion during baking, an effect partly caused by competition for the limited water content between the slowly dissolving sugar and the gluten of the flour.

Equipment:

Mixing

The horizontal dough mixers used for yeast-leavened products may be used for mixing chemically leavened doughs and batters. Mixers may be the batch type, similar in configuration to the household mixer, with large steel bowls, open at the top, containing the batter while it is mixed or whipped by beater paddles of various conformations. In continuous mixers the batter is pumped through an enclosed chamber while a toothed disk rapidly rotates and mixes the ingredients. The chambers may be pressurized to force gas into the batter and surrounded with a flowing heat-transfer medium to adjust the temperature.

Sheeting and cutting

Chemically leavened doughs can be formed by methods similar to those used for yeast-leavened doughs of similar consistency. In the usual sequence, the dough passes between sets of rollers, forming sheets of uniform thickness; the desired outline is cut in the sheet by stamping pressure or embossed rollers; and the scrap dough is removed for reprocessing. Many cookies and crackers are made in this way, and designs may be impressed in the dough pieces by docking pins (used primarily to puncture the sheet, preventing formation of excessively large gas bubbles) or by cutting edges partially penetrating the dough pieces.

Die forming and extruding

In addition to the sheeting and cutting methods, cookies may be shaped by die forming and extrusion. In die forming a dough casing may be applied around a centre portion of jam or other material, forming products such as fig bars; or portions of dough may be deposited, forming such drop-type cookies as vanilla wafers, chocolate chip, and oatmeal cookies. Extrusion is accomplished by means of a die plate having orifices that may be circular, rectangular, or complex in outline. The mass of dough, contained in a hopper, is pushed through these openings, forming long strands of dough. Individual cookies are formed by separating pieces from the dough strand with a wire passing across the outer surface of the die or by pulling apart the hopper and oven belt (to which the dough adheres).

Rotary molding

Cookies produced on rotary molders include sandwich-base cakes and pieces made with embossed designs. A steel cylinder, the surface covered with shallow engraved cavities, rotates past the opening in a hopper filled with cookie dough. The pockets are filled with the dough, which is sheared off from the main mass by a blade, and, as the cylinder continues its revolution, the dough pieces are ejected onto a conveyor belt leading to the band oven.

Baking

Most commercial ovens for chemically leavened products are the band types, although reel ovens are still used, especially in smaller shops or bakeries where short runs are frequent.

Air- and steam-leavened products:

Air leavening

Air-leavened bakery products, avoiding the flavours arising from chemical- and yeast-leavening systems, are particularly suitable for delicately flavoured cakes. Since the batters can be kept on the acidic side of neutrality, the negative influence of chemical leaveners on fruit flavours and vanilla is avoided.

Foams and sponges

The albumen of egg white, a protein solution, foams readily when whipped. The highly extended structure has little strength and must be supported during baking by some other protein substance, usually the gluten of flour. Because the small amount of lipids in flour tends to collapse the albumen foam, flour is gently folded into egg-white foams, minimizing contact of fatty substances with the protein. Gluten sponges are denser than the lightest egg-white foams but are less subject to fat collapse.

The foam of egg yolks and whole eggs, as in pound cakes, is an air-in-oil emulsion. Proteins and starch, scattered throughout the emulsion in a dispersed condition, gradually coalesce as the batter stands or is heated. Fats and oils, in addition to yolk lipids, can be added to such systems without causing complete collapse, but these batters never achieve the low density possible with protein foams, and the resulting products usually have a tender, crumbly texture, unlike the more elastic structure of albumen-based products.

Wafers and biscuits

Rye wafers made of whipped batters are modern versions of an ancient Scandinavian food. High-moisture dough or batter, containing a substantial amount of rye flour and some wheat flour, is whipped, extruded onto an oven belt, scored and docked, then baked slowly until almost dry. Alternatively, the strips of dough may be cut after they are baked.

Beaten biscuits, an old specialty of the American South, are also made from whipped batter. Air is beaten into a stiff folded dough with many strokes of a rolling pin or similar utensil. Round pieces cut from the dough are pricked with a fork to prevent development of large bubbles, then baked slowly. The baked biscuit is similar to a soft cracker.

Steam leavening

All leavened products rely to some extent on water-vapour pressure to expand the vesicles or gas bubbles during the latter stages of baking, but some items also utilize the leavening action produced by the rapid buildup of steam as the interior of the product reaches the boiling point. These foods include puff pastries, used for patty shells and napoleons, and choux pastry, often used for cream puff and éclair cases.

Puff pastry

Puff pastry, often used in French pastries, is formed from layered fat and dough. The proportion of fat is usually high, rarely less than 30 percent of the finished raw piece. The dough should be extensible but not particularly elastic; for this reason mixtures of hard and soft wheat flour are often used. The fat should have an almost waxy texture and must remain solid through the sheeting steps. Butter, although frequently used, is not particularly suitable for puff pastry because its low melting point causes it to blend into the dough during the sheeting process. Bakers specializing in puff pastry often use special margarines containing high-melting-point fats.

There are several methods of making puff pastry. In the basic procedure dough is rolled into a rectangular layer of uniform thickness, and the fat is spread over two-thirds of the surface. The dough is next folded, producing three dough strata enclosing two fat layers. This preparation is next chilled by refrigeration, then rolled, reducing thickness until it reaches approximately the area of the original unfolded dough. The folding, refrigeration, and rolling procedure is repeated several times, and after the final rolling the dough is reduced to the thickness desired in the shaped raw piece.
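
A back-of-the-envelope count of the layers this procedure creates (Python; it assumes simple letter folds and ignores the merging and breakage of layers that occurs in real doughs, so the numbers are theoretical):

    # Theoretical layer count for puff pastry built with letter (three-) folds.
    # The initial spread-and-fold yields 2 fat layers between 3 dough strata;
    # each further letter fold triples the fat layers, while touching dough
    # layers merge, keeping dough layers at one more than fat layers.

    def laminated_layers(extra_letter_folds):
        fat = 2 * 3 ** extra_letter_folds
        return fat, fat + 1  # (fat layers, dough layers)

    for folds in range(5):
        print(folds, "extra folds ->", laminated_layers(folds))
    # After 4 extra folds: 162 fat layers interleaved with 163 dough layers.

The geometric growth explains why only a handful of fold-and-roll cycles yields the hundreds of paper-thin alternating layers that give the baked pastry its flakiness.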

Correctly prepared puff pastry will expand as much as 10 times during baking because of the evolution of large volumes of steam at the interface between shortening and dough. The foci for gassing are the microscopic air bubbles rolled into the dough during the layering process. If layering has been properly conducted, the finished pieces will be symmetrical and well shaped, with crisp, flaky outer layers.

Choux pastry

Choux pastry, used for cream puffs, is made by an entirely different method. Flour, salt, butter, and boiling water are mixed together, forming a fairly stiff dough, and whole eggs are incorporated by beating. Small pieces of the dough are baked on sheets, initially at high temperature. The air bubbles formed during mixing expand rapidly at baking temperatures, filling the interior with large, irregular cells, while the outside browns and congeals, forming a rather firm case. The interior, largely hollow, can be injected with such sweet or savoury fillings as whipped cream or shrimp in sauce.

Unleavened products: pie crusts

Pie crusts are the major volume item of unleavened products prepared by modern bakeries. Small amounts of baking powder or soda are sometimes added to pie-crust doughs, mostly in domestic cookery. This addition, although increasing tenderness, tends to eliminate the desirable flakiness and permits the filling liquid to soak into the crust more rapidly.

Pie crusts are usually simple mixtures of flour, water, shortening, and salt. The shortening proportion is about 30 to 40 percent of the dough. The amount of water is kept low, and the mixing process is kept short to minimize development of elasticity, which leads to shrinkage and development of toughness on baking. For flaky crust, the fat should not be completely dispersed through the dough but should remain in small particles. Commercial producers often employ special mixers using reciprocating, intermeshing arms to gently knead the dough. The doughs are chilled before mixing and forming to reduce smearing of the shortening.

Flakiness is also related to the type of shortening used. Lard is popular in home cookery for this reason and also because of its satisfying flavour. Because shortening should be solid at the temperature of mixing, oils are undesirable.

Milk or small amounts of corn sugar may be added to improve crust browning and for their flavour effect. About 1 to 2 percent of the dough will be salt.

Flat breads

A large part of the world’s population consumes so-called flat breads on a daily basis. Tortillas and pita bread are representative examples. Traditional tortillas are made from a paste of ground corn kernels that have been soaked in hot lime water. Corn tortillas contain no leaveners, although a wheat-flour version, which is gradually replacing the corn product, frequently contains a small amount of baking powder. Pita bread is a very thin disk of yeast-leavened dough that has been prepared so as to cause separation of the top and bottom surfaces of the baked product except at the circumference.

The dough portion of pizzas also can be considered a type of flat bread. Other examples can be found that vary widely in size, shape, and composition, although nearly all of them are based on a lean, yeast-leavened dough of rather tough consistency.

Mixing and forming

The mixing and bulk fermentation (if any) of flat-bread doughs can be performed in conventional equipment and vary only in minor details from the procedures used for loaf bread. There are two basic methods for forming the dough into circles: (1) separating the dough mass into pieces of the correct size for individual servings, rounding the chunks into roughly spheroidal shape, and passing the balls between pairs of rotating steel cylinders that flatten them into thin circles and (2) forming the dough mass into a continuous sheet of uniform thickness from which circles are cut. In addition, some pizzas are made by placing the dough balls on a baking pan and then pressing them to the desired thickness with a descending steel plate.

Baking

Thin disks of dough tend to balloon into ball-shaped objects in the oven, especially if the edges have been sealed by the cutting method (as is usually the case). Although these balloons collapse in later stages of baking or upon cooling, the initial rapid evolution of gas in the interior leaves the top and bottom surfaces more or less separated. Separation is a desirable feature for pita bread and some other varieties, and it can be enhanced by baking the dough circles in a very hot oven. For tortillas, on the other hand, separation is not desirable, so these products are mostly grilled or baked on a hot plate, heating them first on one side and then on the other.

Ballooning can also be prevented by “docking” (i.e., penetrating the top surface with many small punctures) or by slow baking. Of course, in the traditional pizza preparation method, ballooning is prevented by the load of sauce and other toppings placed on the crust before baking. Matzo dough is unleavened, but it still needs to be docked in order to prevent excessive expansion of the thin sheet in the oven.

Market preparation:

Slicing

Bread often is marketed in sliced form. Slicing is performed by parallel arrays of saw blades through which the loaves are carried by gravity or by conveyors. The blades may be endless bands carried on rotating drums, or relatively short strips held in a reciprocating frame. Most bread is sliced while still fairly warm, and the difficulty of cutting the sticky, soft crumb has led to development of coated blades and blade-cleaning devices. Horizontal slicing of hamburger rolls and similar products is accomplished by circular (disk) blades, usually two blades in a slicer, between which a connected array of four or six rolls is carried by a belt. The cutter blades are separated to avoid cutting completely through the roll, in order to leave a “hinge.”

Freezing

Freezing is an indispensable bakery industry process. Ordinary bread and rolls are rarely distributed and sold in frozen form because of the excessive cost in relation to product value, but a substantial percentage of all specialty products is sold in frozen form. Most bakery products respond well to freezing, although some cream fillings must be specially formulated to avoid syneresis, or gel breakdown. Rapid chilling in blast freezers is preferred, although milder methods may be used. Storage at −18 °C (0 °F) or lower is essential for quality maintenance. Thawing and refreezing is harmful to quality. Frozen bakery products can dehydrate under freezer conditions and must be packaged in containers resistant to moisture-vapour transfer.

Wrapping

Most American consumers prefer wrapped bread, and the trend toward wrapping is growing in other countries. Sanitary and aesthetic considerations dictate protection of the product from environmental contamination during distribution and display. Waxed paper was originally the only film used to package bread, after which cellophane became popular, and then polyethylene, polypropylene, and combination laminates became common. Other bakery products are packaged in a variety of containers ranging from open bags of greaseproof material to plastic trays with sealed foil overwraps.

Canning

The market for bakery products in tin cans is small, but hunters and campers find canned foods convenient. Canning protects against drying and environmental contamination, but texture staling and some degree of flavour staling still occur. In processing, an amount of dough or batter known to fill exactly the available space after baking is placed in a can, and the cover is loosely fastened to allow gases to escape. The product is then baked in a conventional oven, the lid is hermetically sealed immediately after baking, and the sealed can is sprayed with water to cool it. Vacuum sealing, needed to assure storage stability, can be routinely achieved by this method. Special can linings and sealing compounds are needed to survive oven temperatures, and the exterior should be dark-coloured (e.g., olive drab) in order to absorb radiant heat in the oven, avoiding long baking times. Spores of some pathogens are not killed by the conditions reached in the centre of the baked product, but pH and osmotic pressure can be adjusted to prevent growth of spoilage organisms. There is no record of food poisoning attributable to canned bakery food.

Quality maintenance:

Spoilage by microbes

Bakery products are subject to the microbiological spoilage problems affecting other foods. If moisture content is kept below 12 to 14 percent (depending on the composition), growth of yeast, bacteria, and molds is completely inhibited. Nearly all crackers and cookies fall below this level, although jams, marshmallow, and other adjuncts may be far higher in moisture content. Breads, cakes, sweet rolls, and some other bakery foods may contain as much as 38 to 40 percent water when freshly baked and are subject to attack by many fungi and a few bacteria.
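
A toy screen reflecting only the moisture thresholds named above (Python; the 12 to 14 percent boundary depends on composition, so the cutoffs here are approximations, not a safety rule):

    # Rough shelf-stability screen based solely on moisture content.

    def microbial_risk(moisture_pct):
        if moisture_pct < 12:
            return "growth of yeast, bacteria, and molds inhibited"
        if moisture_pct <= 14:
            return "borderline; depends on product composition"
        return "can support molds and some bacteria"

    for pct in (4, 13, 39):   # e.g. cracker, borderline product, fresh bread
        print(pct, "% ->", microbial_risk(pct))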

Fungi

To obtain maximum shelf life free of mold spoilage, high levels of sanitation must be maintained in baking and packing areas. Oven heat destroys all fungal life-forms, and any spoilage by these organisms is due to reinoculation after baking. A number of compounds have been proposed for use as fungistats in bread. Some have proved to be innocuous to molds, toxic to humans, or both. Soluble salts of propionic acid, principally sodium propionate, have been accepted and extend shelf life appreciably in the absence of a massive inoculum. Sorbic acid (or potassium sorbate) and acetic acid also have a protective effect.

The only widespread food poisoning in which bread has been a vector has resulted from ergot, a fungus infection of the rye plant. Ergot contamination of bread made from rye, or from blends of rye and wheat, has caused epidemics leading to numerous deaths.

Bacteria

Bacteria associated with bread spoilage include Bacillus mesentericus, responsible for “ropy” bread, and the less common but more spectacular Micrococcus prodigiosus, causative agent of “bleeding bread.” Neither ropy bread nor bleeding bread is particularly toxic. Enzymes secreted by B. mesentericus change the starch inside the loaf into a gummy substance stretching into strands when a piece of the bread is pulled apart. In addition to ropiness, the spoiled bread will have an off-aroma sometimes characterized as fruity or pineapple-like. Formerly, when ropiness occurred, bakers acidified doughs with vinegar as a protective measure, but this type of spoilage is rare in bread from modern bakeries.

M. prodigiosus causes red spots to appear in bread. At an advanced stage those spots of high bacterial population may liquefy, emphasizing the similarity to blood, which has sometimes led the superstitious to attribute religious significance to the phenomenon. The organism will not survive ordinary baking temperatures—unlike B. mesentericus, which forms spores capable of survival in the centre of the loaf, where the temperature rises only to about 100 °C (212 °F).

Baked goods containing such high-moisture adjuncts as pastry creams and pie fillings are susceptible to contamination by food-spoilage organisms, including Salmonella and Streptococcus. Cream and custard pies are recognized health hazards when stored at room temperature for any length of time, and some communities ban their sale during summer. Storage in frozen form eliminates the hazard.

Staling

Undesirable changes in bakery products can occur independently of microbial action. Staling involves changes in texture, flavour, and appearance. Firming of the interior, or “crumb,” is a highly noticeable alteration in bread and other low-density, lean products. Elasticity is lost, and the structure becomes crumbly. Although loss of moisture produces much the same effect, texture staling can occur without any appreciable drying. Such firming is due to changes in the molecular status of the starch, specifically to a kind of aggregation of sections of the long-chain molecules into micelles, making the molecules more rigid and less soluble than in the newly gelatinized granule. Bread that has undergone texture staling can be softened by heating to about 60–65 °C (140–150 °F). However, its texture does not return to that of fresh bread, being gummier and more elastic. In addition, care must be exercised to prevent drying during heating.

Starch retrogradation, the cause of ordinary texture staling of the crumb, can be slowed by the addition of certain compounds to the dough. Most of the effective chemicals are starch-complexing agents. Monoglycerides of fatty acids have been widely used as dough additives to retard staling in the finished loaf.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2490 2025-03-15 00:02:54

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 50,097

Re: Miscellany

2390) Physiotherapist

Gist

A physiotherapist, also known as a physical therapist, is a healthcare professional who helps people affected by injury, illness, or disability through movement and exercise, manual therapy, education, and advice, aiming to maintain health and manage pain.

A physiotherapist, or physical therapist, works with patients to help them manage pain, balance, mobility, and motor function. Most people at some point in their lifetime will work with a physiotherapist. You may have been referred to one after a car accident, after surgery, or to address low back pain.

Physical therapy, also known as physiotherapy, is a healthcare profession that uses physical interventions, patient education, and other strategies to promote, maintain, or restore health and function through movement and exercise.

Physical therapy is a professional career that has many specialties including musculoskeletal, orthopedics, cardiopulmonary, neurology, endocrinology, sports medicine, geriatrics, pediatrics, women's health, wound care and electromyography. Neurological rehabilitation is, in particular, a rapidly emerging field.

Summary

Physical therapy is a health profession that aims to improve movement and mobility in persons with compromised physical functioning. Professionals in the field are known as physical therapists.

History of physical therapy

Although the use of exercise as part of a healthy lifestyle is ancient in its origins, modern physical therapy appears to have originated in the 19th century with the promotion of massage and manual muscle therapy in Europe. In the early 20th century, approaches in physical therapy were used in the United States to evaluate muscle function in those affected by polio. Physical therapists developed programs to strengthen muscles when possible and helped polio patients learn how to use their remaining musculature to accomplish functional mobility activities. About the same time, physical therapists in the United States were also trained to work with soldiers returning from World War I; these therapists were known as “reconstruction aides.” Some worked in hospitals close to the battlefields in France to begin early rehabilitation of wounded soldiers. Typical patients were those with amputated limbs, head injuries, and spinal cord injuries. Physical therapists later practiced in a wide variety of settings, including private practices, hospitals, rehabilitation centres, nursing homes, public schools, and home health agencies. In each of those settings, therapists work with other members of the health care team toward common goals for the patient.

Patients of physical therapy

Often, persons who undergo physical therapy have experienced a decrease in quality of life as a result of physical impairments or functional limitations caused by disease or injury. Individuals who often are in need of physical therapy include those with back pain, elderly persons with arthritis or balance problems, injured athletes, infants with developmental disabilities, and persons who have had severe burns, strokes, or spinal cord injuries. Persons whose endurance for movement is affected by heart or lung problems or other illnesses are also helped by exercise and education to build activity tolerance and improve muscle strength and efficiency of movement during functional activities. Individuals with limb deficiencies are taught to use prosthetic replacement devices.

Patient management

Physical therapists complete an examination of the individual and work with him or her to determine goals that can be achieved primarily through exercise prescription and functional training to improve movement. Education is a key component of patient management. Adults with impairments and functional limitations can be taught to recover or improve movements impaired by disease and injury and to prevent injury and disability caused by abnormal posture and movement. Infants born with developmental disabilities are helped to learn movements they have never done before, with an emphasis on functional mobility for satisfying participation in family and community activities. Some problems, such as pain, may be addressed with treatments, including mobilization of soft tissues and joints, electrotherapy, and other physical agents.

Progress in physical therapy

New areas of practice are continually developing in the field of physical therapy. The scope of practice of a growing specialty in women’s health, for example, includes concerns such as incontinence, pelvic/vaginal pain, prenatal and postpartum musculoskeletal pain, osteoporosis, rehabilitation following breast surgery, and lymphedema (accumulation of fluids in soft tissues). Females across the life span, from the young athlete to the childbearing, menopausal, or elderly woman, can benefit from physical therapy. Education for prevention, wellness, and exercise is another important area in addressing physical health for both men and women.

Details

Physical therapy (PT), also known as physiotherapy, is a healthcare profession, as well as the care provided by physical therapists who promote, maintain, or restore health through patient education, physical intervention, disease prevention, and health promotion. Physical therapist is the term used for such professionals in the United States, and physiotherapist is the term used in many other countries.

The career has many specialties including musculoskeletal, orthopedics, cardiopulmonary, neurology, endocrinology, sports medicine, geriatrics, pediatrics, women's health, wound care and electromyography. PTs practice in many settings, both public and private.

In addition to clinical practice, other aspects of physical therapy practice include research, education, consultation, and health administration. Physical therapy is provided as a primary care treatment or alongside, or in conjunction with, other medical services. In some jurisdictions, such as the United Kingdom, physical therapists may have the authority to prescribe medication.

Overview

Physical therapy addresses the illnesses or injuries that limit a person's abilities to move and perform functional activities in their daily lives. PTs use an individual's history and physical examination to arrive at a diagnosis and establish a management plan and, when necessary, incorporate the results of laboratory and imaging studies such as X-ray, CT, or MRI findings. Physical therapists can use sonography to diagnose and manage common musculoskeletal, nerve, and pulmonary conditions. Electrodiagnostic testing (e.g., electromyograms and nerve conduction velocity testing) may also be used.

PT management commonly includes prescription of or assistance with specific exercises; manual therapy and manipulation; mechanical devices such as traction; education; electrophysical modalities, including heat, cold, electricity, sound waves, and radiation; assistive devices, prostheses, and orthoses; and other interventions. In addition, PTs work with individuals to prevent the loss of mobility before it occurs by developing fitness and wellness-oriented programs for healthier and more active lifestyles, providing services to individuals and populations to develop, maintain, and restore maximum movement and functional ability throughout the lifespan. This includes providing treatment in circumstances where movement and function are threatened by aging, injury, disease, or environmental factors. Functional movement is central to what it means to be healthy.

PTs practice in many settings, such as privately owned physical therapy clinics, outpatient clinics or offices, health and wellness clinics, rehabilitation hospital facilities, skilled nursing facilities, extended care facilities, private homes, education and research centers, schools, hospices, industrial workplaces or other occupational environments, fitness centers, and sports training facilities.

Physical therapists also practice in non-patient care roles such as health policy,[9][10] health insurance, health care administration and as health care executives. Physical therapists are involved in the medical-legal field serving as experts, performing peer review and independent medical examinations.

Education varies greatly by country. The span of education ranges from some countries having little formal education to others having doctoral degrees and post-doctoral residencies and fellowships.

Regarding its relationship to other healthcare professions, physiotherapy is one of the allied health professions. World Physiotherapy has signed a "memorandum of understanding" with the four other members of the World Health Professions Alliance "to enhance their joint collaboration on protecting and investing in the health workforce to provide safe, quality and equitable care in all settings".

Additional Information

Physical therapy is a common treatment that can help you recover after an injury or surgery, or manage symptoms from a health condition that affects how you move. It’s a combination of exercises, stretches and movements that’ll increase your strength, flexibility and mobility to help you move safely and more confidently.

Overview:

What is physical therapy (physiotherapy)?

Physical therapy, or physiotherapy, is treatment that helps you improve how your body performs physical movements. It can be part of a generalized pain management plan or a specific treatment for an injury or health condition. It’s common to need physical therapy after many types of surgery, too. You might also need physical therapy to help prevent injuries before they happen.

You’ll work with a physical therapist — a healthcare provider who’ll make sure you’re safe during your therapy.

How long you’ll need physical therapy depends on which injuries or health conditions you have and which area of your body needs help moving better. Some people only need a few weeks of physiotherapy to help with a short-term issue. Others need it for months or years to manage symptoms of a chronic (long-term) condition.

What does physical therapy treat?

Most people start physical therapy after a healthcare provider diagnoses an injury or condition. Examples include:

* Sports injuries.
* Neck pain.
* Back pain.
* Knee pain.
* Hip pain.
* Carpal tunnel syndrome.
* Tendinopathy (including tendinitis).
* Rotator cuff tears.
* Knee ligament injuries (like ACL tears).
* Temporomandibular joint (TMJ) disorders.
* Concussions.
* Strokes.
* Spinal cord injuries.
* Traumatic brain injuries.

You might need physiotherapy to manage a chronic condition, including:

* Chronic obstructive pulmonary disease (COPD).
* Cerebral palsy.
* Multiple sclerosis (MS).
* Muscular dystrophy.
* Parkinson’s disease.
* Cystic fibrosis.

What are the types of physiotherapy?

Physical therapy is a combination of hands-on techniques (a therapist moving part of your body) and exercises or movements you perform with a physical therapist’s supervision. Physical therapy can include:

* Stretching.
* Strength training (with or without weights or exercise equipment).
* Massage.
* Heat or cold therapy.
* Hydrotherapy.
* Transcutaneous electrical nerve stimulation (TENS).

Physical therapy is usually an outpatient treatment, which means you aren’t staying in a hospital or healthcare facility while you do it. You might start therapy while you’re staying in the hospital after an injury or surgery and then continue it after you go home.

Depending on where you live and which type of physical therapy you need, you might do your therapy at a specialized clinic, in the hospital or even in your own home. You might be able to do physical therapy with a virtual visit, either on a video call or over the phone (telehealth).


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2491 2025-03-16 00:01:12

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 50,097

Re: Miscellany

2391) Sailor

Gist

Sailor, mariner, salt, seaman, tar are terms for a person who leads a seafaring life. A sailor or seaman is one whose occupation is on board a ship at sea, especially a member of a ship's crew below the rank of petty officer: a sailor before the mast; an able-bodied seaman.

Sailors, or deckhands, operate and maintain the vessel and deck equipment. They make up the deck crew and keep all parts of a ship, other than areas related to the engine and motor, in good working order.

Summary

Sailors, also called deckhands, operate and maintain vessels and deck equipment, and keep their ship in good working order. Sailors stand watch for hazards or other vessels in the ship's path, and keep track of navigational buoys to stay on course.

Details

A sailor, seaman, mariner, or seafarer is a person who works aboard a watercraft as part of its crew, and may work in any one of a number of different fields that are related to the operation and maintenance of a ship.

The profession of the sailor is old, and the term sailor has its etymological roots in a time when sailing ships were the main mode of transport at sea, but it now refers to the personnel of all watercraft regardless of the mode of transport, and encompasses people who operate ships professionally, be it for a military navy or civilian merchant navy, as a sport or recreationally. In a navy, there may be further distinctions: sailor may refer to any member of the navy even if they are based on land, while seaman may refer to a specific enlisted rank.

Professional mariners

Seafarers hold a variety of professions and ranks, each of which carries unique responsibilities which are integral to the successful operation of an ocean-going vessel. A ship's crew can generally be divided into four main categories: the deck department, the engineering department, the steward's department, and others.

Deck department

Officer positions in the deck department include, but are not limited to, the master and the chief, second, and third officers (mates). The official classifications for unlicensed members of the deck department are able seaman and ordinary seaman.[1] With some variation, the chief mate is most often charged with the duties of cargo mate. The second mate is charged with being the medical officer in case of a medical emergency. All three mates each stand four-hour morning and afternoon watches on the bridge when underway at sea.

A common deck crew for a ship includes:

* (1) Captain / Master
* (1) Chief Officer / First Mate
* (1) Second Officer / Second Mate
* (1) Third Officer / Third Mate
* (1) Boatswain (unlicensed Petty Officer: Qualified member Deck Dept.)
* (2) Able seamen (unlicensed qualified rating)
* (2) Ordinary seamen (entry-level rating)
* (0-1) Deck cadet / unlicensed trainee navigator / Midshipman

Engineering department

A ship's engineering department consists of the members of a ship's crew that operate and maintain the propulsion and other systems on board the vessel. Marine engineering staff also deal with the "hotel" facilities on board, notably the sewage, lighting, air conditioning and water systems. Engineering staff manages bulk fuel transfers from a fuel-supply barge in port. When underway at sea, the second and third engineers will often be occupied with oil transfers from storage tanks to active working tanks. Cleaning of oil purifiers is another regular task. Engineering staff is required to have training in firefighting and first aid. Additional duties include maintaining the ship's boats and performing other nautical tasks. Engineers play a key role in cargo loading/discharging gear and safety systems, though the specific cargo discharge function remains the responsibility of deck officers and deck workers.

A common engineering crew for a ship includes:

* (1) Chief Engineer
* (1) Second Engineer / First Assistant Engineer
* (1) Third Engineer / Second Assistant Engineer
* (1) Fourth Engineer / Third Assistant Engineer
* (1) Motorman (unlicensed Junior Engineer: Qualified member Engine Dept.)
* (2) Oiler (unlicensed qualified rating)
* (2) Wiper (entry-level rating)
* (0–1) Engine Cadet / unlicensed Trainee engineer

American ships also carry a qualified member of the engine department. Other possible positions include motorman, machinist, electrician, refrigeration engineer and tankerman.

Steward's department

A typical steward's department for a cargo ship is a chief steward, a chief cook and a steward's assistant. All three positions are typically filled by unlicensed personnel.

The chief steward directs, instructs, and assigns personnel performing such functions as preparing and serving meals; cleaning and maintaining officers' quarters and steward department areas; and receiving, issuing, and inventorying stores.

The chief steward also plans menus and compiles supply, overtime, and cost-control records. The steward may requisition or purchase stores and equipment. Galley duties may also include baking.

A chief steward's duties may overlap with those of the steward's assistant, the chief cook, and other Steward's department crewmembers.

A person in the United States Merchant Marine has to have a Merchant Mariner's Document issued by the United States Coast Guard in order to serve as a chief steward. All chief cooks who sail internationally are similarly documented by their respective countries because of international conventions and agreements.

The only time that steward department staff are charged with duties outside the steward department is during the execution of the fire and boat drill.

Other departments

Various types of staff officer positions may exist on board a ship, including junior assistant purser, senior assistant purser, purser, chief purser, medical doctor, professional nurse, marine physician assistant, and hospital corpsman. In the USA these jobs are considered administrative positions and are therefore regulated by Certificates of Registry issued by the United States Coast Guard. Pilots are also merchant marine officers and are licensed by the Coast Guard.

Working conditions

Working conditions vary according to the nature of the sailor's employment. Although sailors may be employed to be at sea for extended periods of time, it is often not the case, according to the U.S. Navy, that they will spend the entirety of that period under way. Since ships are often docked at a port for a significant period, sailors more typically spend '6 to 9 months' at sea.

Mariners spend extended periods at sea. Most deep-sea mariners are hired for one or more voyages that last for several months. There is no job security after that. The length of time between voyages varies by job availability and personal preference.

The rate of unionization for these workers in the United States is about 36 percent, much higher than the average for all occupations. Consequently, merchant marine officers and seamen, both veterans and beginners, are hired for voyages through union hiring halls or directly by shipping companies. Hiring halls fill jobs by the length of time the person has been registered at the hall and by their union seniority. Hiring halls typically are found in major seaports.
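
Purely as an illustration of the stated dispatch rule (Python; the names and figures are invented, and real hiring halls apply more criteria than these two):

    # Hiring-hall ordering: longest-registered first, with higher union
    # seniority breaking ties. All data below are invented for illustration.

    candidates = [
        {"name": "A", "days_registered": 30, "seniority_years": 2},
        {"name": "B", "days_registered": 30, "seniority_years": 11},
        {"name": "C", "days_registered": 5,  "seniority_years": 20},
    ]

    dispatch_order = sorted(
        candidates,
        key=lambda c: (-c["days_registered"], -c["seniority_years"]),
    )
    print([c["name"] for c in dispatch_order])  # ['B', 'A', 'C']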

At sea, on larger vessels members of the deck department usually stand watch for four hours and are off for eight hours, seven days a week.
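
A minimal sketch of the three-watch rotation this schedule implies (Python; the team labels and the midnight starting point are my own assumptions):

    # Three watch teams covering six 4-hour watches per day: each team is
    # 4 hours on and 8 hours off, so the rotation repeats every 12 hours.

    TEAMS = ["watch team 1", "watch team 2", "watch team 3"]

    def daily_watch_bill():
        bill = []
        for watch in range(6):
            start = watch * 4
            end = (start + 4) % 24
            bill.append((f"{start:02d}00-{end:02d}00", TEAMS[watch % 3]))
        return bill

    for slot, team in daily_watch_bill():
        print(slot, team)
    # Each team stands two watches per day, e.g. 0000-0400 and 1200-1600.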

Mariners work in all weather conditions. Working in damp and cold conditions often is inevitable, although ships try to avoid severe storms while at sea. It is uncommon for modern vessels to suffer disasters such as fire, explosion, or a sinking. Yet workers face the possibility of having to abandon ship on short notice if it collides with other vessels or runs aground. Mariners also risk injury or death from falling overboard and from hazards associated with working with machinery, heavy loads, and dangerous cargo. However, modern safety management procedures, advanced emergency communications, and effective international rescue systems place modern mariners in a much safer position.

Most newer vessels are air conditioned, soundproofed from noisy machinery, and equipped with comfortable living quarters. These amenities have helped ease the sometimes difficult circumstances of long periods away from home. Also, modern communications such as email, instant messaging and social media platforms link modern mariners to their families. Nevertheless, some mariners dislike the long periods away from home and the confinement aboard ship. They consequently leave the profession.

Life at sea

Professional mariners live on the margins of society, with much of their life spent beyond the reach of land. They face cramped, stark, noisy, and dangerous conditions at sea. Yet men and women still go to sea. For some, the attraction is a life unencumbered with the restraints of life ashore. Seagoing adventure and a chance to see the world also appeal to many seafarers. Whatever the calling, those who live and work at sea invariably confront social isolation.

Findings by the Seafarers International Research Centre indicate that a leading cause of mariners leaving the industry is "almost invariably because they want to be with their families". U.S. merchant ships typically do not allow family members to accompany seafarers on voyages. Industry experts increasingly recognize isolation, stress, and fatigue as occupational hazards. Advocacy groups such as the International Labour Organization, a United Nations agency, and the Nautical Institute seek improved international standards for mariners.

Helen Sampson, a professor at Cardiff University, notes that a key challenge facing mariners is adjusting to time zones as the ship crosses the oceans. A common solution is to shift the ship's clocks gradually, which means wake-up times change periodically. Sampson further notes that ships often have a 'dry ship' or 'no alcohol' policy that prohibits even the possession of alcohol, with 'random testing' taking place 'fairly regularly'.

One's service aboard ships typically extends for months at a time, followed by protracted shore leave. However, some seamen secure jobs on ships they like and stay aboard for years. In rare cases, veteran mariners choose never to go ashore when in port.

Further, the quick turnaround of many modern ships, spending only a matter of hours in port, limits a seafarer's free time ashore. Moreover, some seafarers entering U.S. ports from a watch list of 25 countries deemed high-risk face restrictions on shore leave due to security concerns in the post-9/11 environment. However, shore leave restrictions while in U.S. ports affect American seamen as well. For example, the International Organization of Masters, Mates & Pilots notes a trend of U.S. shipping terminal operators restricting seamen from traveling from the ship to the terminal gate. Further, in cases where transit is allowed, special "security fees" are at times assessed.

Such restrictions on shore leave coupled with reduced time in port by many ships translate into longer periods at sea. Mariners report that extended periods at sea living and working with shipmates who for the most part are strangers takes getting used to. At the same time, there is an opportunity to meet people from a wide range of ethnic and cultural backgrounds. Recreational opportunities have improved aboard some U.S. ships, which may feature gyms and day rooms for watching movies, swapping sea stories, and other activities. And in some cases, especially tankers, it is made possible for a mariner to be accompanied by members of his family. However, a mariner's off-duty time at sea is largely a solitary affair, pursuing hobbies, reading, writing letters, and sleeping.

Internet accessibility is fast coming to the sea with the advent of cheap satellite communication, mainly from Inmarsat. The availability of affordable roaming SIM cards with online top-up facilities has also contributed to improved connection with friends and family at home.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2492 2025-03-17 00:02:42

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 50,097

Re: Miscellany

2392) Trademark

Gist

A trademark is a unique symbol or word(s) used to represent a business or its products. Once registered, that same symbol or series of words cannot be used by any other organization for as long as it remains in use and the proper paperwork and fees are maintained.

Summary

A trademark is any visible sign or device used by a business enterprise to identify its goods and distinguish them from those made or carried by others. Trademarks may be words or groups of words, letters, numerals, devices, names, the shape or other presentation of products or their packages, colour combinations with signs, combinations of colours, and combinations of any of the enumerated signs. They are often a key element of brand marketing.

By indicating the origin of goods and services, trademarks serve two important purposes. They provide manufacturers and traders with protection from unfair competition (one person representing or passing for sale his goods as the goods of another), and they provide customers with protection from imitations (assuring them of a certain expected quality). In terms of the protection of the rights of trademark holders, the law in most countries extends beyond the rule of unfair competition, for a trademark is considered the property of the holder; and, as such, unauthorized use of the trademark constitutes not only misrepresentation and fraud but also a violation of the holder’s private property rights.

In most countries, registration is a prerequisite for ownership and protection of the mark. In the United States, however, the trademark right is granted by the mere use of the mark; registering the mark provides the owner only with certain procedural advantages and is not a prerequisite for legal protection.

It is not necessary for the mark to be in use before a registration application is filed, although most countries require applicants to have a bona fide intent to use the mark after registration. Formerly, the United States was one of the few countries requiring actual use prior to registration. Under the Trademark Law Revision Act of 1988, the United States permits registration upon application attesting to an intent to use the trademark in the near future.

In many countries, ownership of a trademark is not acknowledged until the mark has been registered and gone uncontested for a given period of time, so as to afford protection to a prior user of the mark. Even after that period has passed, the prior user may move to have the registration canceled. After a certain number of years (from three to seven, depending on the country), the registration and ownership become uncontestable.

For a mark to be registered, it must be distinctive. In many cases a mark, when first brought into use, may not have been distinctive, but over time the public may have attached a secondary meaning to it, forming a specific association between the mark and the product, thus making the mark distinctive, hence registrable.

When a question of infringement (unauthorized use) of a trademark arises, the primary legal question addressed in court is whether the accused infringer’s use of the mark is likely to confuse the purchasing public. In most countries, including the United States, protection against infringement extends to goods or services similar to those covered by the registration. In countries following British law (some 66 nations), an infringement action can, however, be brought only for the precise goods identified in the registration.

For a long time the rights of a trademark could not be transferred separately from the business to which it was attached. Now, however, because trademarks are deemed property, they may be sold, inherited, or leased, as long as such a transfer of rights does not deceive the public. In most countries a public notice of such a transfer must be given. A common form of transfer is international licensing, whereby a trademark holder allows the use of his mark in a foreign country for a fee. Often in such instances the foreign licensee must meet certain product quality requirements so that his use of the mark does not deceive the consumer.

There are some instances in which the right of trademark may be lost. The two most serious reasons for loss of trademark are the failure to use a registered trademark and the use of a trademark that becomes a generic term. In many countries if a trademark is not used within a certain number of years, the rights of protection of the mark are forfeited. In the United States when a trademark becomes a generic term in the public’s mind (such as Aspirin, Kleenex, or Linoleum) the courts may decide that the trademark holder no longer has rights of protection. In other countries the courts are not concerned if the mark is considered generic, and the original trademark holder retains all rights and privileges of the mark.

Although each nation has its own trademark law, there are increasingly multinational efforts to ease registration and enforcement practices. The first international agreement was the Paris Convention for the Protection of Industrial Property of 1883, which has been regularly revised ever since. It sets minimum standards for trademark protection and provides similar treatment for foreign trademark holders as for nationals. Approximately 100 countries are party to the Paris Convention. Uniform trademark laws have been enacted by the African Intellectual Property Organization in 13 French-speaking African countries, the Andean Common Market in Colombia, Ecuador, and Peru, in the Benelux and Scandinavian countries, and under the Central American Treaty on Industrial Property (Costa Rica, El Salvador, Guatemala, and Nicaragua). In addition, nearly 30 countries (mostly European but including Morocco, Algeria, Vietnam, and North Korea) adhere to the Madrid Agreement, which provides for a single application process through filing in a central office located in Geneva.

Details

A trademark (also written trade mark or trade-mark) is a form of intellectual property that consists of a word, phrase, symbol, design, or a combination that identifies a product or service from a particular source and distinguishes it from others. Trademarks can also extend to non-traditional marks like drawings, symbols, 3D shapes like product designs or packaging, sounds, scents, or specific colors used to create a unique identity. For example, Pepsi® is a registered trademark associated with soft drinks, and the distinctive shape of the Coca-Cola® bottle is a registered trademark protecting Coca-Cola's packaging design.

The primary function of a trademark is to identify the source of goods or services and prevent consumers from confusing them with those from other sources. Legal protection for trademarks is typically secured through registration with governmental agencies, such as the United States Patent and Trademark Office (USPTO) or the European Union Intellectual Property Office (EUIPO). Registration provides the owner certain exclusive rights and provides legal remedies against unauthorized use by others.

Trademark laws vary by jurisdiction but generally allow owners to enforce their rights against infringement, dilution, or unfair competition. International agreements, such as the Paris Convention and the Madrid Protocol, simplify the registration and protection of trademarks across multiple countries. Additionally, the TRIPS Agreement sets minimum standards for trademark protection and enforcement that all member countries must follow.

Terminology:

Trademarks

The term trademark can also be spelled trade mark in regions such as the EU, UK, and Australia, and as trade-mark in Canada. Despite the different spellings, all three terms denote the same concept.

In the United States, the Lanham Act defines a trademark as any word, phrase, symbol, design, or combination of these things used to identify goods or services. Trademarks help consumers recognize a brand in the marketplace and distinguish it from competitors. A service mark, also covered under the Lanham Act, is a type of trademark used to identify services rather than goods. The term trademark is used to refer to both trademarks and service marks.

Similarly, the World Intellectual Property Organization (WIPO) defines a trademark as a sign capable of distinguishing the goods or services of one enterprise from those of other enterprises. WIPO administers the Madrid Protocol, which allows trademark owners worldwide to file one application to register their trademark in multiple countries.

Almost anything that identifies the source of goods or services can serve as a trademark. In addition to words, slogans, designs, or combinations of these, trademarks can also include non-traditional marks like sounds, scents, or colors.

Under the broad heading of trademarks, there are several specific types commonly encountered, such as trade dress, collective marks, and certification marks:

* Trade dress: the design and packaging of a product. For example, the distinctive decor of the Hard Rock Cafe restaurant chain is considered trade dress. The Lanham Act protects trade dress when it serves as a source identifier, similar to a trademark.
* Collective mark: A type of trademark used by members of a collective organization to indicate that the goods or services originate from members who meet the organization's admission standards. For instance, the collective mark BEST WESTERN is used by its members for hotel services.
* Certification mark: A type of trademark used by authorized individuals or businesses to indicate to consumers that specific goods or services, or their providers, meet the quality standards set by a certifying organization. For example, the ENERGY STAR certification mark is used by authorized users to show that their products meet the energy efficiency standards established by the U.S. Environmental Protection Agency.

Trademark symbols:

A trademark may be designated by the following symbols:

™: For unregistered trademarks related to goods.
℠: For unregistered service marks connected to services.
®: Reserved for registered trademarks.

Registered trademark symbol:

While ™ and ℠ apply to unregistered marks (™ for goods and ℠ for services), the ® symbol indicates official registration with the relevant national authority. Using the ® symbol for unregistered trademarks is misleading and can be treated as an unfair business practice. It may also result in civil or criminal penalties.

Brand vs. trademark

A brand is a marketing concept that reflects how consumers perceive a product or service. It has a much wider meaning and refers to the proprietary visual, emotional, rational, and cultural image that customers associate with a company or product.

A trademark, by contrast, offers legal protection for a brand with enforceable rights over the brand's identity and distinguishing elements.

[Image: famous trademark logos]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2493 2025-03-18 00:02:25

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 50,097

Re: Miscellany

2393) Primate

Gist

A primate is any animal that belongs to the group that includes humans, monkeys, and the tailless monkey-like animals known as apes.

Humans are primates, a diverse group that includes some 200 species. Monkeys, lemurs and apes are our cousins, and we have all evolved from a common ancestor over the last 60 million years. Because primates are related, they are genetically similar.

There are now only about 20 living species of apes, and they are divided into two major groups: the lesser apes, containing the gibbons, and the great apes, containing the orang-utans, gorillas, chimpanzees and humans.

Summary

A primate, in zoology, is any mammal of the group that includes the lemurs, lorises, tarsiers, monkeys, apes, and humans. The order Primates, including more than 500 species, is the third most diverse order of mammals, after rodents (Rodentia) and bats (Chiroptera).

Although there are some notable variations between some primate groups, they share several anatomic and functional characteristics reflective of their common ancestry. When compared with body weight, the primate brain is larger than that of other terrestrial mammals, and it has a fissure unique to primates (the Calcarine sulcus) that separates the first and second visual areas on each side of the brain. Whereas all other mammals have claws or hooves on their digits, only primates have flat nails. Some primates do have claws, but even among these there is a flat nail on the big toe (hallux). In all primates except humans, the hallux diverges from the other toes and together with them forms a pincer capable of grasping objects such as branches. Not all primates have similarly dextrous hands; only the catarrhines (Old World monkeys, apes, and humans) and a few of the lemurs and lorises have an opposable thumb. Primates are not alone in having grasping feet, but as these occur in many other arboreal mammals (e.g., squirrels and opossums), and as most present-day primates are arboreal, this characteristic suggests that they evolved from an ancestor that was arboreal. So too does primates’ possession of specialized nerve endings (Meissner’s corpuscles) in the hands and feet that increase tactile sensitivity. As far as is known, no other placental mammal has them. Primates possess dermatoglyphics (the skin ridges responsible for fingerprints), but so do many other arboreal mammals.

The eyes face forward in all primates so that the eyes’ visual fields overlap. Again, this feature is not by any means restricted to primates, but it is a general feature seen among predators. It has been proposed, therefore, that the ancestor of the primates was a predator, perhaps insectivorous. The optic fibres in almost all mammals cross over (decussate) so that signals from one eye are interpreted on the opposite side of the brain, but, in some primate species, up to 40 percent of the nerve fibres do not cross over.

Primate teeth are distinguishable from those of other mammals by the low, rounded form of the molar and premolar cusps, which contrast with the high, pointed cusps or elaborate ridges of other placental mammals. This distinction makes fossilized primate teeth easy to recognize.

Fossils of the earliest primates date to the Early Eocene Epoch (56 million to 41.2 million years ago) or perhaps to the Late Paleocene Epoch (59.2 million to 56 million years ago). Though they began as an arboreal group, and many (especially the platyrrhines, or New World monkeys) have remained thoroughly arboreal, many have become at least partly terrestrial, and many have achieved high levels of intelligence. It is certainly no accident that the most intelligent of all forms of life, the only one capable of constructing the Encyclopædia Britannica, belongs to this order.

By the 21st century the populations of approximately 75 percent of all primate species were falling, and some 60 percent were considered either threatened or endangered species. Habitat loss and fragmentation from logging, mining, urban sprawl, and the conversion of natural areas to agriculture and livestock raising are the primary threats to many species. Other causes of widespread population declines include hunting and poaching, the pet trade, the illegal trade in primate body parts, and the susceptibility of some primates to infection with human diseases.

Details

Primates is an order of mammals, which is further divided into the strepsirrhines, which include lemurs, galagos, and lorisids; and the haplorhines, which include tarsiers and simians (monkeys and apes). Primates arose 74–63 million years ago from small terrestrial mammals that adapted to life in tropical forests: many primate characteristics represent adaptations to the challenging environment among the treetops, including large brain sizes, binocular vision, color vision, vocalizations, shoulder girdles allowing a large degree of movement in the upper limbs, and opposable thumbs (in most but not all) that enable better grasping and dexterity. Primates range in size from Madame Berthe's mouse lemur, which weighs 30 g (1 oz), to the eastern gorilla, weighing over 200 kg (440 lb). There are 376–524 species of living primates, depending on which classification is used. New primate species continue to be discovered: over 25 species were described in the 2000s, 36 in the 2010s, and six in the 2020s.

Primates have large brains (relative to body size) compared to other mammals, as well as an increased reliance on visual acuity at the expense of the sense of smell, which is the dominant sensory system in most mammals. These features are more developed in monkeys and apes, and noticeably less so in lorises and lemurs. Some primates, including gorillas, humans and baboons, are primarily ground-dwelling rather than arboreal, but all species have adaptations for climbing trees. Arboreal locomotion techniques used include leaping from tree to tree and swinging between branches of trees (brachiation); terrestrial locomotion techniques include walking on two hindlimbs (bipedalism) and modified walking on four limbs (quadrupedalism) via knuckle-walking.

Primates are among the most social of all animals, forming pairs or family groups, uni-male harems, and multi-male/multi-female groups. Non-human primates have at least four types of social systems, many defined by the amount of movement by adolescent females between groups. Primates have slower rates of development than other similarly sized mammals, reach maturity later, and have longer lifespans. Primates are also the most cognitively advanced animals, with humans (genus Homo) capable of creating complex languages and sophisticated civilizations, and non-human primates are recorded to use tools. They may communicate using facial and hand gestures, smells and vocalizations.

Close interactions between humans and non-human primates (NHPs) can create opportunities for the transmission of zoonotic diseases, especially virus diseases including herpes, measles, ebola, rabies and hepatitis. Thousands of non-human primates are used in research around the world because of their psychological and physiological similarity to humans. About 60% of primate species are threatened with extinction. Common threats include deforestation, forest fragmentation, monkey drives, and primate hunting for use in medicines, as pets, and for food. Large-scale tropical forest clearing for agriculture most threatens primates.

Etymology

The English name primates is derived from Old French or French primat, from a noun use of Latin primat-, from primus ('prime, first rank'). The name was given by Carl Linnaeus because he thought this the "highest" order of animals. The relationships among the different groups of primates were not clearly understood until relatively recently, so the commonly used terms are somewhat confused. For example, ape has been used either as an alternative for monkey or for any tailless, relatively human-like primate.

Sir Wilfrid Le Gros Clark was one of the primatologists who developed the idea of trends in primate evolution and the methodology of arranging the living members of an order into an "ascending series" leading to humans. Commonly used names for groups of primates such as prosimians, monkeys, lesser apes, and great apes reflect this methodology. According to our current understanding of the evolutionary history of the primates, several of these groups are paraphyletic; that is, they do not include all the descendants of a common ancestor.

In contrast with Clark's methodology, modern classifications typically identify (or name) only those groupings that are monophyletic; that is, such a named group includes all the descendants of the group's common ancestor.

All groups with scientific names are clades, or monophyletic groups, and the sequence of scientific classification reflects the evolutionary history of the related lineages. The traditionally named groups are listed below; they form an "ascending series" (per Clark, see above), and several of them are paraphyletic:

* Prosimians contain two monophyletic groups (the suborder Strepsirrhini, or lemurs, lorises and allies, as well as the tarsiers of the suborder Haplorhini); it is a paraphyletic grouping because it excludes the Simiiformes, which also are descendants of the common ancestor Primates (see the sketch after this list).
* Monkeys comprise two monophyletic groups, New World monkeys and Old World monkeys, but the grouping is paraphyletic because it excludes hominoids, superfamily Hominoidea, also descendants of the common ancestor Simiiformes.
* Apes as a whole, and the great apes, are paraphyletic if the terms are used such that they exclude humans.
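
As a concrete illustration of the monophyly test these bullets apply, here is a minimal sketch in Python; the toy tree encoding and the helper function are hypothetical, built only from the group names in the first bullet, and are not part of the source:

# Minimal sketch: why "prosimians" fails the monophyly test.
# The tree below is a hypothetical encoding of only the suborders
# named in the list above.
TREE = {
    "Primates": ["Strepsirrhini", "Haplorhini"],
    "Haplorhini": ["Tarsiiformes", "Simiiformes"],
    "Strepsirrhini": [],
    "Tarsiiformes": [],
    "Simiiformes": [],
}

def leaves(node):
    """Return the set of leaf taxa descended from `node`."""
    children = TREE[node]
    if not children:
        return {node}
    result = set()
    for child in children:
        result |= leaves(child)
    return result

# "Prosimians" = lemurs, lorises and allies (Strepsirrhini) plus
# tarsiers (Tarsiiformes), excluding simians (Simiiformes).
prosimians = leaves("Strepsirrhini") | leaves("Tarsiiformes")

# The last common ancestor of those two groups in this tree is
# Primates itself, so a monophyletic group would have to contain
# every leaf under Primates.
print(prosimians == leaves("Primates"))  # False: Simiiformes is
                                         # excluded, so the grouping
                                         # is paraphyletic.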

Thus, the members of the two sets of groups, and hence their names, do not match, which causes problems in relating scientific names to common (usually traditional) names. Consider the superfamily Hominoidea: in terms of the traditional common names, this group consists of apes and humans, and there is no single common name for all the members of the group. One remedy is to create a new common name, in this case hominoids. Another possibility is to expand the use of one of the traditional names. For example, in his 2005 book, the vertebrate palaeontologist Benton wrote, "The apes, Hominoidea, today include the gibbons and orangutan ... the gorilla and chimpanzee ... and humans"; thereby Benton was using apes to mean hominoids. In that case, the group heretofore called apes must now be identified as the non-human apes.

As of 2021, there is no consensus as to whether to accept traditional (that is, common), but paraphyletic, names or to use monophyletic names only; or to use 'new' common names or adaptations of old ones. Both competing approaches can be found in biological sources, often in the same work, and sometimes by the same author. Thus, Benton defines apes to include humans, then he repeatedly uses ape-like to mean 'like an ape rather than a human'; and when discussing the reaction of others to a new fossil he writes of "claims that Orrorin ... was an ape rather than a human".

[Image: primates]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2494 2025-03-19 00:09:54

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 50,097

Re: Miscellany

2394) Urology/Urologist

Gist

A urologist is a medical doctor specializing in diagnosing and treating conditions of the urinary tract and male reproductive system, including conditions like kidney stones, prostate problems, and urinary tract infections.

A urologist is a specialist surgeon who treats anyone with a problem with their kidneys, bladder, prostate and male reproductive organs.

Urologists treat conditions involving the genitourinary system. For all patients, urologists treat conditions affecting the kidneys, ureters, bladder, and urethra. For female patients, urologists additionally treat conditions affecting the pelvic floor, such as pelvic organ prolapse and urinary incontinence.

Urologists may also operate to remove stones that have formed in the urinary tract, and they may perform operations to remove cancers of the kidneys, bladder, and testicles.

A urologist is a doctor who has special training in diagnosing and treating diseases of the urinary organs in females and the urinary and reproductive organs in males.

Summary

Urology is a medical specialty involving the diagnosis and treatment of diseases and disorders of the urinary tract and of the male reproductive organs. (The urinary tract consists of the kidneys, the bladder, the ureters, and the urethra.)

The modern specialty derives directly from the medieval lithotomists, who were itinerant healers specializing in the surgical removal of bladder stones. In 1588 the Spanish surgeon Francisco Diaz wrote the first treatises on diseases of the bladder, kidneys, and urethra; he is generally regarded as the founder of modern urology. Most modern urologic procedures developed during the 19th century. At that time flexible catheters were developed for examining and draining the bladder, and in 1877 the German urologist Max Nitze developed the cystoscope. The cystoscope is a tubelike viewing instrument equipped with an electric light on its end. By introducing the instrument through the urethra, the urologist is able to view the interior of the bladder. The first decades of the early 20th century witnessed the introduction of various X-ray techniques that have proved extremely useful in diagnosing disorders of the urinary tract. Urologic surgery was largely confined to the removal of bladder stones until the German surgeon Gustav Simon in 1869 demonstrated that human patients could survive the removal of one kidney, provided the remaining kidney was healthy.

Most of the modern urologist’s patients are male, for two reasons: (1) the urinary tract in females may be treated by gynecologists, and (2) much of the urologist’s work has to do with the prostate gland, which encircles the male urethra close to the juncture between the urethra and the bladder. The prostate gland is often the site of cancer; even more frequently, it enlarges in middle or old age and encroaches on the urethra, causing partial or complete obstruction of the flow of urine. The urologist treats prostate enlargement either by totally excising the prostate or by reaming a wider passageway through it. Urologists may also operate to remove stones that have formed in the urinary tract, and they may perform operations to remove cancers of the kidneys, bladder, and testicles.

Details

Urology (from Greek οὖρον ouron "urine" and -λογία -logia "study of"), also known as genitourinary surgery, is the branch of medicine that focuses on surgical and medical diseases of the urinary system and the reproductive organs. Organs under the domain of urology include the kidneys, adrenal glands, ureters, urinary bladder, urethra, and the male reproductive organs.

The urinary and reproductive tracts are closely linked, and disorders of one often affect the other. Thus a major spectrum of the conditions managed in urology exists under the domain of genitourinary disorders. Urology combines the management of medical (i.e., non-surgical) conditions, such as urinary-tract infections and benign prostatic hyperplasia, with the management of surgical conditions such as bladder or prostate cancer, kidney stones, congenital abnormalities, traumatic injury, and stress incontinence.

Urological techniques include minimally invasive robotic and laparoscopic surgery, laser-assisted surgeries, and other scope-guided procedures. Urologists receive training in open and minimally invasive surgical techniques, employing real-time ultrasound guidance, fiber-optic endoscopic equipment, and various lasers in the treatment of multiple benign and malignant conditions. Urology is closely related to (and urologists often collaborate with the practitioners of) oncology, nephrology, gynaecology, andrology, pediatric surgery, colorectal surgery, gastroenterology, and endocrinology.

Urology is one of the most competitive and highly sought surgical specialties for physicians, with new urologists comprising less than 1.5% of United States medical-school graduates each year.

Urologists are physicians who have specialized in the field after completing their general degree in medicine. Upon successful completion of a residency program, many urologists choose to undergo further advanced training in a subspecialty area of expertise through a fellowship lasting an additional 12 to 36 months. Subspecialties may include: urologic surgery, urologic oncology and urologic oncological surgery, endourology and endourologic surgery, urogynecology and urogynecologic surgery, reconstructive urologic surgery (a form of reconstructive surgery), minimally invasive urologic surgery, pediatric urology and pediatric urologic surgery (including adolescent urology, the treatment of premature or delayed puberty, and the treatment of congenital urological syndromes, malformations, and deformations), transplant urology (the field of transplant medicine and surgery concerned with transplantation of organs such as the kidneys, bladder tissue, ureters, and, recently, the penis), voiding dysfunction, paruresis, neurourology, and androurology and sexual medicine. Additionally, some urologists supplement their fellowships with a master's degree (2–3 years) or with a Ph.D. (4–6 years) in related topics to prepare them for academic as well as focused clinical employment.

Subdisciplines

As a medical discipline that involves the care of many organs and physiological systems, urology can be broken down into several subdisciplines. At many larger academic centers and university hospitals that excel in patient care and clinical research, urologists often specialize in a particular subdiscipline.

Endourology

Endourology is the branch of urology that deals with the closed manipulation of the urinary tract. It has lately grown to include all minimally invasive urologic surgical procedures. As opposed to open surgery, endourology is performed using small cameras and instruments inserted into the urinary tract. Transurethral surgery has been the cornerstone of endourology. Most of the urinary tract can be reached via the urethra, enabling prostate surgery, surgery of tumors of the urothelium, stone surgery, and simple urethral and ureteral procedures. Recently, the addition of laparoscopy and robotics has further subdivided this branch of urology.

Laparoscopy

Laparoscopy is a rapidly evolving branch of urology and has replaced some open surgical procedures. Robot-assisted surgery of the prostate, kidney, and ureter has been expanding this field. Today, many prostatectomies in the United States are carried out with so-called robotic assistance. This has created controversy, however, as robotics greatly increase the cost of surgery and the benefit to the patient may or may not be proportional to the extra cost. Moreover, the current (2011) market for robotic equipment is a de facto monopoly of one publicly held corporation, which further fuels the cost-effectiveness controversy.

Urologic oncology

Urologic oncology concerns the surgical treatment of malignant genitourinary diseases such as cancer of the prostate, adrenal glands, bladder, kidneys, ureters, testicles, and penis, as well as the skin and subcutaneous tissue and muscle and fascia of those areas (that particular subspecialty overlaps with dermatological oncology and related areas of oncology). The treatment of genitourinary cancer is managed by either a urologist or an oncologist, depending on the treatment type (surgical or medical). Most urologic oncologists in Western countries use minimally invasive techniques (laparoscopy or endourology, robotic-assisted surgery) to manage urologic cancers amenable to surgical management.

Neurourology

Neurourology concerns nervous system control of the genitourinary system, and of conditions causing abnormal urination. Neurological diseases and disorders such as a stroke, multiple sclerosis, Parkinson's disease, and spinal cord injury can disrupt the lower urinary tract and result in conditions such as urinary incontinence, detrusor overactivity, urinary retention, and detrusor sphincter dyssynergia. Urodynamic studies play an important diagnostic role in neurourology. Therapy for nervous system disorders includes clean intermittent self-catheterization of the bladder, anticholinergic drugs, injection of Botulinum toxin into the bladder wall and advanced and less commonly used therapies such as sacral neuromodulation. Less marked neurological abnormalities can cause urological disorders as well—for example, abnormalities of the sensory nervous system are thought by many researchers to play a role in disorders of painful or frequent urination (e.g. painful bladder syndrome also known as interstitial cystitis).

Pediatric urology

Pediatric urology concerns urologic disorders in children. Such disorders include cryptorchidism (undescended testes), congenital abnormalities of the genitourinary tract, enuresis, underdeveloped genitalia (due to delayed growth or delayed puberty, often an endocrinological problem), and vesicoureteral reflux.

Andrology

Andrology is the medical specialty that deals with male health, particularly relating to the problems of the male reproductive system and urological problems that are unique to men such as prostate cancer, male fertility problems, and surgery of the male reproductive system. It is the counterpart to gynaecology, which deals with medical issues that are specific to female health, especially reproductive and urologic health.

Reconstructive urology

Reconstructive urology is a highly specialized field of male urology that restores both structure and function to the genitourinary tract. Prostate procedures, full or partial hysterectomies, trauma (auto accidents, gunshot wounds, industrial accidents, straddle injuries, etc.), disease, obstructions, blockages (e.g., urethral strictures), and occasionally childbirth can necessitate reconstructive surgery. The urinary bladder, the ureters (the tubes that lead from the kidneys to the urinary bladder), and the genitalia are other areas treated by reconstructive urology.

Female urology

Female urology is a branch of urology dealing with overactive bladder, pelvic organ prolapse, and urinary incontinence. Many of these physicians also practice neurourology and reconstructive urology as mentioned above. Specialists in female urology (many of whom are men) complete a 1–3-year fellowship after completion of a 5–6-year urology residency. Thorough knowledge of the female pelvic floor together with intimate understanding of the physiology and pathology of voiding are necessary to diagnose and treat these disorders. Depending on the cause of the individual problem, a medical or surgical treatment can be the solution. Their field of practice heavily overlaps with that of urogynecologists, physicians in a sub-discipline of gynecology, who have done a three-year fellowship after a four-year OBGYN residency.

Additional Information

Urology is a part of health care that deals with diseases of the male and female urinary tract (kidneys, ureters, bladder and urethra). Since health problems in these body parts can happen to everyone, urologic health is important.

Urology is known as a surgical specialty. Besides surgery, a urologist is a doctor with knowledge of internal medicine, pediatrics, gynecology and other parts of health care. This is because a urologist encounters a wide range of clinical problems. The scope of urology is broad, and the American Urological Association has named seven subspecialty areas:

* Pediatric Urology (children's urology)
* Urologic Oncology (urologic cancers)
* Renal (kidney) Transplant
* Male Infertility
* Calculi (urinary tract stones)
* Female Urology
* Neurourology (nervous system control of genitourinary organs)

Who takes care of urology patients?

If you have a problem with urologic health you might see a urologist. You might also see a person on the urologist's care team.

[Image: urology]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2495 2025-03-20 00:03:25

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 50,097

Re: Miscellany

2395) Liquid paraffin

Gist

Liquid Paraffin belongs to the group of medicines called laxatives. It is used to treat constipation associated with piles, hernia and cardiovascular disorders, for bowel clearance before endoscopy or radioscopy, in pre- and post-operative conditions, and in elderly and bed-ridden patients.

Liquid paraffin, a refined mineral oil, is primarily used as a laxative to relieve constipation by softening stools and lubricating the intestines. It's also used topically as an emollient to treat dry, rough, and irritated skin.

Liquid paraffin, also known as paraffin oil or mineral oil, is a highly refined, colorless, odorless, and oily liquid composed of saturated hydrocarbons derived from petroleum, used in cosmetics and medicine, and distinct from kerosene.

Summary

Paraffin hydrocarbon is any of the saturated hydrocarbons having the general formula CnH2n+2, C being a carbon atom, H a hydrogen atom, and n an integer. The paraffins are major constituents of natural gas and petroleum. Paraffins containing fewer than 5 carbon atoms per molecule are usually gaseous at room temperature, those having 5 to 15 carbon atoms are usually liquids, and the straight-chain paraffins having more than 15 carbon atoms per molecule are solids. Branched-chain paraffins have a much higher octane number rating than straight-chain paraffins and, therefore, are the more desirable constituents of gasoline. The hydrocarbons are immiscible with water. All paraffins are colourless.
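
As a quick worked check of the general formula and the size rule above (a sketch only: the three compounds named are standard chemistry facts added for illustration, not taken from this passage):

\[
\begin{aligned}
n &= 1: \ \mathrm{CH_{4}} \ \text{(methane; fewer than 5 carbons, a gas)}\\
n &= 8: \ \mathrm{C_{8}H_{18}} \ \text{(octane; 5 to 15 carbons, a liquid)}\\
n &= 20: \ \mathrm{C_{20}H_{42}} \ \text{(icosane; a straight chain of more than 15 carbons, a solid)}
\end{aligned}
\]

In each case the hydrogen count is 2n + 2, matching the formula.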

Details

Liquid paraffin, also known as paraffinum liquidum, paraffin oil, liquid paraffin oil or Russian mineral oil, is a very highly refined mineral oil used in cosmetics and medicine. Cosmetic or medicinal liquid paraffin should not be confused with the paraffin (i.e. kerosene) used as a fuel. The generic sense of paraffin meaning alkane led to regional differences for the meanings of both paraffin and paraffin oil. It is a transparent, colorless, nearly odorless, and oily liquid that is composed of saturated hydrocarbons derived from petroleum.

The term paraffinum perliquidum is sometimes used to denote light liquid paraffin, while the term paraffinum subliquidum is sometimes used to denote a thicker mineral oil.

History

Petroleum is said to have been used as a medicine since 400 BC, and has been mentioned in the texts of classical writers Herodotus, Plutarch, Dioscorides, Pliny, and others. It was used extensively by early Arabians and was important in early Indian medicine. Its first use internally is attributed to Robert A. Chesebrough, who patented it in 1872 for the manufacture of a "new and useful product from petroleum." After Sir W. Arbuthnot Lane, who was then Chief Surgeon of Guy's Hospital, recommended it as a treatment for intestinal stasis and chronic constipation in 1913, liquid paraffin gained more popularity.

Usage in medicine

Liquid paraffin is primarily used as a pediatric laxative in medicine and is a popular treatment for constipation and encopresis. Because it is easy to titrate, the drug is convenient to dose. It acts primarily as a stool lubricant, and is thus not associated with abdominal cramps, diarrhea, flatulence, disturbances in electrolytes, or tolerance over long periods of usage, side effects that osmotic and stimulant laxatives often engender (however, some literature suggests that these may still occur). The drug acts by softening the feces and coating the intestine with an oily film. Because of this it reduces the pain caused by certain conditions such as piles (haemorrhoids). These traits make the drug ideal for chronic childhood constipation and encopresis, when large doses or long-term usage is necessary.

Consensus has not been entirely reached on the safety of the drug for children. While the drug is widely accepted for the management of childhood constipation in North America and Australia, the drug is used much less in the United Kingdom. The drug is endorsed by the American Academy of Pediatrics and the North American Society for Gastroenterology and Nutrition, with the latter organization outlining it as a first choice for the management of pediatric constipation. The drug is suggested to never be used in cases in which the patient is neurologically impaired or has a potential swallowing dysfunction due to potential respiration complications. Lipoid pneumonia due to mineral oil aspiration is thus a recognized severe complication of this medication, and there is a need for a heightened awareness among caregivers about the potential dangers of inappropriate mineral oil use. Some go as far as saying that it should never be used with children due to this risk.

Liquid paraffin is also used in combination with magnesium as an osmotic laxative, sold under the trade name Mil-Par (among others).

Additionally, it may be used as a release agent, binder, or lubricant on capsules and tablets.

Usage in cosmetics

Liquid paraffin is a hydrating and cleansing agent. Hence, it is used in several cosmetic products for both skin and hair. It is also used as an ingredient in after-wax wipes.

Health

Upon being taken orally, liquid paraffin might interfere with the absorption of fat-soluble vitamins, though the evidence does not seem to fully support this. It can be absorbed into the intestinal wall and may cause foreign-body granulomatous reactions in some rat species. These reactions might not occur in humans, however. Some evidence suggests that it is not carcinogenic. If liquid paraffin enters the lungs, it can cause lipoid pneumonia.

If injected, it can cause granulomatous reactions.

Additional Information

Liquid Paraffin is a prescription medicine used in the treatment of dry skin. It relieves dry skin conditions such as eczema, ichthyosis, and pruritus of the elderly. It works by preventing water loss from the outer layer of skin. This relieves dryness and leaves the skin soft and hydrated.

Liquid Paraffin is for external use only. You should use it in the dose and duration as advised by your doctor. The affected area should be clean and dry before application. You must wash your hands thoroughly before and after applying this medicine. This medicine should be used regularly to get the most benefit from it. Do not use more than you need, as it will not clear your condition faster and some side effects may be increased. If your condition goes on for longer than four weeks or gets worse at any time, let your doctor know. You can help this medication work better by keeping the affected areas clean.

Liquid Paraffin is an emollient (a substance that softens or soothes the skin). It works by moisturizing dry skin, thereby relieving dryness and itching.

Liquid Paraffin is generally considered safe to use during pregnancy. Animal studies have shown low or no adverse effects to the developing baby; however, there are limited human studies.

[Image: light liquid paraffin oil]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2496 2025-03-21 00:01:29

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 50,097

Re: Miscellany

2396) Tarpaulin

Gist

Tarpaulins, often called tarps, are waterproof sheeting materials used for temporary protection against weather, debris, and other environmental factors, commonly used in construction, agriculture, and outdoor activities.

The word tarpaulin comes from tar and palling, the latter being another 17th-century name for sheets used to cover objects on ships. Sailors also made waterproof clothing from tarpaulins, including tricorn hats, choosing the style in an act of defiance to mimic what the officers wore.

PVC tarpaulin is a heavy-duty material made from a combination of PVC and polyester, commonly used for tents, covers, and canopies. It is a highly durable and waterproof material that can last for years without any maintenance. It is also resistant to mildew, UV rays, and other environmental hazards.

Details

A tarpaulin or tarp is a large sheet of strong, flexible, water-resistant or waterproof material, often cloth such as canvas or polyester coated with polyurethane, or made of plastics such as polyethylene. Tarpaulins often have reinforced grommets at the corners and along the sides to form attachment points for rope, allowing them to be tied down or suspended.

Inexpensive modern tarpaulins are made from woven polyethylene; this material has become so commonly used for tarpaulins that people in some places refer to it colloquially as "poly tarp" or "polytarp".

Uses

Tarpaulins are used in many ways to protect persons and things from wind, rain, and sunlight. They are used during construction or after disasters to protect partially built or damaged structures, to prevent mess during painting and similar activities, and to contain and collect debris. They are used to protect the loads of open trucks and wagons, to keep wood piles dry, and for shelters such as tents or other temporary structures.

Tarpaulins are also used for advertisement printing, most notably for billboards. Perforated tarpaulins are typically used for medium to large advertising, or for protection on scaffoldings; the aim of the perforations (from 20% to 70%) is to reduce wind vulnerability.

Polyethylene tarpaulins have also proven to be a popular source when an inexpensive, water-resistant fabric is needed. Many amateur builders of plywood sailboats turn to polyethylene tarpaulins for making their sails, as it is inexpensive and easily worked. With the proper type of adhesive tape, it is possible to make a serviceable sail for a small boat with no sewing.

Plastic tarps are sometimes used as a building material in communities of indigenous North Americans. Tipis made with tarps are known as tarpees.

Types

Tarpaulins can be classified based on a diversity of factors, such as material type (polyethylene, canvas, vinyl, etc.), thickness, which is generally measured in mils or generalized into categories (such as "regular duty", "heavy duty", "super heavy duty", etc.), and grommet strength (simple vs. reinforced), among others.

Actual tarp sizes are generally about three to five percent smaller in each dimension than nominal size; for example, a tarp nominally 20 ft × 20 ft (6.1 m × 6.1 m) will actually measure about 19 ft × 19 ft (5.8 m × 5.8 m). Grommets may be aluminum, stainless steel, or other materials. Grommet-to-grommet distances are typically between 18 in (460 mm) and 5 ft (1.5 m). The weave count is often between 8 and 12 per square inch: the greater the count, the greater its strength. Tarps may also be washable or non-washable and waterproof or non-waterproof, and mildewproof vs. non-mildewproof. Tarp flexibility is especially significant under cold conditions.

Additional Information:

The origins of tarpaulin and its early uses.

To begin with, the word tarp is an abbreviation of tarpaulin, which is formed from the words "tar" and "palling". A pall is a heavy, thick cloth, and tar is the name for the dark, oily, sticky substance.

Tarpaulin has been around for centuries, with its origins dating back to ancient times. The ancient Greeks and Romans used canvas tarps to protect their goods during transportation by sea. In the 18th century, sailors began using tarred canvas to cover their ships' cargo, which is where the term "tarpaulin" comes from.

Tarpaulins, often shortened to tarps, were first used in high-seas navigation. Tar-coated canvas sheets (palls) were fastened over the ship's cargo to provide a water-resistant barrier against sea spray, rain, and snow throughout the voyage. As the Industrial Revolution took hold, tarpaulin production became more widespread, and new materials such as rubber and PVC were introduced to make tarps more durable and weather-resistant.

Today, tarpaulin is used for a wide range of applications, from covering vehicles and equipment to providing shelter during outdoor events.

The development of new materials and manufacturing techniques.

As the demand for tarpaulin increased, manufacturers began experimenting with new materials and manufacturing techniques to improve the durability and weather-resistance of tarps. In the mid-19th century, rubber-coated fabrics were introduced, which provided better protection against water and weather. In the 20th century, PVC-coated fabrics became popular due to their high strength and resistance to chemicals and UV rays. Today, tarpaulin manufacturers continue to innovate, using materials such as polyethylene and polypropylene to create lightweight, yet durable tarps for a variety of applications.

Tarpaulin's role in military operations and disaster relief efforts.

Tarpaulin has played a crucial role in military operations and disaster relief efforts throughout history. During World War II, tarps were used to cover equipment and supplies, protect soldiers from the elements, and even camouflage tanks and other vehicles. In more recent times, tarps have been used in disaster relief efforts to provide temporary shelter for displaced individuals and protect supplies and equipment from the elements. Tarpaulin's versatility and durability make it an essential tool in emergency situations.

The rise of tarpaulin as a popular material for outdoor events and construction projects.

In recent years, tarpaulin has become a popular material for outdoor events and construction projects. Its waterproof and UV-resistant properties make it ideal for providing temporary shelter and protection from the elements. Tarpaulin is often used to cover scaffolding and construction sites, as well as to create temporary structures for outdoor events like festivals and concerts. Its affordability and versatility have made it a go-to material for many industries.

The future of tarpaulin and its potential for new applications.

As technology continues to advance, so does the potential for new applications of tarpaulin. One area of interest is in the development of smart tarpaulin, which could have sensors embedded in the material to monitor things like temperature, humidity, and air quality. This could have applications in agriculture, where farmers could use the data to optimize crop growth, or in disaster relief efforts, where the sensors could help first responders assess the needs of affected areas. Additionally, tarpaulin could be used in the development of wearable technology, such as clothing that can adjust to changing weather conditions. The possibilities are endless, and it will be exciting to see how tarpaulin continues to evolve in the future.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2497 2025-03-22 00:07:10

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 50,097

Re: Miscellany

2397) Palpitation

Gist

Palpitations are a feeling of an irregular heartbeat, such as a racing, pounding, or fluttering heart. They can be felt in the chest, neck, or throat.

Stress, exercise, medication or, rarely, a medical condition can trigger them. Although heart palpitations can be worrisome, they're usually harmless. Rarely, heart palpitations can be a symptom of a more serious heart condition, such as an irregular heartbeat (arrhythmia), that might require treatment.

In medical terms, palpitation refers to the sensation of a racing, fluttering, or pounding heartbeat, often accompanied by a heightened awareness of one's own heart activity.

Summary

A palpitation — a skipped, extra or irregular heartbeat — is a type of abnormal heart rhythm, or arrhythmia. It occurs when an electrical signal fires from the wrong place at the wrong time, causing the heart to beat out of rhythm.

Many people are unaware of minor irregular heartbeats, and even completely healthy people have extra or skipped heartbeats once in a while. Palpitations are more common as you age. Usually, these occasional arrhythmias are nothing to worry about. But in some cases, extra or irregular beats can cause bothersome symptoms or lead to other types of sustained, rapid heart rhythms.

What causes palpitations?

Occasional, harmless palpitations can have many causes:

* Stress or anxiety
* Strenuous activity
* Extreme fatigue
* Hormonal changes caused by pregnancy, menopause or menstruation
* Low blood pressure
* Caffeine
* Nicotine
* Alcohol
* Stimulant medications, including pseudoephedrine (a decongestant)
* Increasing age

However, some palpitations may be symptoms of a more serious condition, such as:

* Cardiomyopathy
* Hyperthyroidism
* Heart valve disease
* Other, more dangerous arrhythmias

What are the symptoms of palpitations or an irregular heartbeat?

Many people experience palpitations (the feeling that their heart is momentarily racing or pounding), a skipped or extra beat or a fluttering or forceful beat.

When you feel a "skipped" beat, what you are probably experiencing is an early heartbeat. Because the heart contracts before the ventricles have had time to fill with blood, there is little or no blood pushed out to the body. Therefore you don't feel that contraction as a beat. The next beat will feel more forceful, as an extra volume of blood is then pushed out.

However, some symptoms are more serious. Consult with your doctor if you experience any of these:

* Frequent palpitations
* Fainting
* Dizziness
* Unusual sweating
* Lightheadedness
* Chest pains

Details

Palpitations occur when a person becomes aware of their heartbeat. The heartbeat may feel hard, fast, or uneven in their chest.

Symptoms include a very fast or irregular heartbeat. Palpitations are a sensory symptom. They are often described as a skipped beat, a rapid flutter, or a pounding in the chest or neck.

Palpitations are not always the result of a physical problem with the heart and can be linked to anxiety. However, they may signal a fast or irregular heartbeat. Palpitations can be brief or long-lasting. They can be intermittent or continuous. Other symptoms can include dizziness, shortness of breath, sweating, headaches, and chest pain.

There are many causes of palpitations, including but not limited to the following:

* coronary heart disease
* perimenopause
* hyperthyroidism
* adult heart muscle diseases such as hypertrophic cardiomyopathy
* congenital heart diseases such as atrial septal defects
* diseases causing low blood oxygen, such as asthma, emphysema or a blood clot in the lungs
* previous chest surgery
* kidney disease
* blood loss and pain
* anemia
* drugs such as antidepressants, statins, alcohol, nicotine, caffeine, cocaine and amphetamines
* electrolyte imbalances of magnesium, potassium and calcium
* deficiencies of nutrients such as taurine, arginine, iron or vitamin B12.

The pathophysiology, evaluation, diagnoses, and treatments available for palpitations can vary and should be discussed with a medical professional.

Signs and symptoms

Three common descriptions of palpitation are:

* "flip-flopping" (or "stop and start") is often caused by premature contraction of the atrium or ventricle. The pause after the contraction causes the "stop." The "start" comes from the next forceful contraction.
* rapid "fluttering in the chest" suggests arrhythmias. Regular "fluttering" points to supraventricular or ventricular arrhythmias, including sinus tachycardia. Irregular "fluttering" suggests atrial fibrillation, atrial flutter, or tachycardia with variable block.
* "pounding in the neck" or neck pulsations, often due to cannon A waves in the jugular vein. These occur when the right atrium contracts against a closed tricuspid valve.

Palpitations often come with other symptoms. Knowing these links can help determine if they are dangerous or harmless. However, these links are not definitive and should be evaluated by a licensed healthcare provider to ensure an accurate diagnosis and proper care.

Palpitations associated with chest discomfort or chest pain suggest coronary artery disease. Palpitations associated with light-headedness, fainting or near fainting suggest low blood pressure and may signify a life-threatening cardiac dysrhythmia. Palpitations that occur regularly with exertion suggest a rate-dependent bypass tract or hypertrophic cardiomyopathy.

If a benign cause for these symptoms isn't found at the first visit, then prolonged heart monitoring at home or in the hospital setting may be needed. Noncardiac symptoms should also be elicited since the palpitations may be caused by a normal heart responding to a metabolic or inflammatory condition. Weight loss could suggest hyperthyroidism. Palpitation can be precipitated by vomiting or diarrhea that leads to electrolyte disorders and hypovolemia. Hyperventilation, hand tingling, and nervousness are common when anxiety or panic disorder is the cause of the palpitations.

Causes

The neural pathways responsible for the perception of the heartbeat are not clearly understood. It has been hypothesized that these pathways include structures at both the intra-cardiac and extra-cardiac level. Palpitations are a widespread complaint, particularly among people with structural heart disease. The list of causes of palpitations is long, and in some cases the etiology cannot be determined. In one study reporting the etiology of palpitations, 43% were found to be cardiac, 31% psychiatric, and approximately 10% were classified as miscellaneous (medication induced, thyrotoxicosis, caffeine, cocaine, anemia, amphetamine, mastocytosis).

The cardiac etiologies of palpitations are the most life-threatening and include ventricular sources (premature ventricular contractions (PVCs), ventricular tachycardia and ventricular fibrillation), atrial sources (atrial fibrillation, atrial flutter), high-output states (anemia, AV fistula, Paget's disease of bone or pregnancy), structural abnormalities (congenital heart disease, cardiomegaly, aortic aneurysm, or acute left ventricular failure), and miscellaneous sources (postural orthostatic tachycardia syndrome (POTS), Brugada syndrome, and sinus tachycardia).

Palpitation can be attributed to one of five main causes:

* Extra-cardiac stimulation of the sympathetic nervous system (inappropriate stimulation of the sympathetic and parasympathetic systems, particularly the vagus nerve, which innervates the heart, can be caused by anxiety and stress due to acute or chronic elevations in glucocorticoids and catecholamines; gastrointestinal distress such as bloating or indigestion, along with muscular imbalances and poor posture, can also irritate the vagus nerve, causing palpitations).
* Sympathetic overdrive (panic disorder, low blood sugar, hypoxia, antihistamines (levocetirizine), low red blood cell count, heart failure, mitral valve prolapse).
* Hyperdynamic circulation (valvular incompetence, thyrotoxicosis, hypercapnia, high body temperature, low red blood cell count, pregnancy).
* Abnormal heart rhythms (ectopic beat, premature atrial contraction, junctional escape beat, premature ventricular contraction, atrial fibrillation, supraventricular tachycardia, ventricular tachycardia, ventricular fibrillation, heart block).
* Pectus excavatum, also known as funnel chest, a chest wall deformity in which the breastbone (sternum) and attached ribs are sunken in enough to put excess pressure on the heart and lungs, which can cause tachycardia and skipped beats.

Palpitations can occur during times of catecholamine excess, such as during exercise or at times of stress. The cause of the palpitations during these conditions is often a sustained supraventricular tachycardia or ventricular tachyarrhythmia.

Supraventricular tachycardias can also be induced at the termination of exercise when the withdrawal of catecholamines is coupled with a surge in the vagal tone. Palpitations secondary to catecholamine excess may also occur during emotionally startling experiences, especially in patients with a long QT syndrome.

Psychiatric problems

Anxiety and stress elevate the body's level of cortisol and adrenaline, which in turn can interfere with the normal functioning of the parasympathetic nervous system resulting in overstimulation of the vagus nerve. Vagus nerve induced palpitation is felt as a thud, a hollow fluttery sensation, or a skipped beat, depending on at what point during the heart's normal rhythm the vagus nerve fires. In many cases, the anxiety and panic of experiencing palpitations cause a patient to experience further anxiety and increased vagus nerve stimulation. The link between anxiety and palpitation may also explain why many panic attacks involve an impending sense of cardiac arrest. Similarly, physical and mental stress may contribute to the occurrence of palpitation, possibly due to the depletion of certain micronutrients involved in maintaining healthy psychological and physiological function. Gastrointestinal bloating, indigestion and hiccups have also been associated with overstimulation of the vagus nerve causing palpitations, due to branches of the vagus nerve innervating the GI tract, diaphragm, and lungs.

Many psychiatric conditions can result in palpitations, including depression, generalized anxiety disorder, panic attacks, and somatization. However, one study noted that up to 67% of patients diagnosed with a mental health condition had an underlying arrhythmia. There are many metabolic conditions that can result in palpitations, including hyperthyroidism, hypoglycemia, hypocalcemia, hyperkalemia, hypokalemia, hypermagnesemia, hypomagnesemia, and pheochromocytoma.

Medication

The medications most likely to result in palpitations include sympathomimetic agents, anticholinergic drugs, vasodilators and withdrawal from beta blockers.

Excessive consumption of caffeine, commonly found in coffee, tea, and energy drinks, is a well-known trigger. Recreational drugs such as marijuana, cocaine, amphetamines, and MDMA (Ecstasy) are also associated with palpitations and pose significant cardiovascular risks. These substances can lead to serious health issues, including vasospasm-related angina, heart attacks, and strokes. Understanding the impact of these substances is crucial for both prevention and management of palpitations.

Additional Information

Palpitations feel like your heart is racing, pounding, fluttering or like you have missed heartbeats. Palpitations can last seconds, minutes or longer. You may feel this in your chest, neck, or throat.

Palpitations can happen at any time, even if you are resting or doing normal activities. Although they can be unpleasant, palpitations are common and, in most cases, harmless.

Causes of palpitations

Palpitations can be caused by heart conditions including:

* arrhythmia (abnormal heart rhythm)
* cardiomyopathy
* congenital heart conditions
* heart attack
* heart failure
* heart valve disease.

Other causes of palpitations include:

* alcohol
* caffeine
* certain medicines (both prescription and over-the-counter)
* ectopic beats (early or extra heartbeats)
* hormonal changes (due to pregnancy or menopause)
* intense exercise
* recreational drugs
* smoking
* stress and anxiety
* triggering foods (such as spicy or rich food).

They can also be caused by other medical conditions like an overactive thyroid and anaemia (lack of iron).

When to get medical help

You should make an appointment to see your GP if:

* your palpitations last a long time, don't improve or get worse
* you have a history of heart problems
* you're concerned about the palpitations.

You should call 999 if you have palpitations and experience any of the following symptoms:

* severe shortness of breath
* chest pain or tightness
* dizziness and light headedness
* fainting or blackouts.

Diagnosing palpitations

Your GP may arrange for you to have a heart trace (electrocardiogram/ECG) to check whether your heart rate is regular and at a normal rate. This painless test lasts a few minutes.

If your ECG shows something abnormal, or your symptoms continue to bother you, you may need to have further tests or heart monitoring over a longer period. Visit our ECG page or speak to your doctor if you're concerned about this and similar tests.

Treating palpitations

As palpitations are often harmless, they usually don't need treatment. However, you'll need treatment if tests show your palpitations are caused by an underlying heart condition.

The type of treatment you'll have depends on your condition. For example, if you're diagnosed with an arrhythmia, your doctor might prescribe beta blockers to regulate your heart rate and rhythm.

Preventing palpitations

If you don't need treatment, the easiest way to manage your symptoms at home is to avoid the triggers that bring on your palpitations. This might include:

* avoiding or drinking fewer caffeinated drinks
* avoiding or drinking less alcohol (no more than the recommended limit of 14 units a week)
* avoiding foods and activities that trigger palpitations in you (try keeping a symptom diary so you can recognise and avoid triggers)
* managing your stress levels
* not smoking or using tobacco products.

Living a healthier lifestyle can be hard at first, but it’s important for your overall quality of life. Visit our healthy living hub to read about how you can start to eat healthier and manage things like smoking and stress today.

More Information:

What is a heart palpitation?

Heart palpitation is the feeling a person has that the heart is beating fast or irregularly. Sometimes children report the feeling of a missed heartbeat (lasting for only a second or a few seconds) or the feeling that the heart is beating faster (and sometimes stronger) than it should at complete rest. Less commonly, children complain of a fast heart when they are exercising or of the heart rate remaining high even after exercise has stopped.

What causes palpitation?

Heart palpitations are very common in children and are generally not associated with heart problems or congenital heart defects. Common causes of palpitations are emotional stress and anxiety, significant physical deconditioning (being not physically fit), asthma medications, fizzy drinks and other foods or drinks with caffeine. Young children often report palpitations as chest pain as they are not able to describe their complaint with appropriate words. In some rare cases, palpitations can be the sign of an underlying medical condition such as thyroid gland problems, low haemoglobin level in the blood (anaemia) or cardiac arrhythmias (abnormal heart rhythm, most commonly associated with a fast heart rate and due to a problem with the electrical system of the heart).

What to do if I have palpitations?

Palpitations are generally self-limiting events. However, if palpitations are associated with chest pain, dizziness, a lightheaded feeling or true fainting, sweating, paleness and vomiting or difficulty in breathing, then they might indicate an underlying cardiac problem. If these symptoms are present, an appointment with your primary care physician or your paediatrician should be booked straight away. A medical opinion should also be sought if your child has palpitations and you have a family history of arrhythmias or heart muscle disease (cardiomyopathy, congestive cardiac failure, or sudden unexpected death). If your child is complaining of palpitations that are followed by fainting, or he/she looks unresponsive, or if he/she looks progressively more pale, sweaty and tired/breathless, you should call the emergency medical services immediately (999 in London and the UK).

How to investigate the cause of palpitations?

If your paediatrician is unsure about the cause of the palpitations or is concerned by them, your child will be referred to a paediatric cardiologist who will be able to identify the cause of the symptoms. By the time you are referred, your paediatrician will have performed investigations (blood tests) to exclude non-cardiac causes.

The paediatric cardiologist will be very interested in the circumstances in which the palpitations occur, what triggers them, and how often and for how long they occur. For this reason it is important to keep a written record in a diary of the number, duration and symptoms associated with the palpitations.

The paediatric cardiologist will perform an ECG (a recording of the electrical activity of your child's heart) and an echocardiogram (a detailed ultrasound scan of the heart) in order to confirm that the heart is built and functions normally. The paediatric cardiologist might request some more detailed tests, such as longer-term ECG monitoring (for instance a 24-hour ECG Holter, which records the heart rhythm for 24 hours). On some occasions, particularly if the palpitations occur during or are triggered by exercise, your child might undergo an exercise stress test to try to reproduce the symptoms during exercise in a controlled environment.

Based on the results of these investigations, the paediatric cardiologist will be able to understand whether your child has an arrhythmia as the cause of the palpitations. Finding that the palpitations are due to a cardiac arrhythmia is not always bad news. In fact, most arrhythmias are not life-threatening. Isolated ectopic premature beats (from either the back chambers of the heart, the atria, or the front chambers of the heart, the ventricles) and supra-ventricular tachycardias (SVT, fast heartbeats in sequence originating from the atria) are the most common arrhythmias that cause palpitations in children. The indication to treat an arrhythmia will depend on the nature of the arrhythmia, the age of the child, the number and duration of the episodes of palpitation and the symptoms associated with them. Some children with cardiac arrhythmias will need medical treatment, usually initially with drugs. In some older children there might be the option of a keyhole day-case procedure to burn or freeze the small area of the heart from which the arrhythmia originates. Very rarely, children with heart palpitations will need implantable pacemakers or defibrillators.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2498 2025-03-24 00:02:43

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 50,097

Re: Miscellany

2398) Population

Gist

In general, "population" refers to the total number of people or inhabitants in a specific area or country.

Population, in human biology, is the whole number of inhabitants occupying an area (such as a country or the world) and continually being modified by increases (births and immigrations) and losses (deaths and emigrations).

In general, "population" refers to the total number of inhabitants or organisms of a particular species living in a specific area.

In everyday usage, "population" often refers to the number of people living in a particular place, like a city, country, or the entire world.

Summary

According to the UN, the world's population surpassed 8 billion on 15 November 2022, an increase of 1 billion since 12 March 2012. According to a separate estimate by the United Nations, Earth's population exceeded seven billion in October 2011. According to UNFPA, growth to such an extent offers unprecedented challenges and opportunities to all of humanity.

According to papers published by the United States Census Bureau, the world population hit 6.5 billion on 24 February 2006. The United Nations Population Fund designated 12 October 1999 as the approximate day on which world population reached 6 billion. This was about 12 years after the world population reached 5 billion in 1987, and six years after the world population reached 5.5 billion in 1993. The population of countries such as Nigeria is not even known to the nearest million, so there is a considerable margin of error in such estimates.

Researcher Carl Haub calculated that a total of over 100 billion people have probably been born over the course of human history.

Details

Population, in human biology, is the whole number of inhabitants occupying an area (such as a country or the world) and continually being modified by increases (births and immigrations) and losses (deaths and emigrations). As with any biological population, the size of a human population is limited by the supply of food, the effect of diseases, and other environmental factors. Human populations are further affected by social customs governing reproduction and by the technological developments, especially in medicine and public health, that have reduced mortality and extended the life span.

Few aspects of human societies are as fundamental as the size, composition, and rate of change of their populations. Such factors affect economic prosperity, health, education, family structure, crime patterns, language, culture—indeed, virtually every aspect of human society is touched upon by population trends.

The study of human populations is called demography—a discipline with intellectual origins stretching back to the 18th century, when it was first recognized that human mortality could be examined as a phenomenon with statistical regularities. Especially influential was English economist and demographer Thomas Malthus, who is best known for his theory that population growth will always tend to outrun the food supply and that betterment of humankind is impossible without stern limits on reproduction. This thinking is commonly referred to as Malthusianism.

Demography casts a multidisciplinary net, drawing insights from economics, sociology, statistics, medicine, biology, anthropology, and history. Its chronological sweep is lengthy: limited demographic evidence extends back many centuries, and reliable data are available for several hundred years in many regions. The present understanding of demography makes it possible to project (with caution) population changes several decades into the future.

At its most basic level, the components of population change are few indeed. A closed population (that is, one in which immigration and emigration do not occur) can change according to the following simple equation: the population (closed) at the end of an interval equals the population at the beginning of the interval, plus births during the interval, minus deaths during the interval. In other words, only addition by births and reduction by deaths can change a closed population.

Populations of nations, regions, continents, islands, or cities, however, are rarely closed in the same way. If the assumption of a closed population is relaxed, in- and out-migration can increase and decrease population size in the same way as do births and deaths; thus, the population (open) at the end of an interval equals the population at the beginning of the interval, plus births during the interval, minus deaths, plus in-migrants, minus out-migrants. Hence the study of demographic change requires knowledge of fertility (births), mortality (deaths), and migration. These, in turn, affect not only population size and growth rates but also the composition of the population in terms of such attributes as gender, age, ethnic or racial composition, and geographic distribution.
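The balancing equation described above can be sketched as a small function; this is a minimal illustration of the text, not from the source, and all names and numbers are made up:

# Demographic balancing equation: population at the end of an interval.
# Set the migration terms to 0 to recover the closed-population case.
def population_end(pop_start, births, deaths, in_migrants=0, out_migrants=0):
    return pop_start + births - deaths + in_migrants - out_migrants

# Example with illustrative numbers: 1,000,000 people, 15,000 births,
# 9,000 deaths, 4,000 in-migrants and 2,000 out-migrants -> 1,008,000.
print(population_end(1_000_000, 15_000, 9_000, 4_000, 2_000))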

Fertility

Demographers distinguish between fecundity, the underlying biological potential for reproduction, and fertility, the actual level of achieved reproduction. (Confusingly, these English terms have opposite meanings from their parallel terms in French, where fertilité is the potential and fécondité is the realized; similarly ambiguous usages also prevail in the biological sciences, thereby increasing the chance of misunderstanding.) The difference between biological potential and realized fertility is determined by several intervening factors, including the following: (1) most women do not begin reproducing immediately upon the onset of puberty, which itself does not occur at a fixed age; (2) some women with the potential to reproduce never do so; (3) some women become widowed and do not remarry; (4) various elements of social behaviour restrain fertility; and (5) many human couples choose consciously to restrict their fertility by means of sexual abstinence, contraception, abortion, or sterilization.

The magnitude of the gap between potential and realized fertility can be illustrated by comparing the highest known fertilities with those of typical European and North American women in the late 20th century. A well-studied high-fertility group is the Hutterites of North America, a religious sect that views fertility regulation as sinful and high fertility as a blessing. Hutterite women who married between 1921 and 1930 are known to have averaged 10 children per woman. Meanwhile, women in much of Europe and North America averaged about two children per woman during the 1970s and 1980s—a number 80 percent less than that achieved by the Hutterites. Even the highly fertile populations of developing countries in Africa, Asia, and Latin America produce children at rates far below that of the Hutterites.

The general message from such evidence is clear enough: in much of the world, human fertility is considerably lower than the biological potential. It is strongly constrained by cultural regulations, especially those concerning marriage and sexuality, and by conscious efforts on the part of married couples to limit their childbearing.

Dependable evidence on historical fertility patterns in Europe is available back to the 18th century, and estimates have been made for several earlier centuries. Such data for non-European societies and for earlier human populations are much more fragmentary. The European data indicate that even in the absence of widespread deliberate regulation there were significant variations in fertility among different societies. These differences were heavily affected by socially determined behaviours such as those concerning marriage patterns. Beginning in France and Hungary in the 18th century, a dramatic decline in fertility took shape in the more developed societies of Europe and North America, and in the ensuing two centuries fertility declines of fully 50 percent took place in nearly all of these countries. Since the 1960s fertility has been intentionally diminished in many developing countries, and remarkably rapid reductions have occurred in the most populous, the People’s Republic of China.

Biological factors affecting human fertility

Reproduction is a quintessentially biological process, and hence all fertility analyses must consider the effects of biology. Such factors, in rough chronological order, include:

* the age of onset of potential fertility (or fecundability in demographic terminology);

* the degree of fecundability—i.e., the monthly probability of conceiving in the absence of contraception;

* the incidence of spontaneous abortion and stillbirth;

* the duration of temporary infecundability following the birth of a child; and

* the age of onset of permanent sterility.

The age at which women become fecund apparently declined significantly during the 20th century; as measured by the age of menarche (onset of menstruation), British data suggest a decline from 16–18 years in the mid-19th century to less than 13 years in the late 20th century. This decline is thought to be related to improving standards of nutrition and health. Since the average age of marriage in western Europe has long been far higher than the age of menarche, and since most children are born to married couples, this biological lengthening of the reproductive period is unlikely to have had major effects upon realized fertility in Europe. In settings where early marriage prevails, however, declining age at menarche could increase lifetime fertility.

Fecundability also varies among women past menarche. The monthly probabilities of conception among newlyweds are commonly in the range of 0.15 to 0.25; that is, there is a 15–25-percent chance of conception each month. This fact is understandable when account is taken of the short interval (about two days) within each menstrual cycle during which fertilization can take place. Moreover, there appear to be cycles during which ovulation does not occur. Finally, perhaps one-third or more of fertilized ova fail to implant in the uterus or, even if they do implant, spontaneously abort during the ensuing two weeks, before pregnancy would be recognized. As a result of such factors, women of reproductive age who are not using contraceptive methods can expect to conceive within five to 10 months of becoming sexually active. As is true of all biological phenomena, there is surely a distribution of fecundability around average levels, with some women experiencing conception more readily than others.
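The waiting times quoted above follow from simple probability: if each month carries an independent chance p of conception, the chance of having conceived within n months is 1 - (1 - p)^n, and the expected wait is about 1/p months (4 to 7 months for p between 0.25 and 0.15). A minimal sketch under that simplifying assumption (real fecundability varies from month to month and from woman to woman):

# Probability of conceiving within n months, assuming a constant,
# independent monthly probability p (a deliberate simplification).
def prob_conceive_within(p, n):
    return 1 - (1 - p) ** n

for p in (0.15, 0.25):
    print(p, round(prob_conceive_within(p, 6), 2))
# p = 0.15 -> about 0.62 within 6 months; p = 0.25 -> about 0.82.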

Spontaneous abortion of recognized pregnancies and stillbirth also are fairly common, but their incidence is difficult to quantify. Perhaps 20 percent of recognized pregnancies fail spontaneously, most in the earlier months of gestation.

Following the birth of a child, most women experience a period of temporary infecundability, or biological inability to conceive. The length of this period seems to be affected substantially by breast-feeding. In the absence of breast-feeding, the interruption lasts less than two months. With lengthy, frequent breast-feeding it can last one or two years. This effect is thought to be caused by a complex of neural and hormonal factors stimulated by suckling.

A woman’s fecundability typically peaks in her 20s and declines during her 30s; by their early 40s as many as 50 percent of women are affected by their own or their husbands’ sterility. After menopause, essentially all women are sterile. The average age at menopause is in the late 40s, although some women experience it before reaching 40 and others not until nearly 60.

Contraception

Contraceptive practices affect fertility by reducing the probability of conception. Contraceptive methods vary considerably in their theoretical effectiveness and in their actual effectiveness in use (“use-effectiveness”). Modern methods such as oral pills and intrauterine devices (IUDs) have use-effectiveness rates of more than 95 percent. Older methods such as the condom and diaphragm can be more than 90-percent effective when used regularly and correctly, but their average use-effectiveness is lower because of irregular or incorrect use.

The effect upon fertility of contraceptive measures can be dramatic: if fecundability is 0.20 (a 20-percent chance of pregnancy per month of exposure), then a 95-percent effective method will reduce this to 0.01 (a 1-percent chance).
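Stated generally, a method that is e percent effective multiplies monthly fecundability by (1 - e/100). A minimal sketch of the arithmetic in the preceding sentence, with illustrative values only:

# Effective monthly conception probability under contraception.
# effectiveness is the fraction of conceptions prevented, e.g. 0.95.
def effective_fecundability(fecundability, effectiveness):
    return fecundability * (1 - effectiveness)

print(effective_fecundability(0.20, 0.95))  # 0.01, as in the text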

Abortion

Induced abortion reduces fertility not by affecting fecundability but by terminating pregnancy. Abortion has long been practiced in human societies and is quite common in some settings. The officially registered fraction of pregnancies terminated by abortion exceeds one-third in some countries, and significant numbers of unregistered abortions probably occur even in countries reporting very low rates.

Sterilization

Complete elimination of fecundability can be brought about by sterilization. The surgical procedures of tubal ligation and vasectomy have become common in diverse nations and cultures. In the United States, for example, voluntary sterilization has become the most prevalent single means of terminating fertility, typically adopted by couples who have achieved their desired family size. In India, sterilization has been encouraged on occasion by various government incentive programs and, for a short period during the 1970s, by quasi-coercive measures.

Mortality

As noted above, the science of demography has its intellectual roots in the realization that human mortality, while consisting of unpredictable individual events, has a statistical regularity when aggregated across a large group. This recognition formed the basis of a wholly new industry—that of life assurance, or insurance. The basis of this industry is the life table, or mortality table, which summarizes the distribution of longevity—observed over a period of years—among members of a population. This statistical device allows the calculation of premiums—the prices to be charged the members of a group of living subscribers with specified characteristics, who by pooling their resources in this statistical sense provide their heirs with financial benefits.
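A life table of the kind described above can be sketched in a few lines: from age-specific death probabilities, compute how many members of a starting cohort survive to each age. The numbers below are made up for illustration and are not from the source:

# Minimal life-table sketch: q[x] is the probability of dying during
# age interval x; the list returned gives survivors out of the radix.
def survivors(q, radix=100_000):
    l = [radix]
    for qx in q:
        l.append(l[-1] * (1 - qx))
    return l

print([round(x) for x in survivors([0.01, 0.001, 0.002])])
# [100000, 99000, 98901, 98703]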

Overall human mortality levels can best be compared by using the life-table measure life expectancy at birth (often abbreviated simply as life expectancy), the number of years of life expected of a newborn baby on the basis of current mortality levels for persons of all ages. Life expectancies of premodern populations, with their poor knowledge of sanitation and health care, may have been as low as 25–30 years. The largest toll of death was that exacted in infancy and childhood: perhaps 20 percent of newborn children died in their first 12 months of life and another 30 percent before they reached five years of age.

In the developing countries by the 1980s, average life expectancy lay in the range of 55 to 60 years, with the highest levels in Latin America and the lowest in Africa. In the same period, life expectancy in the developed countries of western Europe and North America approached 75 years, and fewer than 1 percent of newborn children died in their first 12 months.

For reasons that are not well understood, life expectancy of females usually exceeds that of males, and this female advantage has grown as overall life expectancy has increased. In the late 20th century this female advantage was seven years (78 years versus 71 years) in the industrial market economies (comprising western Europe, North America, Japan, Australia, and New Zealand). It was eight years (74 years versus 66 years) in the nonmarket economies of eastern Europe.

The epidemiologic transition

The epidemiologic transition is that process by which the pattern of mortality and disease is transformed from one of high mortality among infants and children and episodic famine and epidemic affecting all age groups to one of degenerative and man-made diseases (such as those attributed to smoking) affecting principally the elderly. It is generally believed that the epidemiologic transitions prior to the 20th century (i.e., those in today’s industrialized countries) were closely associated with rising standards of living, nutrition, and sanitation. In contrast, those occurring in developing countries have been more or less independent of such internal socioeconomic development and more closely tied to organized health care and disease control programs developed and financed internationally. There is no doubt that 20th-century declines in mortality in developing countries have been far more rapid than those that occurred in the 19th century in what are now the industrialized countries.

Infant mortality

Infant mortality is conventionally measured as the number of deaths in the first year of life per 1,000 live births during the same year. Roughly speaking, by this measure worldwide infant mortality approximates 80 per 1,000; that is, about 8 percent of newborn babies die within the first year of life.

This global average disguises great differences. In certain countries of Asia and Africa, infant mortality rates exceed 150 and sometimes approach 200 per 1,000 (that is, 15 or 20 percent of children die before reaching the age of one year). Meanwhile, in other countries, such as Japan and Sweden, the rates are well below 10 per 1,000, or 1 percent. Generally, infant mortality is somewhat higher among males than among females.
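The rate itself is a simple ratio, as defined above; the figures below are illustrative:

# Infant mortality rate: deaths in the first year of life per 1,000
# live births during the same year.
def infant_mortality_rate(infant_deaths, live_births):
    return 1000 * infant_deaths / live_births

print(infant_mortality_rate(8_000, 100_000))  # 80.0 per 1,000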

In developing countries substantial declines in infant mortality have been credited to improved sanitation and nutrition, increased access to modern health care, and improved birth spacing through the use of contraception. In industrialized countries in which infant mortality rates were already low the increased availability of advanced medical technology for newborn—in particular, prematurely born—infants provides a partial explanation.

Infanticide

The deliberate killing of newborn infants has long been practiced in human societies. It seems to have been common in the ancient cultures of Greece, Rome, and China, and it was practiced in Europe until the 19th century. In Europe, infanticide included the practice of “overlaying” (smothering) an infant sharing a bed with its parents and the abandonment of unwanted infants to the custody of foundling hospitals, in which one-third to four-fifths of incumbents failed to survive.

In many societies practicing infanticide, infants were not deemed to be fully human until they underwent a rite of initiation that took place from a few days to several years after birth, and therefore killing before such initiation was socially acceptable. The purposes of infanticide were various: child spacing or fertility control in the absence of effective contraception; elimination of illegitimate, deformed, orphaned, or twin children; or gender preferences.

With the development and spread of the means of effective fertility regulation, infanticide has come to be strongly disapproved in most societies, though it continues to be practiced in some isolated traditional cultures.

Mortality among the elderly

During the 1970s and 1980s in industrialized countries there were unexpectedly large declines in mortality among the elderly, resulting in larger-than-projected numbers of the very old. In the United States, for example, the so-called frail elderly group aged 85 years and older increased more than fourfold between 1950 and 1980, from 590,000 to 2,461,000. Given the high incidence of health problems among the very old, such increases have important implications for the organization and financing of health care.

Marriage

One of the main factors affecting fertility, and an important contributor to the fertility differences among societies in which conscious fertility control is uncommon, is the pattern of marriage and marital disruption. In many societies in Asia and Africa, for example, marriage occurs soon after the sexual maturation of the woman, around age 17. In contrast, delayed marriage has long been common in Europe, and in some European countries the average age of first marriage approaches 25 years.

In the 20th century dramatic changes have taken place in the patterns of marital dissolution caused by widowhood and divorce. Widowhood has long been common in all societies, but the declines of mortality (as discussed above) have sharply reduced the effects of this source of marital dissolution on fertility. Meanwhile, divorce has been transformed from an uncommon exception to an experience terminating a large proportion (sometimes more than a third) of marriages in some countries. Taken together, these components of marriage patterns can account for the elimination of as little as 20 percent to as much as 50 percent of the potential reproductive years.

Many Western countries have experienced significant increases in the numbers of cohabiting unmarried couples. In the 1970s some 12 percent of all Swedish couples living together aged 16 to 70 were unmarried. When in the United States in 1976 the number of such arrangements approached 1,000,000, the Bureau of the Census formulated a new statistical category, POSSLQ, denoting persons of the opposite sex sharing living quarters. Extramarital fertility as a percentage of overall fertility accordingly has risen in many Western countries, accounting for one in five births in the United States, one in five in Denmark, and one in three in Sweden.

Migration

Since any population that is not closed can be augmented or depleted by in-migration or out-migration, migration patterns must be considered carefully in analyzing population change. The common definition of human migration limits the term to permanent change of residence (conventionally, for at least one year), so as to distinguish it from commuting and other more frequent but temporary movements.

Human migrations have been fundamental to the broad sweep of human history and have themselves changed in basic ways over the epochs. Many of these historical migrations have by no means been the morally uplifting experiences depicted in mythologies of heroic conquerors, explorers, and pioneers; rather they frequently have been characterized by violence, destruction, bondage, mass mortality, and genocide—in other words, by human suffering of profound magnitudes.

Early human migrations

Early humans were almost surely hunters and gatherers who moved continually in search of food supplies. The superior technologies (tools, clothes, language, disciplined cooperation) of these hunting bands allowed them to spread farther and faster than had any other dominant species; humans are thought to have occupied all the continents except Antarctica within a span of about 50,000 years. As the species spread away from the tropical parasites and diseases of its African origins, mortality rates declined and population increased. This increase occurred at microscopically small rates by the standards of the past several centuries, but over thousands of years it resulted in a large absolute growth to a total that could no longer be supported by finding new hunting grounds. There ensued a transition from migratory hunting and gathering to migratory slash-and-burn agriculture. The consequence was the rapid geographical spread of crops, with wheat and barley moving east and west from the Middle East across the whole of Eurasia within only 5,000 years.

About 10,000 years ago a new and more productive way of life, involving sedentary agriculture, became predominant. This allowed greater investment of labour and technology in crop production, resulting in a more substantial and more secure food source, but sporadic migrations persisted.

The next pulse of migration, beginning around 4000 to 3000 bce, was stimulated by the development of seagoing sailing vessels and of pastoral nomadry. The Mediterranean Basin was the centre of the maritime culture, which involved the settlement of offshore islands and led to the development of deep-sea fishing and long-distance trade. Other favoured regions were those of the Indian Ocean and South China Sea. Meanwhile, pastoral nomadry involved biological adaptations both in humans (allowing them to digest milk) and in species of birds and mammals that were domesticated.

Both seafarers and pastoralists were intrinsically migratory. The former were able to colonize previously uninhabited lands or to impose their rule by force over less mobile populations. The pastoralists were able to populate the extensive grassland of the Eurasian Steppe and the African and Middle Eastern savannas, and their superior nutrition and mobility gave them clear military advantages over the sedentary agriculturalists with whom they came into contact. Even as agriculture continued to improve with innovations such as the plow, these mobile elements persisted and provided important networks by which technological innovations could be spread widely and rapidly.

That complex of human organization and behaviour commonly termed Western civilization arose out of such developments. Around 4000 bce seafaring migrants from the south overwhelmed the local inhabitants of the Tigris–Euphrates floodplain and began to develop a social organization based upon the division of labour into highly skilled occupations, technologies such as irrigation, bronze metallurgy, and wheeled vehicles, and the growth of cities of 20,000–50,000 persons. Political differentiation into ruling classes and ruled masses provided a basis for imposition of taxes and rents that financed the development of professional soldiers and artisans, whose specialized skills far surpassed those of pastoralists and agriculturalists. The military and economic superiority that accompanied such skills allowed advanced communities to expand both by direct conquest and by the adoption of this social form by neighbouring peoples. Thus migration patterns played an important role in creating the early empires and cultures of the ancient world.

By about 2000 bce such specialized human civilizations occupied much of the then-known world—the Middle East, the eastern Mediterranean, South Asia, and the Far East. Under these circumstances human migration was transformed from unstructured movements across unoccupied territories by nomads and seafarers into quite new forms of interaction among the settled civilizations.

These new forms of human migration produced disorder, suffering, and much mortality. As one population conquered or infiltrated another, the vanquished were usually destroyed, enslaved, or forcibly absorbed. Large numbers of people were captured and transported by slave traders. Constant turmoil accompanied the ebb and flow of populations across the regions of settled agriculture and the Eurasian and African grasslands. Important examples include the Dorian incursions in ancient Greece in the 11th century bce, the Germanic migrations southward from the Baltic to the Roman Empire in the 4th to 6th century ce, the Norman raids and conquests of Britain between the 8th and 12th centuries ce, and the Bantu migrations in Africa throughout the Christian Era.

Modern mass migrations

Mass migrations over long distances were among the new phenomena produced by the population increase and improved transportation that accompanied the Industrial Revolution. The largest of these was the so-called Great Atlantic Migration from Europe to North America, the first major wave of which began in the late 1840s with mass movements from Ireland and Germany. These were caused by the failure of the potato crop in Ireland and in the lower Rhineland, where millions had become dependent upon this single source of nutrition. These flows eventually subsided, but in the 1880s a second and even larger wave of mass migration developed from eastern and southern Europe, again stimulated in part by agricultural crises and facilitated by improvements in transportation and communication. Between 1880 and 1910 some 17,000,000 Europeans entered the United States; overall, the total amounted to 37,000,000 between 1820 and 1980.

Since World War II equally large long-distance migrations have occurred. In most cases groups from developing countries have moved into the industrialized countries of the West. Some 13,000,000 migrants have become permanent residents of western Europe since the 1960s. More than 10,000,000 permanent immigrants have been admitted legally to the United States since the 1960s, and illegal immigration has almost surely added several millions more.

Forced migrations

Slave migrations and mass expulsions have been part of human history for millennia. The largest slave migrations were probably those compelled by European slave traders operating in Africa from the 16th to the 19th century. During that period perhaps 20,000,000 slaves were consigned to American markets, though substantial numbers died in the appalling conditions of the Atlantic passage.

The largest mass expulsion is probably that imposed by the Nazi government of Germany, which deported 7,000,000–8,000,000 persons, including some 5,000,000 Jews later exterminated in concentration camps. After World War II, 9,000,000–10,000,000 ethnic Germans were more or less forcibly transported into Germany, and perhaps 1,000,000 members of minority groups deemed politically unreliable by the Soviet government were forcibly exiled to Central Asia. Earlier deportations of this type included the movement of 150,000 British convicts to Australia between 1788 and 1867 and the 19th-century exile of 1,000,000 Russians to Siberia.

Forced migrations since World War II have been large indeed. Some 14,000,000 persons fled in one direction or the other at the partition of British India into India and Pakistan. Nearly 10,000,000 left East Pakistan (now Bangladesh) during the fighting in 1971; many of them stayed on in India. An estimated 3,000,000–4,000,000 persons fled from the war in Afghanistan during the early 1980s. More than 1,000,000 refugees have departed Vietnam, Cuba, Israel, and Ethiopia since World War II. Estimates during the 1980s suggested that approximately 10,000,000 refugees had not been resettled and were in need of assistance.

Internal migrations

The largest human migrations today are internal to nation-states; these can be sizable in rapidly increasing populations with large rural-to-urban migratory flows.

Early human movements toward urban areas were devastating in terms of mortality. Cities were loci of intense infection; indeed, many human viral diseases are not propagated unless the population density is far greater than that common under sedentary agriculture or pastoral nomadism. Moreover, cities had to import food and raw materials from the hinterlands, but transport and political disruptions led to erratic patterns of scarcity, famine, and epidemic. The result was that cities until quite recently (the mid-19th century) were demographic sinkholes, incapable of sustaining their own populations.

Urban growth since World War II has been very rapid in much of the world. In developing countries with high overall population growth rates the populations of some cities have been doubling every 10 years or less (see below Population composition).

Natural increase and population growth

Natural increase

Put simply, natural increase is the difference between the numbers of births and deaths in a population; the rate of natural increase is the difference between the birthrate and the death rate. Given the fertility and mortality characteristics of the human species (excluding incidents of catastrophic mortality), the range of possible rates of natural increase is rather narrow. For a nation, it has rarely exceeded 4 percent per year; the highest known rate for a national population, arising from the conjunction of a very high birthrate and a quite low death rate, is that experienced in Kenya during the 1980s, in which the natural increase of the population approximated 4.1 percent per annum. Rates of natural increase in other developing countries generally are lower; these countries averaged about 2.5 percent per annum during the same period. Meanwhile the rates of natural increase in industrialized countries are very low: the highest is approximately 1 percent, most are in the neighbourhood of several tenths of 1 percent, and some are slightly negative (that is, their populations are slowly decreasing).

Population growth

The rate of population growth is the rate of natural increase combined with the effects of migration. Thus a high rate of natural increase can be offset by a large net out-migration, and a low rate of natural increase can be countered by a high level of net in-migration. Generally speaking, however, these migration effects on population growth rates are far smaller than the effects of changes in fertility and mortality.
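Both rates can be written out directly; here is a minimal sketch with illustrative per-1,000 rates, not data from the source:

# Rate of natural increase = birthrate - death rate; the growth rate
# adds the net migration rate. All rates per 1,000 population per year.
def natural_increase(birth_rate, death_rate):
    return birth_rate - death_rate

def growth_rate(birth_rate, death_rate, net_migration_rate=0.0):
    return natural_increase(birth_rate, death_rate) + net_migration_rate

print(growth_rate(30.0, 8.0, -2.0))  # 20.0 per 1,000, i.e. 2.0 percent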

Population “momentum”

An important and often misunderstood characteristic of human populations is the tendency of a highly fertile population that has been increasing rapidly in size to continue to do so for decades after the onset of even a substantial decline in fertility. This results from the youthful age structure of such a population, as discussed below. These populations contain large numbers of children who have still to grow into adulthood and the years of reproduction. Thus even a dramatic decline in fertility, which affects only the numbers at age zero, cannot prevent the continuing growth of the number of adults of childbearing age for at least two or three decades.

Eventually, of course, as these large groups pass through the childbearing years to middle and older age, the smaller numbers of children resulting from the fertility decline lead to a moderation in the rate of population growth. But the delays are lengthy, allowing very substantial additional population growth after fertility has declined. This phenomenon gives rise to the term population momentum, which is of great significance to developing countries with rapid population growth and limited natural resources. The nature of population growth means that the metaphor of a “population bomb” used by some lay analysts of population trends in the 1960s was really quite inaccurate. Bombs explode with tremendous force, but such force is rapidly spent. A more appropriate metaphor for rapid population growth is that of a glacier, since a glacier moves at a slow pace but with enormous effects wherever it goes and with a long-term momentum that is unstoppable.

Population composition

The most important characteristics of a population—in addition to its size and the rate at which it is expanding or contracting—are the ways in which its members are distributed according to age, gender, ethnic or racial category, and residential status (urban or rural).

Age distribution

Perhaps the most fundamental of these characteristics is the age distribution of a population. Demographers commonly use population pyramids to describe both age and gender distributions of populations. A population pyramid is a bar chart or graph in which the length of each horizontal bar represents the number (or percentage) of persons in an age group; for example, the base of such a chart consists of a bar representing the youngest segment of the population, those persons less than, say, five years old. Each bar is divided into segments corresponding to the numbers (or proportions) of males and females. In most populations the proportion of older persons is much smaller than that of the younger, so the chart narrows toward the top and is more or less triangular, like the cross section of a pyramid; hence the name. Youthful populations are represented by pyramids with a broad base of young children and a narrow apex of older people, while older populations are characterized by more uniform numbers of people in the age categories. Population pyramids can reveal markedly different characteristics for different nations: for example, high fertility and rapid population growth (Mexico), low fertility and slow growth (the United States), and very low fertility and negative growth (West Germany).
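Such a chart is straightforward to draw; the following is a minimal matplotlib sketch with made-up counts (males plotted as negative values to the left, females to the right):

import matplotlib.pyplot as plt

# Illustrative age-group counts, e.g. in millions (not real data).
age_groups = ["0-14", "15-29", "30-44", "45-59", "60-74", "75+"]
males = [30, 26, 22, 16, 10, 4]
females = [29, 25, 22, 17, 12, 6]

fig, ax = plt.subplots()
ax.barh(age_groups, [-m for m in males], label="Males")
ax.barh(age_groups, females, label="Females")
ax.set_xlabel("Population (millions, illustrative)")
ax.set_ylabel("Age group")
ax.legend()
plt.show()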

Contrary to a common belief, the principal factor tending to change the age distribution of a population—and, hence, the general shape of the corresponding pyramid—is not the death or mortality rates, but rather the rate of fertility. A rise or decline in mortality generally affects all age groups in some measure, and hence has only limited effects on the proportion in each age group. A change in fertility, however, affects the number of people in only a single age group—the group of age zero, the newly born. Hence a decline or increase in fertility has a highly concentrated effect at one end of the age distribution and thereby can have a major influence on the overall age structure. This means that youthful age structures correspond to highly fertile populations, typical of developing countries. The older age structures are those of low-fertility populations, such as are common in the industrialized world.

Gender ratio

A second important structural aspect of a population is the relative numbers of males and females who compose it. Generally, slightly more males are born than females (a typical ratio would be 105 or 106 males for every 100 females). On the other hand, it is quite common for males to experience higher mortality at virtually all ages after birth. This difference is apparently of biological origin. Exceptions occur in countries such as India, where the mortality of females may be higher than that of males in childhood and at the ages of childbearing because of unequal allocation of resources within the family and the poor quality of maternal health care.

The general rules that more males are born but that females experience lower mortality mean that during childhood males outnumber females of the same age, the difference decreases as the age increases, at some point in the adult life span the numbers of males and females become equal, and as higher ages are reached the number of females becomes disproportionately large. For example, in Europe and North America, among persons more than 70 years of age in 1985, the number of males for every 100 females was only about 61 to 63. (According to the Population Division of the United Nations, the figure for the Soviet Union was only 40, which may be attributable to high male mortality during World War II as well as to possible increases in male mortality during the 1980s.)

The gender ratio within a population has significant implications for marriage patterns. A scarcity of males of a given age depresses the marriage rates of females in the same age group or usually those somewhat younger, and this in turn is likely to reduce their fertility. In many countries, social convention dictates a pattern in which males at marriage are slightly older than their spouses. Thus if there is a dramatic rise in fertility, such as that called the “baby boom” in the period following World War II, a “marriage squeeze” can eventually result; that is, the number of males of the socially correct age for marriage is insufficient for the number of somewhat younger females. This may lead to deferment of marriage of these women, a contraction of the age differential of marrying couples, or both. Similarly, a dramatic fertility decline in such a society is likely to lead eventually to an insufficiency of eligible females for marriage, which may lead to earlier marriage of these women, an expansion of the age gap at marriage, or both. All of these effects are slow to develop; it takes at least 20 to 25 years for even a dramatic fall or rise in fertility to affect marriage patterns in this way.

Ethnic or racial composition

The populations of all nations of the world are more or less diverse with respect to ethnicity or race. (Ethnicity here includes national, cultural, religious, linguistic, or other attributes that are perceived as characteristic of distinct groups.) Such divisions in populations often are regarded as socially important, and statistics by race and ethnic group are therefore commonly available. The categories used for such groups differ from nation to nation, however; for example, a person of Pakistani origin is considered “black” or “coloured” in the United Kingdom but would probably be classified as “white” or “Asian” in the United States. For this reason, international comparisons of ethnic and racial groups are imprecise, and this component of population structure is far less objective as a measure than are the categories of age and gender discussed above.

Geographical distribution and urbanization

It goes without saying that populations are scattered across space. The typical measure of population in relation to land area, that of population density, is often a meaningless one, since different areas vary considerably in their value for agricultural or other human purposes. Moreover, a high population density in an agrarian society, dependent upon agriculture for its sustenance, is likely to be a more severe constraint upon human welfare than the same density would be in a highly industrialized society, in which the bulk of national product is not of agricultural origin.

Also of significance in terms of geographical distribution is the division between rural and urban areas. For many decades there has been a nearly universal flow of populations from rural into urban areas. While definitions of urban areas differ from country to country and region to region, the most highly urbanized societies in the world are those of western and northern Europe, Australia, New Zealand, temperate South America, and North America; in all of these the fraction of the population living in urban areas exceeds 75 percent, and it has reached 85 percent in West Germany. An intermediate stage of urbanization exists in the countries making up much of tropical Latin America, where 50 to 65 percent of the population lives in cities. Finally, in many of the developing countries of Asia and Africa the urbanization process has only recently begun, and it is not uncommon to find less than one-third of the population living in urban areas.

The rapidity of urbanization in some countries is quite astonishing. The population of Mexico City in 1960 was around 5,000,000; it was estimated to be about 17,000,000 in 1985 and was projected to reach 26,000,000 to 31,000,000 by 2000. A rule of thumb for much of the developing world is that the rate of growth of urban areas is twice that of the population as a whole. Thus in a population growing 3 percent annually (doubling in about 23.1 years), it is likely that the urban growth rate is at least 6 percent annually (doubling in about 11.6 years).
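
(Both doubling times follow from the constant-growth rule t = ln 2 ÷ r: ln 2 ÷ 0.03 ≈ 23.1 years, and ln 2 ÷ 0.06 ≈ 11.6 years.)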

Population theories

Population size and change play such a fundamental role in human societies that they have been the subject of theorizing for millennia. Most religious traditions have had something to say on these matters, as did many of the leading figures of the ancient world.

In modern times the subject of demographic change has played a central role in the development of the politico-economic theory of mercantilism; the classical economics of Adam Smith, David Ricardo, and others; the cornucopian images of utopians such as the Marquis de Condorcet; the contrasting views of Malthus as to the natural limits imposed on human population; the sociopolitical theories of Marx, Engels, and their followers; the scientific revolutions engendered by Darwin and his followers; and so on through the pantheon of human thought. Most of these theoretical viewpoints have incorporated demographic components as elements of far grander schemes. Only in a few cases have demographic concepts played a central role, as in the case of the theory of the demographic transition that evolved during the 1930s as a counter to biological explanations of fertility declines that were then current.

Population theories in antiquity

The survival of ancient human societies despite high and unpredictable mortality implies that all societies that persisted were successful in maintaining high fertility. They did so in part by stressing the duties of marriage and procreation and by stigmatizing persons who failed to produce children. Many of these pronatalist motives were incorporated into religious dogma and mythology, as in the biblical injunction to “be fruitful and multiply, and populate the earth,” the Hindu laws of Manu, and the writings of Zoroaster.

The ancient Greeks were interested in population size, and Plato’s Laws incorporated the concept of an optimal population size of 5,040 citizens, among whom fertility was restrained by conscious birth control. The leaders of imperial Rome, however, advocated maximizing population size in the interest of power, and explicitly pronatalist laws were adopted during the reign of Augustus to encourage marriage and fertility.

The traditions of Christianity on this topic are mixed. The pronatalism of the Old Testament and the Roman Empire was embraced with some ambivalence by a church that sanctified celibacy among the priesthood. Later, during the time of Thomas Aquinas, the church moved toward more forceful support of high fertility and opposition to birth control.

Islamic writings on fertility were equally mixed. The 14th-century Arab historian Ibn Khaldūn incorporated demographic factors into his grand theory of the rise and fall of empires. According to his analysis, the decline of an empire’s population necessitates the importation of foreign mercenaries to administer and defend its territories, resulting in rising taxes, political intrigue, and general decadence. The hold of the empire on its hinterland and on its own populace weakens, making it a tempting target for a vigorous challenger. Thus Ibn Khaldūn saw the growth of dense human populations as generally favourable to the maintenance and increase of imperial power.

On the other hand, contraception was acceptable practice in Islam from the days of the Prophet, and extensive attention was given to contraceptive methods by the great physicians of the Islamic world during the Middle Ages. Moreover, under Islamic law the fetus is not considered a human being until its form is distinctly human, and hence early abortion was not forbidden.

Mercantilism and the idea of progress

The wholesale mortality caused by the Black Death during the 14th century contributed in fundamental ways to the development of mercantilism, the school of thought that dominated Europe from the 16th through the 18th century. Mercantilists and the absolute rulers who dominated many states of Europe saw each nation’s population as a form of national wealth: the larger the population, the richer the nation. Large populations provided a larger labour supply, larger markets, and larger (and hence more powerful) armies for defense and for foreign expansion. Moreover, since growth in the number of wage earners tended to depress wages, the wealth of the monarch could be increased by capturing this surplus. In the words of Frederick II the Great of Prussia, “the number of the people makes the wealth of states.” Similar views were held by mercantilists in Germany, France, Italy, and Spain. For the mercantilists, accelerating the growth of the population by encouraging fertility and discouraging emigration was consistent with increasing the power of the nation or the king. Most mercantilists, confident that any number of people would be able to produce their own subsistence, had no worries about harmful effects of population growth. (To this day similar optimism continues to be expressed by diverse schools of thought, from traditional Marxists on the left to “cornucopians” on the right.)

Physiocrats and the origins of demography

By the 18th century the Physiocrats were challenging the intensive state intervention that characterized the mercantilist system, urging instead the policy of laissez-faire. Their targets included the pronatalist strategies of governments; Physiocrats such as François Quesnay argued that human multiplication should not be encouraged to a point beyond that sustainable without widespread poverty. For the Physiocrats, economic surplus was attributable to land, and population growth could therefore not increase wealth. In their analysis of this subject matter the Physiocrats drew upon the techniques developed in England by John Graunt, Edmond Halley, Sir William Petty, and Gregory King, which for the first time made possible the quantitative assessment of population size, the rate of growth, and rates of mortality.

The Physiocrats had broad and important effects upon the thinking of the classical economists such as Adam Smith, especially with respect to the role of free markets unregulated by the state. As a group, however, the classical economists expressed little interest in the issue of population growth, and when they did they tended to see it as an effect rather than as a cause of economic prosperity.

Utopian views

In another 18th-century development, the optimism of mercantilists was incorporated into a very different set of ideas, those of the so-called utopians. Their views, based upon the idea of human progress and perfectibility, led to the conclusion that once perfected, mankind would have no need of coercive institutions such as police, criminal law, property ownership, and the family. In a properly organized society, in their view, progress was consistent with any level of population, since population size was the principal factor determining the amount of resources. Such resources should be held in common by all persons, and if there were any limits on population growth, they would be established automatically by the normal functioning of the perfected human society. Principal proponents of such views included Condorcet, William Godwin, and Daniel Malthus, the father of the Reverend Thomas Robert Malthus. Through his father the younger Malthus was introduced to such ideas relating human welfare to population dynamics, which stimulated him to undertake his own collection and analysis of data; these eventually made him the central figure in the population debates of the 19th and 20th centuries.

Malthus and his successors

In 1798 Malthus published An Essay on the Principle of Population as It Affects the Future Improvement of Society, with Remarks on the Speculations of Mr. Godwin, M. Condorcet, and Other Writers. This hastily written pamphlet had as its principal object the refutation of the views of the utopians. In Malthus’ view, the perfection of a human society free of coercive restraints was a mirage, because the threat of unchecked population growth would always be present. In this, Malthus echoed the much earlier arguments of Robert Wallace in his Various Prospects of Mankind, Nature, and Providence (1761), which posited that the perfection of society carried with it the seeds of its own destruction, in the stimulation of population growth such that “the earth would at last be overstocked, and become unable to support its numerous inhabitants.”

Not many copies of Malthus’ essay, his first, were published, but it nonetheless became the subject of discussion and attack. The essay was cryptic and poorly supported by empirical evidence. Malthus’ arguments were easy to misrepresent, and his critics did so routinely.

The criticism had the salutary effect of stimulating Malthus to pursue the data and other evidence lacking in his first essay. He collected information on one country that had plentiful land (the United States) and estimated that its population was doubling in less than 25 years. He attributed the far lower rates of European population growth to “preventive checks,” giving special emphasis to the characteristic late marriage pattern of western Europe, which he called “moral restraint.” The other preventive checks to which he alluded were birth control, abortion, adultery, and homosexuality, all of which as an Anglican minister he considered immoral.

In one sense, Malthus reversed the arguments of the mercantilists that the number of people determined the nation’s resources, adopting the contrary argument of the Physiocrats that the resource base determined the numbers of people. From this he derived an entire theory of society and human history, leading inevitably to a set of provocative prescriptions for public policy. Those societies that ignored the imperative for moral restraint—delayed marriage and celibacy for adults until they were economically able to support their children—would suffer the deplorable “positive checks” of war, famine, and epidemic, the avoidance of which should be every society’s goal. From this humane concern about the sufferings from positive checks arose Malthus’ admonition that poor laws (i.e., legal measures that provided relief to the poor) and charity must not cause their beneficiaries to relax their moral restraint or increase their fertility, lest such humanitarian gestures become perversely counterproductive.

Having stated his position, Malthus was denounced as a reactionary, although he favoured free medical assistance for the poor, universal education at a time when this was a radical idea, and democratic institutions at a time of elitist alarums about the French Revolution. Malthus was accused of blasphemy by the conventionally religious. The strongest denunciations of all came from Marx and his followers (see below). Meanwhile, the ideas of Malthus had important effects upon public policy (such as reforms in the English Poor Laws) and upon the ideas of the classical and neoclassical economists, demographers, and evolutionary biologists, led by Charles Darwin. Moreover, the evidence and analyses produced by Malthus dominated scientific discussion of population during his lifetime; indeed, he was the invited author of the article “Population” for the supplement (1824) to the fourth, fifth, and sixth editions of the Encyclopædia Britannica. Though many of Malthus’ gloomy predictions have proved to be misdirected, that article introduced analytical methods that clearly anticipated demographic techniques developed more than 100 years later.

The latter-day followers of Malthusian analysis deviated significantly from the prescriptions offered by Malthus. While these “neo-Malthusians” accepted Malthus’ core propositions regarding the links between unrestrained fertility and poverty, they rejected his advocacy of delayed marriage and his opposition to birth control. Moreover, leading neo-Malthusians such as Charles Bradlaugh and Annie Besant could hardly be described as reactionary defenders of the established church and social order. To the contrary, they were political and religious radicals who saw the extension of knowledge of birth control to the lower classes as an important instrument favouring social equality. Their efforts were opposed by the full force of the establishment, and both spent considerable time on trial and in jail for their efforts to publish materials—condemned as obscene—about contraception.

Marx, Lenin, and their followers

While both Karl Marx and Malthus accepted many of the views of the classical economists, Marx was harshly and implacably critical of Malthus and his ideas. The vehemence of the assault was remarkable. Marx reviled Malthus as a “miserable parson” guilty of spreading a “vile and infamous doctrine, this repulsive blasphemy against man and nature.” For Marx, only under capitalism does Malthus’ dilemma of resource limits arise. Though differing in many respects from the utopians who had provoked Malthus’ rejoinder, Marx shared with them the view that any number of people could be supported by a properly organized society. Under the socialism favoured by Marx, the surplus product of labour, previously appropriated by the capitalists, would be returned to its rightful owners, the workers, thereby eliminating the cause of poverty. Thus Malthus and Marx shared a strong concern about the plight of the poor, but they differed sharply as to how it should be improved. For Malthus the solution was individual responsibility as to marriage and childbearing; for Marx the solution was a revolutionary assault upon the organization of society, leading to a collective structure called socialism.

The strident nature of Marx’s attack upon Malthus’ ideas may have arisen from his realization that they constituted a potentially fatal critique of his own analysis. “If [Malthus’] theory of population is correct,” Marx wrote in 1875 in his Critique of the Gotha Programme (published by Engels in 1891), “then I cannot abolish this [iron law of wages] even if I abolish wage-labor a hundred times, because this law is not only paramount over the system of wage-labor but also over every social system.”

The anti-Malthusian views of Marx were continued and extended by Marxians who followed him. For example, although in 1920 Lenin legalized abortion in the revolutionary Soviet Union as the right of every woman “to control her own body,” he opposed the practice of contraception or abortion for purposes of regulating population growth. Lenin’s successor, Joseph Stalin, adopted a pronatalist argument verging on the mercantilist, in which population growth was seen as a stimulant to economic progress. As the threat of war intensified in Europe in the 1930s, Stalin promulgated coercive measures to increase Soviet population growth, including the banning of abortion despite its status as a woman’s basic right. Although contraception is now accepted and practiced widely in most Marxist-Leninist states, some traditional ideologists continue to characterize its encouragement in Third-World countries as shabby Malthusianism.

The Darwinian tradition

Charles Darwin, whose scientific insights revolutionized 19th-century biology, acknowledged an important intellectual debt to Malthus in the development of his theory of natural selection. Darwin himself was not much involved in debates about human populations, but many who followed in his name as “social Darwinists” and “eugenicists” expressed a passionate if narrowly defined interest in the subject.

In Darwinian theory the engine of evolution is differential reproduction of different genetic stocks. The concern of many social Darwinists and eugenicists was that fertility among those they considered the superior human stocks was far lower than among the poorer—and, in their view, biologically inferior—groups, resulting in a gradual but inexorable decline in the quality of the overall population. While some attributed this lower fertility to deliberate efforts of people who needed to be informed of the dysgenic effects of their behaviour, others saw the fertility decline itself as evidence of biological deterioration of the superior stocks. Such simplistic biological explanations attracted attention to the socioeconomic and cultural factors that might explain the phenomenon and contributed to the development of the theory of the demographic transition.

Theory of the demographic transition

The classic explanation of European fertility declines arose in the period following World War I and came to be known as demographic transition theory. (Formally, transition theory is a historical generalization and not truly a scientific theory offering predictive and testable hypotheses.) The theory arose in part as a reaction to crude biological explanations of fertility declines; it rationalized them in solely socioeconomic terms, as consequences of widespread desire for fewer children caused by industrialization, urbanization, increased literacy, and declining infant mortality.

The factory system and urbanization led to a diminution in the role of the family in industrial production and a reduction of the economic value of children. Meanwhile, the costs of raising children rose, especially in urban settings, and universal primary education postponed their entry into the work force. Finally, the lessening of infant mortality reduced the number of births needed to achieve a given family size. In some versions of transition theory, a fertility decline is triggered when one or more of these socioeconomic factors reach certain threshold values.

Until the 1970s transition theory was widely accepted as an explanation of European fertility declines, although conclusions based on it had never been tested empirically. More recently careful research on the European historical experience has forced reappraisal and refinement of demographic transition theory. In particular, distinctions based upon cultural attributes such as language and religion, coupled with the spread of ideas such as those of the nuclear family and the social acceptability of deliberate fertility control, appear to have played more important roles than were recognized by transition theorists.

Trends in world population

Before considering modern population trends separately for developing and industrialized countries, it is useful to present an overview of older trends. It is generally agreed that only 5,000,000–10,000,000 humans (i.e., one one-thousandth of the present world population) were supportable before the agricultural revolution of about 10,000 years ago. By the beginning of the Christian era, 8,000 years later, the human population approximated 300,000,000, and there was apparently little increase in the ensuing millennium up to the year 1000 ce. Subsequent population growth was slow and fitful, especially given the plague epidemics and other catastrophes of the Middle Ages. By 1750, conventionally the beginning of the Industrial Revolution in Britain, world population may have been as high as 800,000,000. This means that in the 750 years from 1000 to 1750, the annual population growth rate averaged only about one-tenth of 1 percent.
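
(The average can be checked with the constant-growth relation r = ln(P2 ÷ P1) ÷ t: ln(800,000,000 ÷ 300,000,000) ÷ 750 ≈ 0.0013, or a little over one-tenth of 1 percent per year.)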

The reasons for such slow growth are well known. In the absence of what is now considered basic knowledge of sanitation and health (the role of bacteria in disease, for example, was unknown until the 19th century), mortality rates were very high, especially for infants and children. Only about half of newborn babies survived to the age of five years. Fertility was also very high, as it had to be to sustain the existence of any population under such conditions of mortality. Modest population growth might occur for a time in these circumstances, but recurring famines, epidemics, and wars kept long-term growth close to zero.

From 1750 onward population growth accelerated. In some measure this was a consequence of rising standards of living, coupled with improved transport and communication, which mitigated the effects of localized crop failures that previously would have resulted in catastrophic mortality. Occasional famines did occur, however, and it was not until the 19th century that a sustained decline in mortality took place, stimulated by the improving economic conditions of the Industrial Revolution and the growing understanding of the need for sanitation and public health measures.

The world population, which did not reach its first 1,000,000,000 until about 1800, added another 1,000,000,000 persons by 1930. (To anticipate further discussion below, the third was added by 1960, the fourth by 1974, and the fifth before 1990.) The most rapid growth in the 19th century occurred in Europe and North America, which experienced gradual but eventually dramatic declines in mortality. Meanwhile, mortality and fertility remained high in Asia, Africa, and Latin America.

Beginning in the 1930s and accelerating rapidly after World War II, mortality went into decline in much of Asia and Latin America, giving rise to a new spurt of population growth that reached rates far higher than any previously experienced in Europe. The rapidity of this growth, which some described as the “population explosion,” was due to the sharpness of the falls in mortality, which in turn were the result of improvements in public health, sanitation, and nutrition that were mostly imported from the developed countries. The external origins and the speed of the declines in mortality meant that there was little chance that they would be accompanied by the onset of a decline in fertility. In addition, the marriage patterns of Asia and Latin America were (and continue to be) quite different from those in Europe; marriage in Asia and Latin America is early and nearly universal, while that in Europe is usually late and significant percentages of people never marry.

These high growth rates occurred in populations already of very large size, meaning that global population growth became very rapid both in absolute and in relative terms. The peak rate of increase was reached in the early 1960s, when each year the world population grew by about 2 percent, or about 68,000,000 people. Since that time both mortality and fertility rates have decreased, and the annual growth rate has fallen moderately, to about 1.7 percent. But even this lower rate, because it applies to a larger population base, means that the number of people added each year has risen from about 68,000,000 to 80,000,000.
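
(The two sets of figures are mutually consistent: adding 68,000,000 people at a 2 percent rate implies a base population of 68,000,000 ÷ 0.02 ≈ 3.4 billion, roughly the world total of the early 1960s, while 80,000,000 people added at 1.7 percent implies a base of about 4.7 billion.)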

The developing countries since 1950

After World War II there was a rapid decline in mortality in much of the developing world. In part this resulted from wartime efforts to maintain the health of armed forces from industrialized countries fighting in tropical areas. Since all people and governments welcome proven techniques to reduce the incidence of disease and death, these efforts were readily accepted in much of the developing world, but they were not accompanied by the kinds of social and cultural changes that had occurred earlier and had led to fertility declines in industrialized countries.

The reduction in mortality, unaccompanied by a reduction in fertility, had a simple and predictable outcome: accelerating population growth. By 1960 many developing countries had rates of increase as high as 3 percent a year, exceeding by two- or threefold the highest rates ever experienced by European populations. Since a population increasing at this rate will double in only 23 years, the populations of such countries expanded dramatically. In the 25 years between 1950 and 1975, the population of Mexico increased from 27,000,000 to 60,000,000; Iran from 14,000,000 to 33,000,000; Brazil from 53,000,000 to 108,000,000; and China from 554,000,000 to 933,000,000.

The greatest population growth rates were reached in Latin America and in Asia during the mid- to late 1960s. Since then, these regions have experienced variable but sometimes substantial fertility declines along with continuing mortality declines, resulting in usually moderate and occasionally large declines in population growth. The most dramatic declines have been those of the People’s Republic of China, where the growth rate was estimated to have declined from well over 2 percent per year in the 1960s to about half that in the 1980s, following official adoption of a concerted policy to delay marriage and limit childbearing within marriage. The predominance of the Chinese population in East Asia means that this region has experienced the most dramatic declines in population growth of any of the developing regions.

Over the same period population growth rates have declined only modestly—and in some cases have actually risen—in other developing regions. In South Asia the rate has declined only from 2.4 to 2.0 percent; in Latin America, from about 2.7 to about 2.3 percent. Meanwhile, in Africa population growth has accelerated from 2.6 percent to more than 3 percent over the same period, following belated significant declines in mortality not accompanied by similar reductions in fertility.

The industrialized countries since 1950

For many industrialized countries, the period after World War II was marked by a “baby boom.” One group of four countries in particular—the United States, Canada, Australia, and New Zealand—experienced sustained and substantial rises in fertility from the depressed levels of the prewar period. In the United States, for example, fertility rose by two-thirds, reaching levels in the 1950s not seen since 1910.

A second group of industrialized countries, including most of western Europe and some eastern European countries (notably Czechoslovakia and East Germany), experienced what might be termed “baby boomlets.” For a few years after the war, fertility increased as a result of marriages and births deferred during wartime. These increases were modest and relatively short-lived, however, when compared with those of the true baby-boom countries mentioned above. In many of these European countries fertility had been very low in the 1930s; their postwar baby boomlets appeared as three- to four-year “spikes” in the graph of their fertility rates, followed by two full decades of stable fertility levels. Beginning in the mid-1960s, fertility levels in these countries began to move lower again and, in many cases, fell to levels comparable to or lower than those of the 1930s.

A third group of industrialized countries, consisting of most of eastern Europe along with Japan, showed quite different fertility patterns. Most did not register low fertility in the 1930s but underwent substantial declines in the 1950s after a short-lived baby boomlet. In many of these countries the decline persisted into the 1960s, but in some it was reversed in response to governmental incentives.

By the 1980s the fertility levels in most industrialized countries were very low, at or below those needed to maintain stable populations. There are two reasons for this phenomenon: the postponement of marriage and childbearing by many younger women who entered the labour force, and a reduction in the numbers of children born to married women.

Population projections

Demographic change is inherently a long-term phenomenon. Unlike populations of insects, human populations have rarely been subject to “explosion” or “collapse” in numbers. Moreover, the powerful long-term momentum that is built into the human age structure means that the effects of fertility changes become apparent only in the far future. For these and other reasons, it is by now conventional practice to employ the technology of population projection as a means of better understanding the implications of trends.

Population projections represent simply the playing out into the future of a set of assumptions about future fertility, mortality, and migration rates. It cannot be stated too strongly that such projections are not predictions, though they are misinterpreted as such frequently enough. A projection is a “what-if” exercise based on explicit assumptions that may or may not themselves be correct. As long as the arithmetic of a projection is done correctly, its utility is determined by the plausibility of its central assumptions. If the assumptions embody plausible future trends, then the projection’s outputs may be plausible and useful. If the assumptions are implausible, then so is the projection. Because the course of demographic trends is hard to anticipate very far into the future, most demographers calculate a set of alternative projections that, taken together, are expected to define a range of plausible futures, rather than to predict or forecast any single future. Because demographic trends sometimes change in unexpected ways, it is important that all demographic projections be updated on a regular basis to incorporate new trends and newly developed data.
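
To make this “playing out” concrete, here is a minimal sketch, in Python, of the cohort-component bookkeeping that underlies such projections; the function and the toy numbers are illustrative placeholders, not any agency’s actual method or data:

    def project_step(pop, survival, fertility, net_migration):
        # pop: persons in each age group, youngest first
        # survival[i]: fraction of group i surviving into group i + 1
        # fertility[i]: births per person in group i over one step
        # net_migration[i]: net migrants added to group i over one step
        births = sum(p * f for p, f in zip(pop, fertility))
        aged = [births] + [p * s for p, s in zip(pop[:-1], survival)]
        return [p + m for p, m in zip(aged, net_migration)]

    # One step of a toy three-group projection (invented numbers):
    print(project_step([100, 80, 60], [0.99, 0.95], [0.0, 0.5, 0.1], [1, 1, 0]))

Running the step repeatedly under low, medium, and high assumptions for the rates yields exactly the kind of variant projections described below.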

A standard set of projections for the world and for its constituent countries is prepared every two years by the Population Division of the United Nations. These projections include a low, medium, and high variant for each country and region.

Additional Information

The global population is a subject of immense importance, impacting various aspects of life on Earth. Understanding the dynamics of world population involves delving into both physical and human factors that influence population distribution and its myriad impacts. This article aims to provide a comprehensive overview of these factors, their implications, and the challenges they pose.

Physical Factors Influencing Population Distribution

Physical factors play a crucial role in determining where populations settle and thrive. These factors include:

Climate: Climate significantly influences population distribution. Regions with moderate climates, neither too hot nor too cold, tend to attract more inhabitants. Extreme climates, such as deserts or polar regions, are less populated due to harsh living conditions.
Topography: The geographical features of an area, such as mountains, plains, and bodies of water, affect population distribution. Flat plains and fertile valleys often have denser populations, whereas mountainous or rugged terrains may have sparse populations.
Water Resources: Access to water is essential for human settlement and agriculture. Areas with abundant freshwater sources are more likely to support larger populations compared to regions facing water scarcity.
Natural Disasters: The susceptibility to natural disasters like earthquakes, hurricanes, and floods influences population distribution. People tend to avoid areas prone to such calamities, preferring safer locales.

Human Factors Shaping Population Trends

Human factors also play a significant role in shaping population dynamics:

Economic Opportunities: Economic opportunities attract people to urban centers and industrial regions where employment prospects are higher. This leads to urbanization and population concentration in specific areas.
Political Stability: Political stability fosters population growth and development by creating a conducive environment for families to thrive. Conversely, regions plagued by conflict or political unrest often experience population displacement.
Social Factors: Cultural norms, social amenities, and quality of life influence where people choose to live. Societies with better healthcare, education, and infrastructure tend to have higher population densities.
Migration Patterns: Migration, whether voluntary or forced, significantly impacts population distribution. Factors such as seeking better opportunities, escaping persecution, or environmental degradation drive migration flows.

Impacts of Population Growth

Population growth has far-reaching implications for society, the environment, and the economy:

Pressure on Resources: As populations grow, there is increased demand for resources such as food, water, and energy. This can lead to resource depletion, environmental degradation, and competition for scarce resources.
Urbanization and Infrastructure Strain: Rapid population growth often results in urbanization, putting pressure on infrastructure and services like housing, transportation, and healthcare. Overcrowded cities struggle to meet the needs of their residents, leading to issues like congestion and pollution.
Environmental Degradation: Large populations exert pressure on the environment through deforestation, pollution, and habitat destruction. This can have detrimental effects on biodiversity, ecosystems, and the planet’s overall health.
Social and Economic Challenges: High population densities can strain social services and lead to socio-economic disparities. Issues like poverty, unemployment, and inadequate healthcare become more pronounced in densely populated areas.

Factors Influencing Population Growth

Several factors contribute to population growth or decline within specific regions:

Fertility Rates: The average number of children born to women in a given population significantly impacts population growth. High fertility rates contribute to rapid population growth, while declining fertility rates can lead to population stabilization or decline.
Mortality Rates: Mortality rates, particularly infant and child mortality, influence population growth by affecting the number of births and life expectancy. Low mortality rates contribute to population growth, while high mortality rates can offset natural population increase.
Migration: Migration patterns, whether internal or international, play a crucial role in shaping population trends. Migration can either contribute to population growth in destination areas or lead to population decline in areas of origin.
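
Taken together, these three components form the demographic balancing equation, which accounts exactly for population change over any interval: P(end) = P(start) + births − deaths + in-migrants − out-migrants.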

Conclusion

The world population is influenced by a complex interplay of physical and human factors, leading to diverse distribution patterns and impacts. Understanding these dynamics is essential for addressing challenges related to population growth, resource management, and sustainable development. By acknowledging the factors shaping population trends and implementing appropriate policies, societies can strive towards a balanced and prosperous future for all.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2499 2025-03-26 00:20:35

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 50,097

Re: Miscellany

2399) Oil Well

Gist

An oil well is a hole drilled into the earth to extract petroleum (oil and natural gas) from underground reservoirs.

An oil well is a deep, narrow hole in the ground that's used for bringing oil to the surface. You'll find oil wells in parts of Texas and Oklahoma, but you probably won't be able to create one in your own backyard.

When a petroleum company finds an oil-rich area, they start drilling oil wells there, drawing oil from the ground, processing it, and selling it. Oil wells have been around since fourth-century China, when workers bored holes using drills attached to long bamboo poles. In North America, the oldest oil wells were drilled by a heavy metal tool being dropped repeatedly in one spot, before the spinning type of tool known as a rotary drill was invented.

Summary

An oil well is a hole dug into the Earth that serves the purpose of bringing oil or other hydrocarbons - such as natural gas - to the surface. Oil wells almost always produce some natural gas and frequently bring water up with the other petroleum products.

Types of Wells

There are numerous different ways that an oil well can be drilled to maximize the output of the well while minimizing other costs. The most common type of well drilled today is known as a conventional well. These are wells drilled in the traditional sense: a location is chosen above the reservoir and the well is drilled vertically downward. Additionally, wells with a small amount of deviation in their path from the vertical are also considered to be conventional. This slight turning of the well is obtained during drilling by using a type of steerable device that shifts the direction the well is being dug. These wells are the most common and are fairly inexpensive to drill.

Horizontal wells are an alternative type of well used when conventional wells do not yield enough fuel. These wells are drilled and steered to enter a deposit nearly horizontally. They can hit targets and stimulate reservoirs in ways that a vertical well cannot. Combined with hydraulic fracturing, previously unproductive rocks can be used as sources for natural gas. Examples of these types of deposits include formations that contain shale gas or tight gas.

Other types of wells include offshore wells, which are drilled in the water instead of onshore. These provide access to previously inaccessible oil deposits. Multilateral wells, used occasionally, have several branches off the main borehole, each draining a separate part of the reservoir.

Drilling a Well

To drill a well, a specialized piece of equipment known as a drilling rig bores a hole through many layers of dirt and rock until it reaches the oil and gas reservoir where the oil is held. The size of the borehole differs from well to well, but is generally around 12.5 to 90 centimeters wide. To cut through this rock, the drill is pushed down by the weight of the piping above it. This piping is used to pump a thick fluid known as mud into the well. The mud assists in the drilling process by maintaining pressure in the well below ground and by collecting debris created from the drilling and bringing it up to the surface. As the drill digs deeper, sections of piping are attached to lengthen the well.

Details

An oil well is a drillhole bored into the Earth that is designed to bring petroleum oil hydrocarbons to the surface. Usually some natural gas is released as associated petroleum gas along with the oil. A well that is designed to produce only gas may be termed a gas well. Wells are created by drilling down into an oil or gas reserve and, if necessary, equipped with extraction devices such as pumpjacks. Creating the wells can be an expensive process, costing at least hundreds of thousands of dollars, and costing much more when in difficult-to-access locations, e.g., offshore. The process of modern drilling for wells first started in the 19th century but was made more efficient with advances to oil drilling rigs and technology during the 20th century.

Wells are frequently sold or exchanged between different oil and gas companies as an asset – in large part because during falls in the price of oil and gas, a well may be unproductive, but if prices rise, even low-production wells may be economically valuable. Moreover, new methods, such as hydraulic fracturing (a process of injecting gas or liquid to force more oil or natural gas production) have made some wells viable. However, peak oil and climate policy surrounding fossil fuels have made fewer of these wells and costly techniques viable.

However, the large number of neglected or poorly maintained wellheads is a significant environmental issue: they may leak methane or other toxic substances into local air, water and soil systems. This pollution often becomes worse when wells are abandoned or orphaned – i.e., when wells that are no longer economically viable are no longer maintained by their (former) owners. A 2020 estimate by Reuters suggested that there were at least 29 million abandoned wells internationally, creating a significant source of greenhouse gas emissions worsening climate change.

History

The earliest known oil wells were drilled in China in 347 CE. These wells had depths of up to about 240 metres (790 ft) and were drilled using bits attached to bamboo poles. The oil was burned to evaporate brine producing salt. By the 10th century, extensive bamboo pipelines connected oil wells with salt springs. The ancient records of China and Japan are said to contain many allusions to the use of natural gas for lighting and heating. Petroleum was known as burning water in Japan in the 7th century.

According to Kasem Ajram, petroleum was distilled by the Persian alchemist Muhammad ibn Zakarīya Rāzi (Rhazes) in the 9th century, producing chemicals such as kerosene in the alembic (al-ambiq), which was mainly used for kerosene lamps. Arab and Persian chemists also distilled crude oil in order to produce flammable products for military purposes. Through Islamic Spain, distillation became available in Western Europe by the 12th century.

Some sources claim that from the 9th century, oil fields were exploited in the area around modern Baku, Azerbaijan, to produce naphtha for the petroleum industry. These places were described by Marco Polo in the 13th century, who put the output of those oil wells at hundreds of shiploads. When Marco Polo in 1264 visited Baku, on the shores of the Caspian Sea, he saw oil being collected from seeps. He wrote that "on the confines toward Geirgine there is a fountain from which oil springs in great abundance, in as much as a hundred shiploads might be taken from it at one time."

In 1846, at the Bibi-Heybat settlement near Baku, the first well ever drilled for oil exploration was sunk with percussion tools to a depth of 21 metres (69 ft). In 1846–1848, the first modern oil wells were drilled on the Absheron Peninsula north-east of Baku by Russian engineer Vasily Semyonov, applying the ideas of Nikolay Voskoboynikov.

Ignacy Łukasiewicz, a Polish pharmacist and petroleum industry pioneer, drilled one of the world's first modern oil wells in 1854 in the Polish village of Bóbrka, Krosno County, and in 1856 built one of the world's first oil refineries.

In North America, the first commercial oil well entered operation in Oil Springs, Ontario in 1858, while the first offshore oil well was drilled in 1896 in the Summerland Oil Field on the California Coast.

The earliest oil wells in modern times were drilled percussively, by repeatedly raising and dropping a bit on the bottom of a cable into the borehole. In the 20th century, cable tools were largely replaced with rotary drilling, which could drill boreholes to much greater depths and in less time. The record-depth Kola Borehole used a mud motor while drilling to achieve a depth of over 12,000 metres (12 km; 39,000 ft; 7.5 mi).

Until the 1970s, most oil wells were essentially vertical, although lithological variations cause most wells to deviate at least slightly from true vertical (see deviation survey). However, modern directional drilling technologies allow for highly deviated wells that can, given sufficient depth and with the proper tools, actually become horizontal. This is of great value as the reservoir rocks that contain hydrocarbons are usually horizontal or nearly horizontal; a horizontal wellbore placed in a production zone has more surface area in the production zone than a vertical well, resulting in a higher production rate. The use of deviated and horizontal drilling has also made it possible to reach reservoirs several kilometers or miles away from the drilling location (extended reach drilling), allowing for the production of hydrocarbons located below locations that are difficult to place a drilling rig on, environmentally sensitive, or populated.

Cost

The cost to drill a well depends mainly on the daily rate of the drilling rig, the extra services required to drill the well, the duration of the well program (including downtime and weather time), and the remoteness of the location (logistic supply costs).

The daily rates of offshore drilling rigs vary by their depth capability and by market availability. Rig rates reported by industry web services show that deepwater floating drilling rigs cost over twice as much per day as the shallow-water fleet, and rates for the jack-up fleet can vary by a factor of 3 depending upon capability.

With deepwater drilling rig rates in 2015 of around $520,000/day, and similar additional spread costs, a deepwater well of a duration of 100 days can cost around US$100 million.

With high-performance jackup rig rates in 2015 of around $177,000/day, and similar service costs, a high-pressure, high-temperature well of duration 100 days can cost about US$30 million.

Onshore wells can be considerably cheaper, particularly if the field is at a shallow depth, where costs range from less than $4.9 million to $8.3 million, with the average completion costing $2.9 million to $5.6 million per well. Completion makes up a larger portion of onshore well costs than of offshore wells, which generally carry the added cost burden of a surface platform.
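
The arithmetic in these examples reduces to a daily rate multiplied by duration. Here is a minimal sketch in Python, assuming (as the examples above do) a daily spread cost similar to the rig rate; real well budgets itemize far more services:

    def well_cost(rig_day_rate, spread_day_rate, days):
        # Total program cost grows linearly with duration; "spread" stands in
        # for services, logistics and other daily costs beyond the rig itself.
        return (rig_day_rate + spread_day_rate) * days

    print(well_cost(520_000, 520_000, 100))  # 104,000,000 -- "around US$100 million"
    print(well_cost(177_000, 177_000, 100))  # 35,400,000 -- near the US$30 million quoted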

The total costs mentioned do not include those associated with the risk of explosion and leakage of oil. Those costs include the cost of protecting against such disasters, the cost of the cleanup effort, and the hard-to-calculate cost of damage to the company's image.

Additional Information

Oil is among the more versatile nouns in the English language. Depending on your recent experience and the nature of your everyday life, hearing the word might evoke images of stir-fry cooking, aggressive tanning, or the "thick" and "earthy" smell of an auto-repair shop.

But today, oil – once dubbed "black gold" as a nod to the inevitable vast fortune of anyone who discovered a sizable oil field in centuries past – has something of a bad reputation.

Fossil fuels allowed human civilization to take an unprecedented technological and industrial worldwide leap forward starting in the 19th century, yet oil and its ancient carbon-based cousins are pariahs of a sort today. This is because incontrovertible evidence has mounted to show that oil, for all its value across not only the transportation sector but every other human endeavor as well, is seriously damaging to the environment when burned.

Whatever your ideas may be on how to best address the energy needs of a world with a population topping 7 billion as of 2019, anyone who has ever seen an oil well, even from a distance, can't help but appreciate the sheer engineering triumph involved in pumping something out of the ground that is not only deep within rock, but in rock below the ocean floor itself. Oil wells come in a variety of different types and have a more colorful history than you might expect.

Fossil Fuels and Energy: The Oil Imperative

"Oil" can refer to a number of different substances that are nonpolar and liquid at room temperature. Many types of oil provide nutritional energy. They do not dissolve in water (which is why oil is hard to clean using water alone), as their long hydrogen-carbon chemical chains are hydrophobic ("water fearing"). "Oil" in the present context refers to the stuff that is found in significant concentrations in the Middle East, off the coast of Venezuela, North America and a few other regions.

Oil (also commonly called petroleum, from the Latin petra, "rock," and oleum, "oil") is one of three primary fossil fuels, the name given to substances formed over many millions of years from living materials, though not from actual fossils. The other two are natural gas and coal. Together, fossil fuels are expected to provide the vast majority of the world's energy supply beyond 2050 despite voluble concerns from scientists and environmental groups about the planetary warming resulting in part from their combustion.

Electricity, heating and transportation might be considered the main uses of oil and its cohorts, but the reach of fossil fuels extends far into manufacturing, food preparation, cosmetics and other industries as well.

As of 2018, oil contributed a greater share of U.S. energy consumption than natural gas: 36 percent for oil versus 31 percent for natural gas (and 13 percent for coal, making fossil fuels responsible for 80 percent of U.S. fuel consumed). The increase in the use of the drilling technique known as hydraulic fracturing, or "fracking," to extract natural gas from the ground sparked a surge in that fuel's consumption beginning in the 1990s.

Oil Use in the 21st Century

All indications are that there will be a high demand for properly functioning oil wells for the foreseeable future. As noted, petroleum supplied 36 percent of American energy needs as of 2018, and yielded almost half of all energy derived from fossil fuels. These "internal" figures were subject to rapid shifts in the first fifth of the 21st century, but on the whole, fossil fuels were expected to account for virtually the same share of energy consumption both domestically and globally in 2040.

In 2017, the U.S. powered through almost 20 million 42-gallon barrels of crude oil per day. That's about 840 million gallons, or over two and a half gallons per person.
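
(The conversion: a U.S. barrel of crude is 42 gallons, so 20,000,000 barrels × 42 ≈ 840,000,000 gallons per day; spread across roughly 325,000,000 U.S. residents in 2017, that is about 2.6 gallons each.)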

Oil is used – and, for the moment, in most cases required – to move vehicles. (Don't be confused by the terminology: The stuff called "gasoline" comes from petroleum, while natural gas is something else entirely.) It is also used directly to heat buildings and produce electricity. In manufacturing, the petrochemical industry uses petroleum as a raw material to make products such as plastics, solvents and other goods.

History of the Oil Well

Unlike the advent of the telephone, human heart transplantation or the wireless radio, there is no one person who can be credited with being "the" oil well inventor.

Oil wells were being drilled with bamboo in China as far back as 347 CE, and these were ambitious projects: depths of up to 800 feet were reached using this technology. It wasn't until the 1500s or so that oil taken from the ground was used in the lamps of the day.

The first oil wells reached Europe, Canada and the U.S. in the 1850s, driven by the promise of the nascent Industrial Revolution that relied on previously unimaginable amounts of power production to sustain its own burgeoning growth.

Throughout the 20th century, the introduction of steam-recovery practices, horizontal drilling and finally computerization continued to grow and shape the extraction aspect of the booming oil industry. Greater production demanded more wells, and more capable ones, and this has been the result, along with some perhaps predictable "black eyes" on the industry.

As of 2016, over 1,500 oil companies had been incorporated in the United States alone.

Where Oil Comes From

How is oil actually located before it is removed from the ground, and how do petrochemical engineers determine whether a located store of oil is worth the expense of withdrawing it from the ground by whatever means is easiest? While most of the attention given to oil wells naturally lies on their visible function, few people understand how anyone knows where to put these imposing structures in the first place.

One especially little-known feature about oil extraction: While it's true that it's found underground, it's not the case that it exists in convenient pools or reservoirs or even flows, like sap in a tree. For the most part, it needs to be removed from the interior of actual rocks, albeit large ones. (Imagine having to have major jaw surgery to have a single troublesome tooth extracted.)

Fortunately for the oil industry, nature does much of the work of making oil available by pushing some of it out of the rocks, which often come under incredible internal pressure. This allows human oil-seekers to find a trail to the main source located deeper within the Earth.

Basic Oil Well Structure

An oil well diagram is required to properly follow the material here, since most of the terminology is unfamiliar to most people.

Every oil well needs to be drilled before equipment can be arranged around the hole, and this is what is meant by the term "drilling rig." After this heavy bore is used to create a hole anywhere from about six inches to three feet wide, the sides of the well are reinforced with casing made from various materials in layers.

The oil well pump apparatus sits above the top of the well, where oil removed from below is directed to one side. This "production tree" looks a little like a horse from the side, and has components with names to match. The bridle connects the rod pushing vertically down into the well to the horse's "head," which redirects force horizontally along a walking beam. A series of elaborate levers, pulleys and gears leads to the prime mover, the source of mechanical power at the opposite end of the production tree.

Types of Oil Well Drilling

Two main techniques are used to drill oil wells today. In horizontal drilling, the idea is to extract oil that happens to be oriented in a mostly sideways direction in relation to the ground. This is the situation most commonly observed in rock shale because of the way the rock itself forms (it tends to fracture sideways under high pressure).

A horizontal drill unit has a J-shaped pattern, which means its operators must first figure out how far to drill straight down before heading more horizontally (not a 90-degree turn). Once this depth is ascertained, the next step is finding the right angle to optimize access to the oil below and to one side.

In hydraulic fracturing ("fracking"), a newer technique that took off at the close of the 20th century, highly pressurized fluid that contains sand and other rough material is pumped through previously drilled well bores, such as the aforementioned horizontal drilling wells. Despite the success of fracking from an efficiency standpoint, its ecological consequences have made it a target of environmental groups.

For example, over 90 percent of the fluid needed for fracking remains in the ground after it's put there, and the requirement for water is exceedingly high. Other concerns include toxin exposure, groundwater contamination, and the lowering of local air quality.

Drilling Rigs

Most of the earlier rigs of the current oil-well era were A-frame rigs, which are still used today, mostly for exploratory work. Large-diameter (large-bore) drills are used in situations where surgical precision is not an issue.

The type of auger (the actual drill part of a drilling tool) used depends on local conditions underground, to the extent these are known or can be predicted. If a sample from a given area is high in groundwater, for example, a hollow auger is likely to be chosen. The evaluation and drilling processes differ only in scale from deciding what kind of shovel to use to dig a post hole in your back yard.

As you may have guessed, portable oil drilling rigs entered the picture along the way, and a common model weighs a sizable but manageable 265 pounds or so. These can be mounted on trucks as needed.

Oil Well Disasters

On April 20, 2010, an oil rig named Deepwater Horizon, located in the Gulf of Mexico off the southeastern coast of the United States, exploded, killing 11 workers. In the roughly three months that followed before engineers from BP (formerly British Petroleum), the well's operator, were able to cap the damaged Macondo well below the rig, an estimated 4 million barrels of crude oil (at 42 U.S. gallons per barrel, roughly 170 million gallons) made its way into the ocean, making this the largest accidental marine oil spill on record.

Countless lawsuits followed in the wake of the explosion, the ecological effects of which were severe and still being evaluated a decade later. When such disasters occur, capping the damaged well becomes a logistical nightmare, both because the wellhead lies deep underwater and because of the fantastic pressures in play.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2500 2025-03-27 00:01:30

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 50,097

Re: Miscellany

2400) Anthropology

Gist

Anthropology is the comprehensive study of humanity, encompassing human origins, evolution, cultures, societies, and languages, both past and present. It aims to understand the diversity and shared experiences of humankind, exploring both biological and cultural aspects.

Anthropology is the systematic study of humanity, with the goal of understanding our evolutionary origins, our distinctiveness as a species, and the great diversity in our forms of social existence across the world and through time.

Summary

Anthropology is the study of the origin and development of human societies and cultures. Culture is the learned behavior of people, including their languages, belief systems, social structures, institutions, and material goods. Anthropologists study the characteristics of past and present human communities through a variety of techniques. In doing so, they investigate and describe how different peoples of our world lived throughout history.

Anthropologists aim to study and present their human subjects in a clear and unbiased way. They attempt to achieve this by observing subjects in their local environment. Anthropologists then describe interactions and customs, a process known as ethnography. By participating in the everyday life of their subjects, anthropologists can better understand and explain the purpose of local institutions, culture, and practices. This process is known as participant-observation.

As anthropologists study societies and cultures different from their own, they must evaluate their interpretations to make sure they aren’t biased. This bias is known as ethnocentrism: the habit of viewing other groups as inferior to one’s own cultural group.

Taken as a whole, these steps enable anthropologists to describe people in the people’s own terms.

Subdisciplines of Anthropology

Anthropology’s diverse topics of study are generally categorized in four subdisciplines. A subdiscipline is a specialized field of study within a broader subject or discipline. Anthropologists specialize in cultural or social anthropology, linguistic anthropology, biological or physical anthropology, and archaeology. While subdisciplines can overlap and are not always seen by scholars as distinct, each tends to use different techniques and methods.

Cultural Anthropology

Cultural anthropology, also known as social anthropology, is the study of the learned behavior of groups of people in specific environments. Cultural anthropologists base their work in ethnography, a research method that uses field work and participant-observation to study individual cultures and customs.

Elizabeth Kapu'uwailani Lindsey is a National Geographic Fellow in anthropology. As a doctoral student, she documented rare and nearly lost traditions of the palu, Micronesian navigators who don’t use maps or instruments. Among the traditions she studied were the chants and practices of the Satawalese, a tiny cultural group native to a single coral atoll in the Federated States of Micronesia.

Cultural anthropologists who analyze and compare different cultures are known as ethnologists. Ethnologists may observe how specific customs develop differently in different cultures and interpret why these differences exist.

National Geographic Explorer-in-Residence Wade Davis is an ethnobotanist. He spent more than three years in Latin America, collecting and studying plants that different indigenous groups use in their daily lives. His work compares how these groups understand and use plants as food, medicine, and in religious ceremonies.

Linguistic Anthropology

Linguistic anthropology is the study of how language influences social life. Linguistic anthropologists say language provides people with the intellectual tools for thinking and acting in the world. Linguistic anthropologists focus on how language shapes societies and their social networks, cultural beliefs, and understanding of themselves and their environments.

To understand how people use language for social and cultural purposes, linguistic anthropologists closely document what people say as they engage in daily social activities. This documentation relies on participant-observation and other methods, including audiovisual recording and interviews with participants.

Lera Boroditsky, a cognitive scientist, studies forms of communication among the Pormpuraaw, an Aboriginal community in Australia. Boroditsky found that almost all daily activities and conversations were placed within the context of cardinal directions. For example, when greeting someone in Pormpuraaw, one asks, “Where are you going?” A response may be: “A long way to the south-southwest.” A person might warn another, “There is a snake near your northwest foot.” This language enables the Pormpuraaw to locate and navigate themselves in landscapes with extreme precision, but makes communication nearly impossible for those without an absolute knowledge of cardinal directions.

Linguistic anthropologists may document native languages that are in danger of extinction. The Enduring Voices Project at National Geographic aimed to prevent language extinction by embarking on expeditions that create textual, visual, and auditory records of threatened languages. The project also assisted indigenous communities in their efforts to revitalize and maintain their languages. Enduring Voices has documented the Chipaya language of Bolivia, the Yshyr Chamacoco language of Paraguay, and the Matukar Panau language of Papua New Guinea, among many others.

Biological Anthropology

Biological anthropology, also known as physical anthropology, is the study of the evolution of human beings and their living and fossil relatives. Biological anthropology places human evolution within the context of human culture and behavior. This means biological anthropologists look at how physical developments, such as changes in our skeletal or genetic makeup, are interconnected with social and cultural behaviors throughout history.

To understand how humans evolved from earlier life forms, some biological anthropologists study primates, such as monkeys and apes. Primates are considered our closest living relatives. Analyzing the similarities and differences between human beings and the “great apes” helps biological anthropologists understand human evolution.

Jane Goodall, a primatologist, has studied wild chimpanzees (Pan troglodytes) in Tanzania for more than 40 years. By living with these primates for extended periods of time, Goodall discovered a number of similarities between humans and chimpanzees.

One of the most notable of Goodall’s discoveries was that chimpanzees use basic tools, such as sticks. Toolmaking is considered a key juncture in human evolution. Biological anthropologists link the evolution of the human hand, with a longer thumb and stronger gripping muscles, to our ancient ancestors’ focus on toolmaking.

Other biological anthropologists examine the skeletal remains of our human ancestors to see how we have adapted to different physical environments and social structures over time. This specialty is known as human paleontology, or paleoanthropology.

Zeresenay Alemseged, a National Geographic Explorer, examines hominid fossils found at the Busidima-Dikika anthropological site in Ethiopia. Alemseged’s work aims to prove that a wide diversity of early hominid species existed three million to four million years ago. Paleoanthropologists study why some hominid species were able to survive for thousands of years, while others were not.

Biological anthropology may focus on how the biological characteristics of living people are related to their social or cultural practices. The Ju/’hoansi, a foraging society of Namibia, for example, have developed unique physical characteristics in response to cold weather and a lack of high-calorie foods. A thick layer of fat protects vital organs of the chest and abdomen, and veins shrink at night. This reduces the Ju/’hoansi’s heat loss and keeps their core body temperature at normal levels.

Archaeology

Archaeology is the study of the human past using material remains. These remains can be any objects that people created, modified, or used. Archaeologists carefully uncover and examine these objects in order to interpret the experiences and activities of peoples and civilizations throughout history.

Archaeologists often focus their work on a specific period of history. Archaeologists may study prehistoric cultures—cultures that existed before the invention of writing. These studies are important because reconstructing a prehistoric culture’s way of life can only be done through interpreting the artifacts they left behind. For example, macaw eggshells, skeletal remains, and ceramic imagery recovered at archaeological sites in the United States Southwest suggest the important role macaws played as exotic trade items and objects of worship for prehistoric peoples in that area.

Other archaeologists may focus their studies on a specific culture or aspect of cultural life. Constanza Ceruti, a National Geographic Emerging Explorer, is a high-altitude archaeologist specializing in artifacts and features of the Incan Empire. Along with archaeological evidence, Ceruti analyzes historical sources and traditional Andean beliefs. These data help her reconstruct what ancient sites looked like, the symbolic meaning behind each artifact, and how ceremonies took place.

History of Anthropology

Throughout history, the study of anthropology has reflected our evolving relationships with other people and cultures. These relationships are deeply connected to political, economic, and social forces present at different points in history.

The study of history was an important aspect of ancient Greek and Roman cultures, which focused on using reason and inquiry to understand and create just societies. Herodotus, a Greek historian, traveled through regions as far-flung as present-day Libya, Ukraine, Egypt, and Syria during the fifth century B.C.E. Herodotus traveled to these places to understand the origins of conflict between Greeks and Persians. Along with historical accounts, Herodotus described the customs and social structures of the peoples he visited. These detailed observations are considered one of the world’s first exercises in ethnography.

The establishment of exchange routes was also an important development in expanding an interest in societies and cultures. Zhang Qian was a diplomat who negotiated trade agreements and treaties between China and communities throughout Central Asia, for instance. Zhang’s diplomacy and interest in Central Asia helped spur the development of the Silk Road, one of history’s greatest networks for trade, communication, and exchange. The Silk Road provided a vital link between Asia, East Africa, and Eastern Europe for thousands of years.

Medieval scholars and explorers, who traveled the world to develop new trading partnerships, continued to keep accounts of cultures they encountered. Marco Polo, a Venetian merchant, wrote the first detailed descriptions of Central Asia and China, where he traveled for 24 years. Polo’s writings greatly elaborated Europe’s early understandings of Asia, its peoples, and practices.

Ibn Battuta traveled much more extensively than Marco Polo. Battuta was a Moroccan scholar who regularly traveled throughout North Africa and the Middle East. His expeditions, as far east as India and China, and as far south as Kenya, are recorded in his memoir, the Rihla.

Many scholars argue that modern anthropology developed during the Age of Enlightenment, a cultural movement of 18th century Europe that focused on the power of reason to advance society and knowledge. Enlightenment scholars aimed to understand human behavior and society as phenomena that followed defined principles. This work was strongly influenced by the work of natural historians, such as Georges Buffon. Buffon studied humanity as a zoological species—a community of Homo sapiens was just one part of the flora and fauna of an area.

Europeans applied the principles of natural history to document the inhabitants of newly colonized territories and other indigenous cultures they came in contact with. Colonial scholars studied these cultures as “human primitives,” inferior to the advanced societies of Europe. These studies justified the colonial agenda by describing foreign territories and peoples as needing European reason and control. Today, we recognize these studies as racist.

Colonial thought deeply affected the work of 19th century anthropologists. They followed two main theories in their studies: evolutionism and diffusionism. Evolutionists argued that all societies develop in a predictable, universal sequence. Anthropologists who believed in evolutionism placed cultures within this sequence. They placed non-European colonized societies into the “savagery” stage and considered only the European powers to have reached the “civilization” stage. Evolutionists believed that all societies would reach the civilization stage when they adopted the traits of these powers. Conversely, they studied “savage” societies as a means of understanding the primitive origins of European civilizations.

Diffusionists believed all societies stemmed from a set of “culture circles” that spread, or diffused, their practices throughout the world. By analyzing and comparing the cultural traits of a society, diffusionists could determine from which culture circle that society derived. W.J. Perry, a British anthropologist, believed all aspects of world cultures—agriculture, domesticated animals, pottery, civilization itself—developed from a single culture circle: Egypt.

Diffusionists and evolutionists both argued that all cultures could be compared to one another. They also believed certain cultures (mostly their own) were superior to others.

These theories were sharply criticized by 20th-century anthropologists who strived to understand particular cultures in those cultures’ own terms, not in comparison to European traditions. The theory of cultural relativism, supported by pioneering German-American anthropologist Franz Boas, argued that one could only understand a person’s beliefs and behaviors in the context of his or her own culture.

To put societies in cultural context, anthropologists began to live in these societies for long periods of time. They used the tools of participant-observation and ethnography to understand and describe the social and cultural life of a group more fully. Turning away from comparing cultures and finding universal laws about human behavior, modern anthropologists describe particular cultures or societies at a given place and time.

Other anthropologists began to criticize the discipline’s focus on cultures from the developing world. These anthropologists turned to analyzing the practices of everyday life in the developed world. As a result, ethnographic work has been conducted on a wider variety of human societies, from university hierarchies to high school sports teams to residents of retirement homes.

Anthropology Today

New technologies and emerging fields of study enable contemporary anthropologists to uncover and analyze more complex information about peoples and cultures. Archaeologists and biological anthropologists use CT scanners, which combine a series of X-ray views taken from different angles, to produce cross-sectional images of the bones and soft tissues inside human remains.

Zahi Hawass, a former National Geographic Explorer-in-Residence, has used CT scans on ancient Egyptian mummies to learn more about patterns of disease, health, and mortality in ancient Egypt. These scans revealed one mummy as an obese, 50-year-old woman who suffered from tooth decay. Hawass and his team were able to identify this mummy as Queen Hatshepsut, a major figure in Egyptian history, after finding one of her missing teeth in a ritual box inscribed with her name.

The field of genetics uses elements of anthropology and biology. Genetics is the study of how characteristics are passed down from one generation to the next. Geneticists study DNA, a chemical in every living cell of every organism. DNA studies suggest all human beings descend from a group of ancestors, some of whom began to migrate out of Central Africa about 60,000 years ago.

Anthropologists also apply their skills and tools to understand how humans create new social connections and cultural identities. Michael Wesch, a National Geographic Emerging Explorer, is studying how new media platforms and digital technologies, such as Facebook and YouTube, are changing how people communicate and relate to one another. As a “digital ethnographer,” Wesch’s findings about our relationships to new media are often presented as videos or interactive web experiences that incorporate hundreds of participant-observers. Wesch is one of many anthropologists expanding how we understand and navigate our digital environment and our approach to anthropological research.

Details

Anthropology is “the science of humanity,” studying human beings in aspects ranging from the biology and evolutionary history of Homo sapiens to the features of society and culture that decisively distinguish humans from other animal species. Because of the diverse subject matter it encompasses, anthropology has become, especially since the middle of the 20th century, a collection of more specialized fields. Physical anthropology is the branch that concentrates on the biology and evolution of humanity. The branches that study the social and cultural constructions of human groups are variously recognized as belonging to cultural anthropology (or ethnology), social anthropology, linguistic anthropology, and psychological anthropology. Archaeology, as the method of investigation of prehistoric cultures, has been an integral part of anthropology since it became a self-conscious discipline in the latter half of the 19th century.

Overview

Throughout its existence as an academic discipline, anthropology has been located at the intersection of natural science and humanities. The biological evolution of Homo sapiens and the evolution of the capacity for culture that distinguishes humans from all other species are indistinguishable from one another. While the evolution of the human species is a biological development like the processes that gave rise to the other species, the historical appearance of the capacity for culture initiates a qualitative departure from other forms of adaptation, based on an extraordinarily variable creativity not directly linked to survival and ecological adaptation. The historical patterns and processes associated with culture as a medium for growth and change, and the diversification and convergence of cultures through history, are thus major foci of anthropological research.

In the middle of the 20th century, the distinct fields of research that separated anthropologists into specialties were (1) physical anthropology, emphasizing the biological process and endowment that distinguishes Homo sapiens from other species, (2) archaeology, based on the physical remnants of past cultures and former conditions of contemporary cultures, usually found buried in the earth, (3) linguistic anthropology, emphasizing the unique human capacity to communicate through articulate speech and the diverse languages of humankind, and (4) social and/or cultural anthropology, emphasizing the cultural systems that distinguish human societies from one another and the patterns of social organization associated with these systems. By the middle of the 20th century, many American universities also included (5) psychological anthropology, emphasizing the relationships among culture, social structure, and the human being as a person.

The concept of culture as the entire way of life or system of meaning for a human community was a specialized idea shared mainly by anthropologists until the latter half of the 20th century. However, it had become a commonplace by the beginning of the 21st century. The study of anthropology as an academic subject had expanded steadily through those 50 years, and the number of professional anthropologists had increased with it. The range and specificity of anthropological research and the involvement of anthropologists in work outside of academic life have also grown, leading to the existence of many specialized fields within the discipline. Theoretical diversity has been a feature of anthropology since it began and, although the conception of the discipline as “the science of humanity” has persisted, some anthropologists now question whether it is possible to bridge the gap between the natural sciences and the humanities. Others argue that new integrative approaches to the complexities of human being and becoming will emerge from new subfields dealing with such subjects as health and illness, ecology and environment, and other areas of human life that do not yield easily to the distinction between “nature” and “culture” or “body” and “mind.”

Anthropology in 1950 was—for historical and economic reasons—instituted as a discipline mainly found in western Europe and North America. Field research was established as the hallmark of all the branches of anthropology. While some anthropologists studied the “folk” traditions in Europe and America, most were concerned with documenting how people lived in nonindustrial settings outside these areas. These finely detailed studies of everyday life of people in a broad range of social, cultural, historical, and material circumstances were among the major accomplishments of anthropologists in the second half of the 20th century.

Beginning in the 1930s, and especially in the post-World War II period, anthropology was established in a number of countries outside western Europe and North America. Very influential work in anthropology originated in Japan, India, China, Mexico, Brazil, Peru, South Africa, Nigeria, and several other Asian, Latin American, and African countries. The world scope of anthropology, together with the dramatic expansion of social and cultural phenomena that transcend national and cultural boundaries, has led to a shift in anthropological work in North America and Europe. Research by Western anthropologists is increasingly focused on their own societies, and there have been some studies of Western societies by non-Western anthropologists. By the end of the 20th century, anthropology was beginning to be transformed from a Western—and, some have said, “colonial”—scholarly enterprise into one in which Western perspectives are regularly challenged by non-Western ones.

History of anthropology

The modern discourse of anthropology crystallized in the 1860s, fired by advances in biology, philology, and prehistoric archaeology. In On the Origin of Species (1859), Charles Darwin affirmed that all forms of life share a common ancestry. Fossils began to be reliably associated with particular geologic strata, and fossils of recent human ancestors were discovered, most famously the first Neanderthal specimen, unearthed in 1856. In 1871 Darwin published The Descent of Man, which argued that human beings shared a recent common ancestor with the great African apes. He identified the defining characteristic of the human species as their relatively large brain size and deduced that the evolutionary advantage of the human species was intelligence, which yielded language and technology.

The pioneering anthropologist Edward Burnett Tylor concluded that as intelligence increased, so civilization advanced. All past and present societies could be arranged in an evolutionary sequence. Archaeological findings were organized in a single universal series (Stone Age, Bronze Age, Iron Age, etc.) thought to correspond to stages of economic organization from hunting and gathering to pastoralism, agriculture, and industry. Some contemporary peoples that remained hunter-gatherers or pastoralists were regarded as laggards in evolutionary terms, representing stages of evolution through which all other societies had passed. They bore witness to early stages of human development, while the industrial societies of northern Europe and the United States represented the pinnacle of human achievement.

Darwin’s arguments were drawn upon to underwrite the universal history of the Enlightenment, according to which the progress of human institutions was inevitable, guaranteed by the development of rationality. It was assumed that technological progress was constant and that it was matched by developments in the understanding of the world and in social forms. Tylor advanced the view that all religions had a common origin, in the belief in spirits. The original religious rite was sacrifice, which was a way of feeding these spirits. Modern religions retained some of these early features, but as human beings became more intelligent, and so more rational, superstitions were gradually refined and would eventually be abandoned. James George Frazer posited a progressive and universal progress from faith in magic through to belief in religion and, finally, to the understanding of science.

John Ferguson McLennan, Lewis Henry Morgan, and other writers argued that there was a parallel development of social institutions. The first humans were promiscuous (like, it was thought, the African apes), but at some stage blood ties were recognized between mother and children and incest between mother and son was forbidden. In time more restrictive forms of mating were introduced and paternity was recognized. Blood ties began to be distinguished from territorial relationships, and distinctive political structures developed beyond the family circle. At last monogamous marriage evolved. Paralleling these developments, technological advances produced increasing wealth, and arrangements guaranteeing property ownership and regulating inheritance became more significant. Eventually the modern institutions of private property and territorially based political systems developed, together with the nuclear family.

An alternative to this Anglo-American “evolutionist” anthropology established itself in the German-speaking countries. Its scientific roots were in geography and philology, and it was concerned with the study of cultural traditions and with adaptations to local ecological constraints rather than with universal human histories. This more particularistic and historical approach was spread to the United States at the end of the 19th century by the German-trained scholar Franz Boas. Skeptical of evolutionist generalizations, Boas advocated instead a “diffusionist” approach. Rather than graduating through a fixed series of intellectual, moral, and technological stages, societies or cultures changed unpredictably, as a consequence of migration and borrowing.

Fieldwork

The first generation of anthropologists had tended to rely on others—locally based missionaries, colonial administrators, and so on—to collect ethnographic information, often guided by questionnaires that were issued by metropolitan theorists. In the late 19th century, several ethnographic expeditions were organized, often by museums. As reports on customs came in from these various sources, the theorists would collate the findings in comparative frameworks to illustrate the course of evolutionary development or to trace local historical relationships.

The first generation of professionally trained anthropologists began to undertake intensive fieldwork on their own account in the early 20th century. As theoretically trained investigators began to spend long periods alone in the field, on a single island or in a particular tribal community, the object of investigation shifted. The aim was no longer to establish and list traditional customs. Field-workers began to record the activities of flesh-and-blood human beings going about their daily business. To get this sort of material, it was no longer enough to interview local authority figures. The field-worker had to observe people in action, off guard, to listen to what they said to each other, to participate in their daily activities. The most famous of these early intensive ethnographic studies was carried out between 1915 and 1918 by Bronisław Malinowski in the Trobriand Islands (now Kiriwina Islands) off the southeastern coast of New Guinea, and his Trobriand monographs, published between 1922 and 1935, set new standards for ethnographic reportage.

These new field studies reflected and accelerated a change of theoretical focus from the evolutionary and historical interests of the 19th century. Inspired by the social theories of Émile Durkheim and the psychological theories of Wilhelm Wundt and others, the ultimate aim was no longer to discover the origins of Western customs but rather to explain the purposes that were served by particular institutions or religious beliefs and practices. Malinowski explained that Trobriand magic was not simply poor science. The “function” of garden magic was to sustain the confidence of gardeners, whose investments could not be guaranteed. His colleague, A.R. Radcliffe-Brown, adopted a more sociological, Durkheimian line of argument, explaining, for example, that the “function” of ancestor worship was to sustain the authority of fathers and grandfathers and to back up the claims of family responsibility. Perhaps the most influential sociological explanation of early institutions was Marcel Mauss’s account of gift exchanges, illustrated by such diverse practices as the “kula ring” cycle of exchange of the Trobriand Islanders and the potlatch of the Kwakiutl of the Pacific coast of North America. Mauss argued that apparently irrational forms of economic consumption made sense when they were properly understood, as modes of social competition regulated by strict and universal rules of reciprocity.

Social and cultural anthropology

A distinctive “social” or “cultural” anthropology emerged in the 1920s. It was associated with the social sciences and linguistics, rather than with human biology and archaeology. In Britain in particular social anthropologists came to regard themselves as comparative sociologists, but the assumption persisted that anthropologists were primarily concerned with contemporary hunter-gatherers or pastoralists, and in practice evolutionary ways of thinking may often be discerned below the surface of functionalist argument that represents itself as ahistorical. A stream of significant monographs and comparative studies appeared in the 1930s and ’40s that described and classified the social structures of what were termed tribal societies. In African Political Systems (1940), Meyer Fortes and Edward Evans-Pritchard proposed a triadic classification of African polities. Some African societies (e.g., the San) were organized into kin-based bands. Others (e.g., the Nuer and the Tallensi) were federations of unilineal descent groups, each of which was associated with a territorial segment. Finally, there were territorially based states (e.g., those of the Tswana of southern Africa and the Kongo of central Africa, or the emirates of northwestern Africa), in which kinship and descent regulated only domestic relationships. Kin-based bands lived by foraging, lineage-based societies were often pastoralists, and the states combined agriculture, pastoralism, and trade. In effect, this was a transformation of the evolutionist stages into a synchronic classification of types. Though speculations about origins were discouraged, it was apparent that the types could easily be rearranged in a chronological sequence from the least to the most sophisticated.

There were similar attempts to classify systems of kinship and marriage, the most famous being that of the French anthropologist Claude Lévi-Strauss. In 1949 he presented a classification of marriage systems from diverse localities, again within the framework of an implicit evolutionary series. The crucial evolutionary moment was the introduction of the incest taboo, which obliged men to exchange their sisters and daughters with other men in order to acquire wives for themselves and their sons. These marriage exchanges in turn bound family groups together into societies. In societies organized by what Lévi-Strauss termed “elementary systems” of kinship and marriage, the key social units were exogamous descent groups. He represented the Australian Aboriginals as the most fully realized example of an elementary system, while most of the societies with complex kinship systems were to be found in the modern world, in complex civilizations.

American anthropology since the 1950s

In the United States a “culture-and-personality” school developed that drew rather on new movements in psychology (particularly psychoanalysis and Gestalt psychology). Later developments in the social sciences resulted in the emergence of a positivist cross-cultural project, associated with George P. Murdock at Yale University, which applied statistical methods to a sample of world cultures and attempted to establish universal functionalist relationships between forms of marriage, descent systems, property relationships, and other variables. Under the influence of the American social theorist Talcott Parsons, the anthropologists at Harvard University were drawn into team projects with sociologists and psychologists. They came to be regarded as the specialists in the study of “culture” within the framework of an interdisciplinary social science.

In the 1950s and ’60s, evolutionist ideas gained fresh currency in American anthropology, where they were cast as a challenge to the relativism and historical particularism of the Boasians. Some of the new evolutionists (led by Leslie White) reclaimed the abandoned territory of Victorian social theory, arguing for a coherent world history of human development, through a succession of stages, from a common early base. The more developed a society, the more complex its organization and the more energy it consumed. White believed that energy consumption was the gauge of cultural advance. Another tendency, led by Julian Steward, argued rather for an evolutionism that was more directly Darwinian in inspiration. Cultural practices were to be treated as modes of adaptation to specific environmental challenges. More skeptical than White about traditional models of unilineal evolution, Steward urged the study of particular evolutionary processes within enduring culture areas, in which societies with a common origin were exposed to similar ecological constraints. Students of White and Steward, including Marshall Sahlins, revived classic evolutionist questions about the origins of the state and the consequences of technological progress.

The institutional development of anthropology in Europe was strongly influenced by the existence of overseas empires, and in the aftermath of World War II anthropologists were drawn into programs in the so-called developing countries. In the United States, anthropologists had traditionally studied the native peoples of North and Central America. During World War II, however, they were called upon to apply their expertise to assist the war effort, along with other social scientists. As the United States became increasingly influential in the world, in the aftermath of the war, the profession grew explosively. In the 1950s and ’60s, important field studies were carried out by American ethnographers working in Indonesia, in East and West Africa, and in the many societies in the South Seas that had been brought under direct or indirect American control as a result of the war in the Pacific.

In the view of some critics, social and cultural anthropology was becoming, in effect, a Western social science that specialized in the study of colonial and postcolonial societies. The war in Vietnam fueled criticism of American engagement and precipitated a radical shift in American anthropology. There was general disenchantment with the project of “modernizing” the new states that had emerged after World War II, and many American anthropologists began to turn away from the social sciences.

American anthropology divided between two intellectual tendencies. One school, inspired by modern developments in genetics, looked for biological determinants of human cultures and sought to revive the traditional alliance between cultural anthropology and biological anthropology. Another school insisted that cultural anthropology should aim to interpret other cultures rather than to seek laws of cultural development or cultural integration and that it should therefore situate itself within the humanities rather than in the biological sciences or the social sciences.

Clifford Geertz was the most influential proponent of an “interpretive” anthropology. This represented a movement away from biological frameworks of explanation and a rejection of sociological or psychological preoccupations. The ethnographer was to focus on symbolic communications, and so rituals and other cultural performances became the main focus of research. Sociological and psychological explanations were left to other disciplines. In the next generation, a radically relativist version of Geertz’s program became influential. It was argued that cultural consensus is rare and that interpretations are therefore always partial. Cultural boundaries are provisional and uncertain, identities fragile and fabricated. Consequently ethnographers should represent a variety of discordant voices, not try to identify a supposedly normative cultural view. In short, it was an illusion that objective ethnographic studies could be produced and reliable comparisons undertaken.

European anthropology since the 1950s

In Europe the social science program remained dominant, though it was revitalized by a new concern with social history. Some European social scientists became leaders of social thought, among them Pierre Bourdieu, Mary Douglas, Louis Dumont, Ernest Gellner, and Claude Lévi-Strauss. Elsewhere, particularly in some formerly colonial countries in Latin America, Asia, and Africa, local traditions of anthropology established themselves. While anthropologists in these countries were responsive to theoretical developments in the traditional centres of the discipline, they were also open to other intellectual currents, because they were typically engaged in debates with specialists from other fields about developments in their own countries.

Empirical research flourished despite the theoretical diversity. Long-term fieldwork was now commonly backed up by historical investigations, and ethnography came to be regarded by many practitioners as the core activity of social and cultural anthropology. In the second half of the 20th century, the ethnographic focus of anthropologists changed decisively. The initial focus had been on contemporary hunter-gatherers or pastoralists. Later, ethnographers specialized in the study of formerly colonized societies, including the complex villages and towns of Asia. From the 1970s fieldwork began increasingly to be carried out in European societies and among ethnic minorities, church communities, and other groups in the United States. In the formerly colonized societies, local anthropologists began to dominate ethnographic research, and community leaders increasingly insisted on controlling the agenda of field-workers.

The liveliest intellectual developments were perhaps to be found beyond the mainstream. Fresh specializations emerged, notably the anthropology of women in the 1970s and, in the following decades, medical anthropology, psychological anthropology, visual anthropology, the anthropology of music and dance, and demographic anthropology. The anthropology of the 21st century was polycentric and cosmopolitan, and it was not entirely at home among the biological or social sciences or in the humanities.

Additional Information

Anthropology is the study of what makes us human.

To understand the full sweep and complexity of cultures across all of human history, anthropology draws and builds upon knowledge from the social and biological sciences as well as the humanities and physical sciences.

Anthropology takes a broad approach to understanding the many different aspects of the human experience. Some anthropologists consider what makes up our biological bodies and genetics, as well as our bones, diet, and health. Others look to the past to see how human groups lived hundreds or thousands of years ago and what was important to them. Around the world, they observe communities as they exist today, to understand the practices of different groups of people from an insider’s perspective. And they study how people use language, make meaning, and organize social action in all social groups and contexts.

In the community of anthropologists in the United States, these four fields—human biology, archaeology, cultural anthropology, and linguistics—are understood to be the pillars on which the whole discipline rests. Any individual anthropologist will probably specialize in one or two of these areas but have general familiarity with them all.

We understand these varied approaches to complement one another and give a well-rounded picture not only of what we all share as humans, but also of our rich diversity across time, space, and social settings. For example, everyone needs to eat, but people eat different foods and get food in different ways, so anthropologists look at how different groups of people get food, prepare it, and share it. They look at the meaning of different food traditions, such as what makes a dish appropriate for a special occasion. They focus on the intersection of culture and biology to understand what food is available in a community, why people make the choices they do, and how these choices relate to health and well-being. They compare these practices with others around the world, as well as what they can learn from the ancient archaeological record. And they use these insights to work toward a world where everyone has enough to eat and traditional foodways are celebrated and maintained.

More Information

Anthropology is the scientific study of humanity, concerned with human behavior, human biology, cultures, societies, and linguistics, in both the present and past, including archaic humans. Social anthropology studies patterns of behavior, while cultural anthropology studies cultural meaning, including norms and values. The term sociocultural anthropology is commonly used today. Linguistic anthropology studies how language influences social life. Biological or physical anthropology studies the biological development of humans.

Archaeology, often termed the “anthropology of the past,” studies human activity through investigation of physical evidence. It is considered a branch of anthropology in North America and Asia, while in Europe archaeology is viewed as a discipline in its own right or grouped under related disciplines such as history and palaeontology.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

