Math Is Fun Forum


#1851 2023-07-29 14:28:03

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,351

Re: Miscellany

1854) Warehouse

Gist

A warehouse is a large building where raw materials or manufactured goods are stored until they are exported to other countries or distributed to shops to be sold.

Details

A warehouse is a building for storing goods. Warehouses are used by manufacturers, importers, exporters, wholesalers, transport businesses, customs, etc. They are usually large plain buildings in industrial parks on the outskirts of cities, towns, or villages.

Warehouses usually have loading docks to load and unload goods from trucks. Sometimes warehouses are designed for the loading and unloading of goods directly from railways, airports, or seaports. They often have cranes and forklifts for moving goods, which are usually placed on ISO standard pallets and then loaded into pallet racks. Stored goods can include any raw materials, packing materials, spare parts, components, or finished goods associated with agriculture, manufacturing, and production. In India and Hong Kong, a warehouse may be referred to as a "godown". There are also godowns in the Shanghai Bund.

History:

Prehistory and ancient history

A warehouse can be defined functionally as a building in which to store bulk produce or goods (wares) for commercial purposes. The built form of warehouse structures throughout time depends on many contexts: materials, technologies, sites, and cultures.

In this sense, the warehouse postdates the need for communal or state-based mass storage of surplus food. Prehistoric civilizations relied on family- or community-owned storage pits, or 'palace' storerooms, such as at Knossos, to protect surplus food. The archaeologist Colin Renfrew argued that gathering and storing agricultural surpluses in Bronze Age Minoan 'palaces' was a critical ingredient in the formation of proto-state power.

The need for warehouses developed in societies in which trade reached a critical mass requiring storage at some point in the exchange process. This was highly evident in ancient Rome, where the horreum (pl. horrea) became a standard building form. The most studied examples are in Ostia, the port city that served Rome. The Horrea Galbae, a warehouse complex on the road towards Ostia, demonstrates that these buildings could be substantial, even by modern standards. Galba's horrea complex contained 140 rooms on the ground floor alone, covering an area of some 225,000 square feet (21,000 sq m). As a point of reference, less than half of U.S. warehouses today are larger than 100,000 square feet (9290 sq m).

Medieval Europe

The need for a warehouse implies having quantities of goods too big to be stored in a domestic storeroom. But as attested by legislation concerning the levy of duties, medieval merchants across Europe commonly kept goods in their large household storerooms, often on the ground floor or in cellars. An example is the Fondaco dei Tedeschi, the substantial quarters of German traders in Venice, which combined a dwelling, warehouse, market and quarters for travellers.

From the Middle Ages on, dedicated warehouses were constructed around ports and other commercial hubs to facilitate large-scale trade. The warehouses of the trading port Bryggen in Bergen, Norway (now a World Heritage site), demonstrate characteristic European gabled timber forms dating from the late Middle Ages, though what remains today was largely rebuilt in the same traditional style following great fires in 1702 and 1955.

Industrial revolution

During the industrial revolution of the mid-18th century, the function of warehouses evolved and became more specialised. The mass production of goods launched in the 18th and 19th centuries fuelled the development of larger and more specialised warehouses, usually located close to transport hubs on canals, at railways and portside. Specialisation of tasks is characteristic of the factory system, which developed in British textile mills and potteries in the mid-to-late 1700s. Factory processes sped up work and deskilled labour, bringing new profits to capital investment.

Warehouses also fulfill a range of commercial functions besides simple storage, exemplified by Manchester's cotton warehouses and Australian wool stores: receiving, stockpiling and dispatching goods; displaying goods for commercial buyers; and packing, checking and labelling orders before dispatching them.

The utilitarian architecture of warehouses responded quickly to emerging technologies. Before and into the nineteenth century, the basic European warehouse was built of load-bearing masonry walls or heavy-framed timber with a suitable external cladding. Inside, heavy timber posts supported timber beams and joists for the upper levels, rarely more than four to five stories high.

A gabled roof was conventional, with a gate in the gable facing the street, rail lines or port for a crane to hoist goods into the window-gates on each floor below. Convenient access for road transport was built-in via very large doors on the ground floor. If not in a separate building, office and display spaces were located on the ground or first floor.

Technological innovations of the early 19th century changed the shape of warehouses and the work performed inside them: cast iron columns and later, moulded steel posts; saw-tooth roofs; and steam power. All (except steel) were adopted quickly and were in common use by the middle of the 19th century.

Strong, slender cast iron columns began to replace masonry piers or timber posts to carry levels above the ground floor. As modern steel framing developed in the late 19th century, its strength and constructibility enabled the first skyscrapers. Steel girders replaced timber beams, increasing the span of internal bays in the warehouse.

The saw-tooth roof brought natural light to the top story of the warehouse. It transformed the shape of the warehouse, from the traditional peaked hip or gable to an essentially flat roof form that was often hidden behind a parapet. Warehouse buildings now became strongly horizontal. Inside the top floor, the vertical glazed pane of each saw-tooth enabled natural lighting over displayed goods, improving buyer inspection.

Hoists and cranes driven by steam power expanded the capacity of manual labour to lift and move heavy goods.

20th century

Two new power sources, hydraulics and electricity, reshaped warehouse design and practice at the end of the 19th century and into the 20th century.

Public hydraulic power networks were constructed in many large industrial cities around the world in the 1870s-80s, exemplified by Manchester. They were highly effective for powering cranes and lifts, whose application in warehouses served taller buildings and enabled new labour efficiencies.

Public electricity networks emerged in the 1890s. They were used at first mainly for lighting and soon to electrify lifts, making possible taller, more efficient warehouses. It took several decades for electrical power to be distributed widely throughout cities in the western world.

20th-century technologies made warehousing ever more efficient. Electricity became widely available and transformed lighting, security, lifting, and transport from the 1900s. The internal combustion engine, developed in the late 19th century, was installed in mass-produced vehicles from the 1910s. It not only reshaped transport methods but enabled many applications as a compact, portable power plant, wherever small engines were needed.

The forklift truck was invented in the early 20th century and came into wide use after World War II. Forklifts transformed the possibilities of multi-level pallet racking of goods in taller, single-level steel-framed buildings for higher storage density. The forklift, and its load fixed to a uniform pallet, enabled the rise of logistic approaches to storage in the later 20th century.

Always a functional building, the warehouse began in the late 20th century to adapt to standardization, mechanization, technological innovation, and changes in supply chain methods. In the 21st century we are witnessing the next major development in warehousing: automation.

Warehouse layout

A good warehouse layout consists of five areas (a minimal sketch of the flow follows the list):

Loading and unloading area - This is where goods are loaded onto and unloaded from trucks. It may be part of the building or separate from it.
Reception area - Also known as the staging area, this is where incoming goods are processed, reviewed, and sorted before proceeding to storage.
Storage area - This is the area of the warehouse where goods sit while awaiting dispatch. It can be further subdivided into static storage for goods that will take longer to be dispatched and dynamic storage for goods that sell faster.
Picking area - This is where goods being dispatched are prepared or modified before being shipped.
Shipping area - Once goods have been prepared, they proceed to the packing or shipping area, where they await the actual shipping.
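
The flow implied by these five areas can be sketched in a few lines of Python. This is only an illustrative model: the Area enum and the inbound/outbound routes are hypothetical names chosen to mirror the list above, not part of any warehouse standard.

from enum import Enum, auto

class Area(Enum):
    """The five areas of a typical warehouse layout (see list above)."""
    LOADING = auto()    # goods loaded onto / unloaded from trucks
    RECEPTION = auto()  # staging: incoming goods processed, reviewed, sorted
    STORAGE = auto()    # static (slow-moving) or dynamic (fast-moving) stock
    PICKING = auto()    # outgoing goods prepared or modified for shipment
    SHIPPING = auto()   # packed goods awaiting the actual shipping

# Hypothetical routes through the areas: goods arrive, are staged and stored,
# then are picked, packed, and leave through the loading area again.
INBOUND = [Area.LOADING, Area.RECEPTION, Area.STORAGE]
OUTBOUND = [Area.STORAGE, Area.PICKING, Area.SHIPPING, Area.LOADING]

def trace(route):
    return " -> ".join(area.name.lower() for area in route)

print("inbound: ", trace(INBOUND))   # loading -> reception -> storage
print("outbound:", trace(OUTBOUND))  # storage -> picking -> shipping -> loading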



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1852 2023-07-30 14:13:04

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,351

Re: Miscellany

1855) Brainstorming

Gist

Brainstorming is a cooperative approach in which a number of people collectively agree upon a solution after all of their ideas are brought forth and discussed.

Summary

Brainstorming is a group problem-solving method that involves the spontaneous contribution of creative ideas and solutions. This technique requires intensive, freewheeling discussion in which every member of the group is encouraged to think aloud and suggest as many ideas as possible based on their diverse knowledge.

Brainstorming combines an informal approach to problem-solving with lateral thinking, which is a method for developing new concepts to solve problems by looking at them in innovative ways. Some of these ideas can be built into original, creative solutions to a problem, while others can generate additional ideas.

Some experts believe that brainstorming is better than conventional group interaction, which might be hindered by groupthink. Groupthink is a phenomenon that occurs when a team's need for consensus overshadows the judgment of individual group members.

Although group brainstorming is frequently better for generating ideas than normal group problem-solving, several studies have shown that individual brainstorming can produce better ideas than group brainstorming. This can occur because group members pay so much attention to others’ ideas that they forget or do not create their own ideas. Also, groups do not always adhere to good brainstorming practices.

During brainstorming sessions, participants should avoid criticizing or rewarding ideas in order to explore new possibilities and break down incorrect answers. Once the brainstorming session is over, the evaluation session (which includes analysis and discussion of the aired ideas) begins, and solutions can be crafted using conventional means.

Common methods of brainstorming include mind mapping, which involves creating a diagram with a goal or key concept in the center with branches showing subtopics and related ideas; writing down the steps needed to get from Point A to Point B; "teleporting" yourself to a different time and place; putting yourself in other people’s shoes to imagine how they might solve a problem; and "superstorming," or using a hypothetical superpower such as X-ray vision to solve a problem. 

Details

Brainstorming is a group creativity technique by which efforts are made to find a conclusion for a specific problem by gathering a list of ideas spontaneously contributed by its members.

In other words, brainstorming is a situation where a group of people meet to generate new ideas and solutions around a specific domain of interest by removing inhibitions. People are able to think more freely and they suggest as many spontaneous new ideas as possible. All the ideas are noted down without criticism and after the brainstorming session the ideas are evaluated.

The term was popularized by Alex Faickney Osborn in the classic work Applied Imagination (1953).

History

In 1939, advertising executive Alex F. Osborn began developing methods for creative problem-solving. He was frustrated by employees' inability to develop creative ideas individually for ad campaigns. In response, he began hosting group-thinking sessions and discovered a significant improvement in the quality and quantity of ideas produced by employees. He initially termed the process "organized ideation", but participants later coined the term "brainstorm sessions", from the idea of using "the brain to storm a problem".

While developing his concept, Osborn began writing on creative thinking; the first notable book in which he mentioned the term brainstorming was How to Think Up (1942).

Osborn outlined his method in the subsequent book Your Creative Power (1948), in chapter 33, "How to Organize a Squad to Create Ideas".

One of Osborn's key recommendations was for all the members of the brainstorming group to be provided with a clear statement of the problem to be addressed prior to the actual brainstorming session. He also explained that the guiding principle is that the problem should be simple and narrowed down to a single target. Brainstorming is not believed to be effective for complex problems, because opinions may differ over whether and how such problems should be restructured; while the process can address parts of such problems, tackling all of them may not be feasible.

Additional Information

Brainstorming is a problem-solving technique in which a group of people freely and spontaneously present their ideas, build upon each other's visions and intuitions, until something new and unique emerges. The technique is designed so that critical and negative thinking, usual in group settings, is temporarily suspended so that ideas can flow freely and may be expressed without embarrassment.

A.F. Osborn is credited with inventing the technique in 1941. Osborn published his ideas in 1957 in a book entitled Applied Imagination. The well-known author Arthur Koestler (famous for his Darkness at Noon) laid out the manner in which humor, invention, and artistic creativity all result from unsuspected linkages between seemingly different ideas and images, a phenomenon used in brainstorming. That book was entitled The Act of Creation.

Brainstorming is widely applicable to the solution of any problem whatever it might be. It is used in problems related to concrete physical objects as well as very abstract administrative procedures.

Three critical factors determine the success of a brainstorming effort. First, the group must strive to produce a large quantity of ideas to increase the likelihood that the best solution will emerge. Second, the group must be certain to withhold judgment of the ideas as they are expressed. Third, the group leader must create a positive environment for the brainstorming session and channel the creative energies of all the members in the same direction.

During the brainstorming session, meanwhile, participants should keep in mind the following:

* The aim of the session is to generate a large quantity of ideas. Self-censorship is counterproductive. A brainstorming session is successful when the sheer quantity of ideas forces participants to move beyond preconceived notions and explore new territory.

* Discussions of the relative merits of ideas should not be undertaken as they are voiced; this slows the process and discourages creativity.

* Seniority and rank should be ignored during the session so that all participants feel equal and feel encouraged to be creative.

* A lively atmosphere should be maintained, and when activity lags, someone should strive to introduce a novel and surprising perspective. A brainstorming team might, for example, shift the viewpoint and ask: How would a five-year-old look at this problem?

After the brainstorming portion of the meeting has been completed, the leader or group should arrange all the ideas into related categories to prioritize and evaluate them. These lists can then be evaluated and modified by the group as needed in order to settle on a course of action to pursue. After the conclusion of the meeting, it may be helpful to send participants a copy of the idea lists to keep them thinking about the issue under discussion. The group moderator may ask members to report back later on ideas they considered worthy of action, and to offer any ideas they might have about implementation.

There are a number of variations on the basic theme of brainstorming. In "brainwriting" the members of a group write their ideas down on paper and then exchange their lists with others. When group members expand upon each other's ideas in this way, it frequently leads to innovative new approaches. Another possibility is to brainstorm via a bulletin board, which can be hung in a central office location or posted on a computer network. The bulletin board centers upon a basic topic or question, and people are encouraged to read others' responses and add their own. One benefit of this approach is that it keeps the problem at the forefront of people's minds. Finally, it is also possible to perform solo brainstorming. In this approach, a person writes down at least one idea per day on an index card. Eventually he or she can look at all the cards, shuffle them around, and combine the ideas.


#1853 2023-07-31 14:29:08

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,351

Re: Miscellany

1856) Trade

Gist

Trade is the activity of buying and selling, or exchanging, goods and/or services between people or countries.

Summary

Trade involves the transfer of goods and services from one person or entity to another, often in exchange for money. Economists refer to a system or network that allows trade as a market.

An early form of trade, barter, saw the direct exchange of goods and services for other goods and services, i.e. trading things without the use of money. Modern traders generally negotiate through a medium of exchange, such as money. As a result, buying can be separated from selling, or earning. The invention of money (and letters of credit, paper money, and non-physical money) greatly simplified and promoted trade. Trade between two traders is called bilateral trade, while trade involving more than two traders is called multilateral trade.

In one modern view, trade exists due to specialization and the division of labour, a predominant form of economic activity in which individuals and groups concentrate on a small aspect of production, but use their output in trade for other products and needs. Trade exists between regions because different regions may have a comparative advantage (perceived or real) in the production of some tradable commodity, including the production of natural resources scarce or limited elsewhere. For example, different regions' sizes may encourage mass production. In such circumstances, trading at market prices between locations can benefit both locations. Different types of traders may specialize in trading different kinds of goods; for example, the spice trade and grain trade have both historically been important in the development of a global, international economy.

Retail trade consists of the sale of goods or merchandise from a fixed location (such as a department store, boutique, or kiosk), online, or by mail, in small or individual lots for direct consumption or use by the purchaser. Wholesale trade is the traffic in goods sold as merchandise to retailers; to industrial, commercial, institutional, or other professional business users; or to other wholesalers and related subordinated services.

Historically, openness to free trade substantially increased in some areas from 1815 until the outbreak of World War I in 1914. Trade openness increased again during the 1920s but collapsed (in particular in Europe and North America) during the Great Depression of the 1930s. Trade openness increased substantially again from the 1950s onward (albeit with a slowdown during the oil crisis of the 1970s). Economists and economic historians contend that current levels of trade openness are the highest they have ever been.

Details:

What Is Trade?

Trade is the voluntary exchange of goods or services between different economic actors. Since the parties are under no obligation to trade, a transaction will only occur if both parties consider it beneficial to their interests.

Trade can have more specific meanings in different contexts. In financial markets, trade refers to purchasing and selling securities, commodities, or derivatives. Free trade means international exchanges of products and services without obstruction by tariffs or other trade barriers.

KEY TAKEAWAYS

* Trade refers to the voluntary exchange of goods or services between economic actors.
* Since transactions are consensual, trade is generally considered to benefit both parties.
* In finance, trading refers to purchasing and selling securities or other assets.
* In international trade, the comparative advantage theory states that trade benefits all parties.

Most classical economists advocate for free trade, but some development economists believe protectionism has advantages.

How Trade Works

As a generic term, trade can refer to any voluntary exchange, from selling baseball cards between collectors to multimillion-dollar contracts between companies.

In macroeconomics, trade usually refers to international trade, the system of exports and imports that connects the global economy. A product sold to the global market is an export, and a product bought from the global market is an import. Exports can account for a significant source of wealth for well-connected economies.

International trade results in increased efficiency and allows countries to benefit from foreign direct investment (FDI) by businesses in other countries. FDI can bring foreign currency and expertise into a country, raising local employment and skill levels. For investors, FDI offers company expansion and growth, eventually leading to higher revenues.

A trade deficit is a situation where a country spends more on aggregate imports from abroad than it earns from its aggregate exports. A trade deficit represents an outflow of domestic currency to foreign markets. This may also be referred to as a negative balance of trade (BOT).
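
As a round-number illustration (the figures are invented for the example): a country that exports $400 billion of goods and services while importing $500 billion has

balance of trade = exports − imports = 400 − 500 = −100,

that is, a $100 billion trade deficit, or negative BOT.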

International Trade

International trade occurs when countries put goods and services on the international market and trade with each other. Without trade between different countries, many modern amenities people expect to have would not be available.

Comparative Advantage

Trade seems to be as old as civilization itself—ancient civilizations traded with each other for goods they could not produce for themselves due to climate, natural resources, or other inhibiting factors. The ability of two countries to produce items the other could not and mutually exchange them led to the principle of comparative advantage.

This principle, commonly known as the Law of Comparative Advantage, is popularly attributed to English political economist David Ricardo and his book On the Principles of Political Economy and Taxation in 1817. However, Ricardo's mentor James Mill likely originated the analysis.

Ricardo famously showed how England and Portugal benefited by specializing and trading according to their comparative advantages. In this case, Portugal was able to make wine at a low cost, while England was able to manufacture cloth cheaply. By focusing on their comparative advantages, both countries could consume more goods through trade than they could in isolation.
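
Ricardo's illustration reduces to a comparison of opportunity costs, which the short Python sketch below works through. The labour-hour figures are Ricardo's classic numbers; the code itself is only an illustrative calculation, not something from the original text.

# Ricardo's classic example: labour hours needed to produce one unit.
hours = {
    "England":  {"cloth": 100, "wine": 120},
    "Portugal": {"cloth": 90,  "wine": 80},
}

# Opportunity cost of one unit of wine, measured in units of cloth forgone.
for country, h in hours.items():
    print(f"{country}: 1 wine costs {h['wine'] / h['cloth']:.2f} cloth")

# England:  1 wine costs 1.20 cloth
# Portugal: 1 wine costs 0.89 cloth
# Portugal is more efficient at both goods in absolute terms, yet its
# comparative advantage is in wine and England's is in cloth, so both
# countries consume more by specializing and trading.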

The first long-distance trade is thought to have occurred 5,000 years ago between Mesopotamia and the Indus Valley.

The theory of comparative advantage helps to explain why protectionism is often counterproductive. While a country can use tariffs and other trade barriers to benefit specific industries or interest groups, these policies also prevent their consumers from enjoying the benefits of cheaper goods from abroad. Eventually, that country would be economically disadvantaged relative to countries that conduct trade.

Example of Comparative Advantage

Comparative advantage is a country's ability to produce something at a lower opportunity cost than its trading partners, not necessarily better in absolute terms. Whatever the item is, it becomes a powerful bargaining tool because it can be used as a trade incentive for trading partners.

When two countries trade, they can each have a comparative advantage and benefit each other. For instance, imagine a country with limited natural resources. One day, a shepherd stumbles upon an abundant, cheap, and renewable energy source occurring only within that country's borders, one that could provide enough clean energy for its neighboring countries for centuries. As a result, this country would suddenly have a comparative advantage it could market to trading partners.

Imagine a neighboring country has a booming lumber trade and can manufacture building supplies much cheaper than the country with the new energy source, but it consumes a lot of energy to do so. The two countries have comparative advantages that can be traded beneficially for both.

Benefits of Trade

Because countries are endowed with different assets and natural resources, some may produce the same good more efficiently and sell it more cheaply than others. Countries that trade can take advantage of the lower prices available in other countries.

Here are some other benefits of trade:

* It increases a nation's global standing
* It raises a nation's profitability
* It creates jobs in import and export sectors
* It expands product variety
* It encourages foreign investment in a country

Criticisms of Trade

While the law of comparative advantage is a regular feature of introductory economics, many countries try to shield local industries with tariffs, subsidies, or other trade barriers. One possible explanation comes from what economists call rent-seeking. Rent-seeking occurs when one group organizes and lobbies the government to protect its interests.

For example, business owners might pressure their country's government for tariffs to protect their industry from inexpensive foreign goods, which could cost the livelihoods of domestic workers. Even if the business owners understand trade benefits, they could be reluctant to sacrifice a lucrative income stream.

Moreover, there are strategic reasons for countries to avoid excessive reliance on free trade. For example, a country that relies on trade might become too dependent on the global market for critical goods.

Some development economists have argued for tariffs to help protect infant industries that cannot yet compete on the global market. As those industries grow and mature, they are expected to become a comparative advantage for their country.

What Are the Types of Trade?

Generally, there are two types of trade: domestic and international. Domestic trade occurs between parties in the same country. International trade occurs between two or more countries. A country that places goods and services on the international market is exporting those goods and services. One that purchases goods and services from the international market is importing them.

What Is the Importance of Trade?

Trade is essential for many reasons, but some of the most commonly cited ones are lowering prices, becoming or remaining competitive, developing relationships, fueling growth, reducing inflation, encouraging investment, and supporting better-paying jobs.

What Are the Advantages and Disadvantages of Trade?

Trade offers many advantages, such as increasing quality of life and fueling economic growth. However, trade can be used politically through embargoes and tariffs to manipulate trade partners. It also comes with language barriers, cultural differences, and restrictions on what can be imported or exported. Additionally, intellectual property theft becomes an issue because regulations and enforcement methods vary across borders.

The Bottom Line

Trade is the exchange of goods and services between parties for mutually beneficial purposes. People and countries trade to improve their circumstances and quality of life. It also develops relationships between governments and fosters friendship and trust.


#1854 2023-08-01 14:09:01

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,351

Re: Miscellany

1857) Scholarship

Gist

Scholarship is an amount of money given by a school, college, university, or other organization to pay for the studies of a person with great ability but little money.

In general, such awards are known as scholarships, fellowships, or loans; in European usage, a small scholarship is an exhibition, and a bursary is a sum granted to a needy student. Many awards are in the nature of long-term loans with low rates of interest.

Details

A scholarship is a form of financial aid awarded to students for further education. Generally, scholarships are awarded based on a set of criteria such as academic merit, diversity and inclusion, athletic skill, and financial need.

Scholarship criteria usually reflect the values and goals of the donor of the award, and while scholarship recipients are not required to repay scholarships, the awards may require that the recipient continue to meet certain requirements during their period of support, such as maintaining a minimum grade point average or engaging in a certain activity (e.g., playing on a school sports team for athletic scholarship holders).

Scholarships also range in generosity; some cover partial tuition, while others offer a 'full ride', covering all tuition, accommodation, and other expenses.

Some prestigious, highly competitive scholarships are well-known even outside the academic community, such as the Fulbright Scholarship and the Rhodes Scholarship at the graduate level, and the Robertson, Morehead-Cain, and Jefferson Scholarships at the undergraduate level.

Scholarships vs. grants

While the terms scholarship and grant are frequently used interchangeably, they are distinctly different. Where grants are offered based exclusively on financial need, scholarships may have a financial need component but rely on other criteria as well.

* Academic scholarships typically use a minimum grade-point average or standardized test score such as the ACT or SAT to narrow down awardees.
* Athletic scholarships are generally based on a student's athletic performance and are used as a tool to recruit high-performing athletes for a school's athletic teams.
* Merit scholarships can be based on a number of criteria, including performance in a particular school subject or club participation or community service.

A federal Pell Grant can be awarded to someone planning to receive their undergraduate degree and is solely based on their financial needs.

Types

The most common scholarships may be classified as:

* Merit-based: These awards are based on a student's academic, artistic, athletic, or other abilities, and often factor in an applicant's extracurricular activities and community service record. Most such merit-based scholarships are paid directly by the institution the student attends, rather than issued directly to the student.
* Need-based: Some private need-based awards are confusingly called scholarships and require the results of a FAFSA (which determines the family's expected contribution). However, scholarships are often merit-based, while grants tend to be need-based.
* Student-specific: These are scholarships for which applicants must initially qualify based upon gender, race, religion, family or medical history, or many other student-specific factors. Minority scholarships are the most common awards in this category. For example, students in Canada may qualify for a number of Indigenous scholarships, whether they study at home or abroad. The Gates Millennium Scholars Program is another minority scholarship funded by Bill and Melinda Gates for excellent African American, American Indian, Asian Pacific Islander American, and Latino students who enroll in college.
* Career-specific: These are scholarships a college or university awards to students who plan to pursue a specific field of study. Often, the most generous awards go to students who pursue careers in high-need areas, such as education or nursing. Many schools in the United States give future nurses full scholarships to enter the field, especially if the student intends to work in a high-need community.
* College-specific: College-specific scholarships are offered by individual colleges and universities to highly qualified applicants. These scholarships are given on the basis of academic and personal achievement. Some scholarships have a "bond" requirement. Recipients may be required to work for a particular employer for a specified period of time or to work in rural or remote areas; otherwise, they may be required to repay the value of the support they received from the scholarship. This is particularly the case with education and nursing scholarships for people prepared to work in rural and remote areas. The programs offered by the uniformed services of the United States (Army, Navy, Marine Corps, Air Force, Coast Guard, National Oceanic and Atmospheric Administration Commissioned Officer Corps, and Public Health Service Commissioned Corps) sometimes resemble such scholarships.
* Athletic: Awarded to students with exceptional skill in a sport. Often this is so that the student will be available to attend the school or college and play the sport on their team, although in some countries government funded sports scholarships are available, allowing scholarship holders to train for international representation. School-based athletics scholarships can be controversial, as some believe that awarding scholarship money for athletic rather than academic or intellectual purposes is not in the institution's best interest.
* Brand: These scholarships are sponsored by a corporation that is trying to gain attention to their brand, or a cause. Sometimes these scholarships are referred to as branded scholarships. The Miss America beauty pageant is a famous example of a brand scholarship.
* Creative contest: These scholarships are awarded to students based on a creative submission. Contest scholarships are also called mini project-based scholarships, where students can submit entries based on unique and innovative ideas.
* "Last dollar": can be provided by private and government-based institutions, and are intended to cover the remaining fees charged to a student after the various grants are taken into account. To prohibit institutions from taking last dollar scholarships into account, and thereby removing other sources of funding, these scholarships are not offered until after financial aid has been offered in the form of a letter. Furthermore, last dollar scholarships may require families to have filed taxes for the most recent year, received their other sources of financial aid, and not yet received loans.
* Open: a scholarship open to any applicant.


#1855 2023-08-02 14:01:51

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,351

Re: Miscellany

1858) Gymnasium

Gist

The history of the gymnasium dates back to ancient Greece, where the literal meaning of the Greek word gymnasion was “school for naked exercise.” The gymnasiums were of great significance to the ancient Greeks, and every important city had at least one.

Details

A gym, short for gymnasium, is an indoor venue for exercise and sports. The word is derived from the ancient Greek term "gymnasion". They are commonly found in athletic and fitness centers, and as activity and learning spaces in educational institutions. "Gym" is also slang for "fitness centre", which is often an area for indoor recreation. A "gym" may include or describe adjacent open air areas as well. In Western countries, "gyms" often describe places with indoor or outdoor courts for basketball, hockey, tennis, boxing or wrestling, and with equipment and machines used for physical development training, or to do exercises. In many European countries, Gymnasium (and variations of the word) also can describe a secondary school that prepares students for higher education at a university, with or without the presence of athletic courts, fields, or equipment.

Overview

Gymnasium apparatus such as barbells and jumping boards, along with running paths, tennis balls, cricket fields, and fencing areas, are used for exercise. In safe weather, outdoor locations are the most conducive to health. Gyms were popular in ancient Greece. Their curricula included self-defense; gymnastics medica, or physical therapy, to help the sick and injured; and training for physical fitness and sports, from boxing to dancing to skipping rope.

Gymnasiums also had teachers of wisdom and philosophy. Community gymnastic events were held as part of the celebrations during various village festivals. In ancient Greece there was a phrase of contempt: "He can neither swim nor write." After a while, however, Olympic athletes began training in buildings specifically designed for them. Community sports never became as popular among the ancient Romans as they had been among the ancient Greeks; gyms were used more as preparation for military service or spectator sports. During the Roman Empire, the gymnastic art was forgotten. In the Dark Ages there were tournaments of sword fighting and chivalry, and after gunpowder was invented, sword fighting began to be replaced by the sport of fencing, along with schools of dagger fighting, wrestling, and boxing.

In the 18th century, Salzmann, a German clergyman, opened a gym in Thuringia teaching bodily exercises, including running and swimming. Clias and Volker established gyms in London, and in 1825 Dr. Charles Beck, a German immigrant, established the first gymnasium in the United States. It was found that gym pupils lose interest in repeating the same exercises, partly because of age, so variety was introduced, including skating, dancing, and swimming. Some gym activities can be done by 6-to-8-year-olds, while age 16 has been considered mature enough for boxing and horseback riding.

In Ancient Greece, the gymnasion was a locality for both the physical and the intellectual education of young men. The latter meaning, intellectual education, persisted in Greek, German, and other languages to denote a certain type of school providing secondary education, the gymnasium, whereas in English the meaning of physical education pertained to the word 'gym'. The Greek word gymnasion, which means "school for naked exercise", designated a locality for the education of young men, including physical education (gymnastics, i.e. exercise), which was customarily performed naked, as well as bathing and studies. For the Greeks, physical education was considered as important as cognitive learning. Most Greek gymnasia had libraries for use after relaxing in the baths.

Nowadays, a gym is a common area where people of all levels of experience exercise and work their muscles; people doing cardio exercises or Pilates are also a usual sight.

History

The first recorded gymnasiums date back over 3,000 years to ancient Persia, where they were known as zurkhaneh, areas that encouraged physical fitness. The larger Roman baths often had attached fitness facilities, the baths themselves sometimes being decorated with mosaics of local champions of sport. Gyms in Germany were an outgrowth of the Turnplatz, an outdoor space for gymnastics founded by German educator Friedrich Jahn in 1811 and later promoted by the Turners, a nineteenth-century political and gymnastic movement. The first American to open a public gym in the United States using Jahn's model was John Neal of Portland, Maine, in 1827. The first indoor gymnasium in Germany was probably the one built in Hesse in 1852 by Adolph Spiess.

Through worldwide colonization, Great Britain expanded its national interest in sports and games to many countries. In the 1800s, programs were added to school and college curricula that emphasized health, strength, and bodily measure. Sports drawn from European and British cultures thrived as college students and upper-class clubs financed competition. As a result, towns began building playgrounds that furthered interest in sports and physical activity. Early efforts to establish gyms in the United States in the 1820s were documented and promoted by John Neal in the American Journal of Education and The Yankee, helping to establish the American branch of the movement. Later in the century, the Turner movement was founded and continued to thrive into the early twentieth century. The first Turners group was formed in London, Ohio, in 1848. The Turners built gymnasiums in several cities with large German American populations, such as Cincinnati and St. Louis. These gyms were used by adults and youth; for example, a young Lou Gehrig would frequent the Turner gym in New York City with his father.

The Boston Young Men's Christian Union claims to be "America's First Gym". The YMCA first organized in Boston in 1851; ten years later there were some two hundred YMCAs across the country, most of which provided gyms for exercise, games, and social interaction.

The 1920s was a decade of prosperity that witnessed the building of large numbers of public high schools with gymnasiums.

Today, gymnasiums are commonplace in the United States. They are in virtually all U.S. colleges and high schools, as well as almost all middle schools and elementary schools. These facilities are used for physical education, intramural sports, and school gatherings. The number of gyms in the U.S. has more than doubled since the late 1980s.


#1856 2023-08-03 14:15:57

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,351

Re: Miscellany

1859) Backpacking (hiking)

Gist

Backpacking is a recreational activity of hiking while carrying clothing, food, and camping equipment in a pack on the back. Originally, in the early 20th century, backpacking was practiced in the wilderness as a means of getting to areas inaccessible by car or by day hike.

It is the activity of travelling while carrying your clothes and other things that you need in a backpack, usually not spending very much money and staying in places that are not expensive.

Details

Backpacking is the outdoor recreation of carrying gear on one's back while hiking for more than a day. It is often an extended journey and may involve camping outdoors. In North America, tenting is common, where simple shelters and mountain huts, widely found in Europe, are rare. In New Zealand, hiking is called tramping, and tents are used alongside a nationwide network of huts.  Hill walking is equivalent in Britain (but this can also refer to a day walk), though backpackers make use of a variety of accommodation, in addition to camping. Backpackers use simple huts in South Africa.  Trekking and bushwalking are other words used to describe such multi-day trips.

Backpacking as a method of travel is a different activity, which mainly uses public transport during a journey that can last months.

Definition

Backpacking is an outdoor recreation where gear is carried in a backpack. This can include food, water, bedding, shelter, clothing, stove, and cooking kit. Given that backpackers must carry their gear, the total weight of their bag and its contents is a primary concern of backpackers. Backpacking trips range from one night to weeks or months, sometimes aided by planned resupply points, drops, or caches.

Research

Anthropologists argue that carrying weight on the back was likely more common in human evolution than running, and that the capacity to carry loads is a marked difference between humans and other animals.

They hypothesize that humans used running to catch prey, and that humans evolved to carry food and other loads over longer distances.

Fitness benefits

A weighted carry from backpacking taxes the muscles: the load stresses the shoulders, delts, back, abs, obliques, hips, quads, hamstrings, and knees. Research suggests that weighted carries done as exercise help avoid injuries.

Knee loading differs considerably between a man running and a man walking with a backpack. A 175-pound man running without a backpack loads his knees with 1,400 pounds of stress per stride, while a 175-pound man walking with a 30-pound pack loads his knees with 555 pounds of stress per step.
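
Read as multiples of the weight involved, these figures are simple arithmetic on the numbers above (no additional data is assumed):

1,400 ÷ 175 = 8, so unloaded running stresses the knees with about 8 × body weight per stride;
555 ÷ (175 + 30) = 555 ÷ 205 ≈ 2.7, so pack-walking stresses them with only about 2.7 × the total weight per step.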

Research shows humans can safely carry loads under 50 pounds. Research also shows a weighted carry is as beneficial for the cardiovascular system as a light run.

Accommodations

Backpacking camps are usually more spartan than campsites where gear is transported by car or boat. In areas with heavy backpacker traffic, a hike-in campsite might have a fire ring (where permissible), an outhouse, and a wooden bulletin board with a map and information about the trail and area. Many hike-in camps are no more than level patches of ground free of underbrush. In remote wilderness areas hikers must choose their own site; established camps are rare, and the ethos is to "leave no trace" on leaving.

In some regions, varying forms of accommodation exist, from simple log lean-tos to staffed facilities offering escalating degrees of service. Beds, meals, and even drinks may be had at Alpine huts scattered among well-traveled European mountains. Backpackers there can walk from hut to hut without leaving the mountains, while in places like the Lake District or Yorkshire Dales in England hill-walkers descend to stay in youth hostels, farmhouses, or guest houses. Reservations can usually be made in advance and are recommended in the high season.

In the more remote parts of Great Britain, especially Scotland, bothies exist to provide simple (free) accommodation for backpackers. On the French system of long-distance trails, the Grandes Randonnées, backpackers can stay in gîtes d'étape, which are simple hostels provided for walkers and cyclists. Some simple shelters and occasional mountain huts are also provided in North America, including on the Appalachian Trail. Another example is the High Sierra Camps in Yosemite National Park. Long-distance backpacking trails with huts also exist in South Africa, including the 100-km-plus Amatola Trail in the Eastern Cape Province. Backpacking is also popular in the Himalayas (where it is often called trekking), and porters and pack animals are often used there.

Equipment

Backpacking gear depends on the terrain and climate, and on a hiker's plans for shelter (refuges, huts, gites, camping, etc.). It may include:

* A backpack of appropriate size. Backpacks can include frameless, external frame, internal frame, and bodypack styles.
* Clothing and footwear appropriate for the conditions.
* Food and a means to prepare it (stove, utensils, pot, etc.).
* Sleep system such as a sleeping bag and a pad.
* Survival gear.
* A shelter such as a tent, tarp or bivouac sack.
* Water containers and purifiers.
Water

Proper hydration is critical to successful backpacking. Depending on conditions, which include weather, terrain, load, and the hiker's age and fitness, a backpacker may drink 2 to 8 litres (1/2 to 2 gallons), or more, per day. At 1 kilogram (2.2 lb) per litre (1.1 US qt), water is exceptionally heavy, and it is impossible to carry more than a few days' supply. Therefore, hikers often drink from natural water sources, sometimes after filtering or purifying.
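
A quick worked example using the figures above (the trip length and daily intake are assumed for illustration only): a hiker needing 4 litres per day for a 3-day stretch between water sources would have to carry 3 × 4 × 1 kg = 12 kg (about 26 lb) of water alone, which is why drinking from natural sources is usually unavoidable.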

Some hikers will treat water before drinking to protect against waterborne diseases carried by bacteria and protozoa. The chief treatment methods include:

* Boiling
* Treatment with chemicals such as chlorine or iodine
* Filtering (often used with chemical treatments)
* Treatment with ultraviolet light

Water may be stored in bottles or collapsible plastic bladders. Hydration bladders are increasingly popular.

Food

Backpacking is energy intensive. It is essential to bring enough food to maintain both energy and health. The weight of food is an important issue to consider; consequently, items with high food energy, long shelf life, and low mass and volume deliver the most utility. Taste and satisfaction matter to varying degrees for individual hikers as they weigh whether it is worth the effort (and the trade-off against other gear) to carry fresh, heavy, or luxury food items. The shorter the trip and the easier the conditions, the more feasible such treats become.

In many cases, heat, fuel and utensils are used. Small liquid or gas-fueled campstoves and lightweight cooking pots are common. Campfires are sometimes prohibited.

Some backpackers consume dried foods, including many common household foods such as cereal, oatmeal, powdered milk, cheese, crackers, sausage, salami, dried fruit, peanut butter, pasta, and rice. Popular snacks include trail mix (easily prepared at home), nuts, energy bars, chocolate, and other energy-dense foods. Coffee, tea, and cocoa are common beverages. Food is usually packaged in plastic bags, avoiding heavier jars and cans. Dehydrators are popular for drying fruit, jerky, and pre-cooked meals.

Many hikers use freeze-dried precooked entrees for hot meals, quickly reconstituted by adding boiling water. An alternative is Ultra High Temperature (UHT) processed food, which retains its moisture and merely needs heating, for example with a special water-activated chemical reaction.

Specialized cookbooks are available on trailside food and the challenges inherent in making it. Some focus on planning meals and preparing ingredients for short trips; others on the challenges of organizing and preparing meals revolving around the bulk rationing prevalent in extended trail hikes, particularly those with pre-planned food drops.

Additional Information

Backpacking is a recreational activity of hiking while carrying clothing, food, and camping equipment in a pack on the back. Originally, in the early 20th century, backpacking was practiced in the wilderness as a means of getting to areas inaccessible by car or by day hike. It demands physical conditioning and practice, knowledge of camping and survival techniques, and selection of equipment of a minimum weight consistent with safety and comfort. In planning an excursion, the backpacker must take into consideration food and water, terrain, climate, and weather.

Packs hang from the shoulders or are supported by a combination of straps around shoulders and waist or hips. Types range from the frameless rucksack, hung from two straps, and the frame rucksack, in which the pack is attached to a roughly rectangular frame hung on the shoulder straps, to the contour frame pack, with a frame of aluminum or magnesium tubing bent to follow the contour of the back. The Kelty-type pack, which is used with the contour frame, employs a waistband to transfer most of the weight of the pack to the hips.

Modern materials have revolutionized backpack construction, utility, and capacity. The earliest packs were made from canvas, which, while strong and durable, added considerable weight for the backpacker to manage and lacked resistance to the elements. Today’s packs are constructed of nylon or similar lightweight materials and are usually at least partially waterproof.

Clothing is weatherproof and insulating, including shell parkas, insulating underwear, down clothing, windpants, ponchos, sturdy waterproof boots with soles designed for maximum traction, and heavy socks. Tents may be a simple tarpaulin, a plastic sheeting tube, or a two-person nylon mountain tent. Sleeping bags of foam, dacron, or down and air mattresses or foam pads may be carried. Lightweight pots and pans and stoves are specially designed for backpacking; dehydrated food provides stew-type one-pot dishes.

Backpackers must be able to read topographic maps and use a compass; they must also carry emergency food and first-aid equipment and be acquainted with survival techniques.

In the later 20th century, backpacking became associated with travel, especially by students, outside the wilderness.


#1857 2023-08-04 14:35:07

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,351

Re: Miscellany

1860) Gliding

Gist

Gliding is a recreational activity and competitive air sport in which pilots fly unpowered aircraft known as gliders or sailplanes using naturally occurring currents of rising air in the atmosphere to remain airborne. The word soaring is also used for the sport.

Summary

Gliding, also called soaring, is flight in an unpowered heavier-than-air craft. Any engineless aircraft, from the simplest hang glider to a space shuttle on its return flight to the Earth, is a glider. The glider is powered by gravity, which means that it is always sinking through the air. However, when an efficient glider is flown through air that is rising faster than the aircraft's rate of sink, the glider will climb. There are many types of glider, the most efficient of which is the sailplane. Hang gliding and paragliding are specialized forms of gliding.

Pioneers in glider flight and development include the German Otto Lilienthal (1848–96), who was the first to attain predictable and controlled glider flight; the British pilot Percy Pilcher (1866–99); and the Americans Octave Chanute and the Wright brothers. Gliding for sport originated in Germany in 1910; the sailplane was first developed there after World War I, during the time when the Treaty of Versailles prevented the Germans from constructing powered aircraft. International competition began in 1922 and became popular throughout Europe and the United States during the 1930s. Since 1937 the governing body of the sport has been the Fédération Aéronautique Internationale (FAI). During World War II, gliders were used by U.S., British, and German airborne troops. After the war, soaring as a sport extended worldwide, with activity in most continents.

Sailplanes have streamlined bodies and long, narrow wings that give them a combination of a low sink rate and a very flat glide. The controls are similar to those in small airplanes: the rudder is operated by pedals, and the ailerons (which control roll) and the elevators (which control the angle of the aircraft's pitch and thus, indirectly, speed) are operated by a control stick. Sailplanes usually have a single landing wheel beneath the forward part of the fuselage. The most popular methods of launching are by aero tow with a light airplane or from a winch on the ground. In a typical aero tow, the aircraft fly at about 60 miles per hour (100 km per hour) until an altitude of about 2,000 feet (610 metres) has been reached. While on tow, the sailplane pilot keeps directly behind and slightly above the towplane in order to avoid turbulence created by the propeller. When the planned altitude has been reached, or earlier if good lift is encountered, the pilot releases the towline by pulling a knob.
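
To get a sense of what release altitude buys, assume a glide ratio of 40:1, a typical figure for a modern sailplane (the ratio is an assumption for illustration; it does not appear in the text above). From a 2,000-foot release, the still-air gliding range is roughly 2,000 ft × 40 = 80,000 ft, about 15 miles (24 km), before any rising air is found.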

The basic method of soaring, called thermaling, is to find and use rising currents of warm air, such as those above a sunlit field of ripened grain, to lift the glider. Thermals can rise very rapidly, which allows the sailplane, if deftly piloted, to attain substantial increases in altitude. Slope soaring occurs when moving air is forced up by a ridge. By following the ridge, the sailplane can glide for great distances. In wave soaring, the glider flies along vertical waves of wind that form on the lee side of mountain ranges (the side protected from fiercer winds). Riding such waves allows extreme altitude to be gained rapidly. To facilitate all such maneuvers as well as navigation, gliders can be equipped with familiar airplane instruments such as an altimeter, an airspeed indicator, a turn-and-bank indicator, a compass, and GPS (Global Positioning System) equipment. The most important instrument is the variometer, which shows when the glider is moving up or down even when that movement is too minute to be noticed by the pilot.
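
Since the paragraph above leans on the variometer, here is a minimal sketch of the quantity it reports: the rate of climb or sink. Real variometers sense pressure changes (often with total-energy compensation); this toy version simply differences altitude samples, and all numbers are invented for illustration:

# Vertical speed from 1 Hz altitude samples; a real instrument uses a
# pressure sensor, but the quantity reported is the same.
altitudes = [1200.0, 1200.4, 1201.1, 1202.0, 1202.6]   # metres, one per second
for t in range(1, len(altitudes)):
    vertical_speed = altitudes[t] - altitudes[t - 1]   # m/s at 1 Hz sampling
    print(f"t={t} s  vario reads {vertical_speed:+.1f} m/s")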

National and international records for gliding include categories for straight distance; out-and-return (a course in which a pilot begins at a designated place, travels a distance, and then returns to the designated place); triangle distance (a course that begins at a designated place, after which there are two turning places before the return); speed over triangular courses; gain of height; and absolute altitude. World championship competitions began in 1937 and since 1950 have been held every other year. The competition occupies about two weeks, and the tasks usually consist of elapsed-time races over out-and-return or triangular courses. The overall champion is determined by point total. Apart from competition, many pilots soar purely for recreation.

Details

Gliding is a recreational activity and competitive air sport in which pilots fly unpowered aircraft known as gliders or sailplanes using naturally occurring currents of rising air in the atmosphere to remain airborne. The word soaring is also used for the sport.

Gliding as a sport began in the 1920s. Initially the objective was to increase the duration of flights but soon pilots attempted cross-country flights away from the place of launch. Improvements in aerodynamics and in the understanding of weather phenomena have allowed greater distances at higher average speeds. Long distances are now flown using any of the main sources of rising air: ridge lift, thermals and lee waves. When conditions are favourable, experienced pilots can now fly hundreds of kilometres before returning to their home airfields; occasionally flights of more than 1,000 kilometres (621 mi) are achieved.

Some competitive pilots fly in races around pre-defined courses. These gliding competitions test pilots' abilities to make best use of local weather conditions as well as their flying skills. Local and national competitions are organized in many countries, and there are biennial World Gliding Championships. Techniques to maximize a glider's speed around the day's task in a competition have been developed, including the optimum speed to fly, navigation using GPS (Global Positioning System) and the carrying of water ballast. If the weather deteriorates, pilots are sometimes unable to complete a cross-country flight. Consequently, they may need to land elsewhere, perhaps in a field, but motorglider pilots can avoid this by starting an engine.
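
The "optimum speed to fly" mentioned above is the classic speed-to-fly (MacCready) idea: the stronger the climb expected in the next thermal, the faster it pays to cruise between thermals. A minimal numeric sketch, assuming an invented quadratic glide polar (the coefficients are illustrative, not data for any real glider):

def sink_rate(v):
    """Sink rate (m/s) at airspeed v (m/s) for an assumed quadratic polar."""
    return 0.0025 * v * v - 0.14 * v + 2.5

def avg_speed(v, climb):
    """Average cross-country speed: cruise at v, then regain the height
    lost in a thermal of strength 'climb' (m/s)."""
    return v * climb / (climb + sink_rate(v))

for climb in (1.0, 2.0, 4.0):            # expected thermal strength, m/s
    best = max(range(25, 70), key=lambda v: avg_speed(v, climb))
    print(f"expected climb {climb} m/s -> cruise at about {best} m/s")

As expected, the optimum cruise speed rises with the expected climb rate.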

Powered aircraft and winches are the two most common means of launching gliders. These and other launch methods require assistance and facilities such as airfields, tugs, and winches, which are usually provided by gliding clubs; the clubs also train new pilots and maintain high safety standards. Although in most countries the safety standards for pilots and aircraft are the responsibility of governmental bodies, the clubs, and sometimes national gliding associations, often have delegated authority.

History

The development of heavier-than-air flight in the half century between Sir George Cayley's coachman in 1853 and the Wright brothers in 1903 mainly involved gliders (see History of aviation). However, the sport of gliding only emerged after the First World War, as a result of the Treaty of Versailles, which imposed severe restrictions on the manufacture and use of single-seat powered aircraft in Germany's Weimar Republic. Thus, in the 1920s and 1930s, while aviators and aircraft makers in the rest of the world were working to improve the performance of powered aircraft, the Germans were designing, developing and flying ever more efficient gliders and discovering ways of using the natural forces in the atmosphere to make them fly farther and faster. With the active support of the German government, there were 50,000 glider pilots by 1937. The first German gliding competition was held at the Wasserkuppe in 1920, organized by Oskar Ursinus. The best flight lasted two minutes and set a world distance record of 2 kilometres (1.2 mi). Within ten years, it had become an international event in which the achieved durations and distances had increased greatly. In 1931, Günther Grönhoff flew 272 kilometres (169 mi) on the front of a storm from Munich to Kadaň (Kaaden in German) in western Czechoslovakia, farther than had been thought possible.

In the 1930s, gliding spread to many other countries. In the 1936 Summer Olympics in Berlin, gliding was a demonstration sport, and it was scheduled to be a full Olympic sport in the 1940 Games. A glider, the Olympia, was developed in Germany for the event, but World War II intervened. By 1939 the major gliding records were held by Russians, including a distance record of 748 kilometres (465 mi). During the war, the sport of gliding in Europe was largely suspended, though several German fighter aces in the conflict, including Erich Hartmann, began their flight training in gliders.

Gliding did not return to the Olympics after the war for two reasons: a shortage of gliders, and the failure to agree on a single model of competition glider. (Some in the community feared doing so would hinder development of new designs.) The re-introduction of air sports such as gliding to the Olympics has occasionally been proposed by the world governing body, the Fédération Aéronautique Internationale (FAI), but has been rejected on the grounds of lack of public interest.

In many countries during the 1950s, a large number of trained pilots wanted to continue flying. Many were also aeronautical engineers who could design, build, and maintain gliders. They started both clubs and manufacturers, many of which still exist. This stimulated the development of both gliding and gliders; the membership of the Soaring Society of America, for example, increased from 1,000 to 16,000 by 1980. The increased numbers of pilots, greater knowledge, and improving technology helped set new records: the pre-war altitude record was doubled by 1950, and the first 1,000-kilometre (620 mi) flight was achieved in 1964. New materials such as glass fiber and carbon fiber, advances in wing shapes and airfoils, electronic instruments, the Global Positioning System, and improved weather forecasting have since allowed many pilots to make flights that were once extraordinary. Today over 550 pilots have made flights over 1,000 kilometres (620 mi). Although there is no Olympic competition, there are the World Gliding Championships. The first event was held at Samedan in 1948, and it has been held every two years since. There are now six classes open to both sexes, plus three classes for women and two junior classes. The latest worldwide statistics, for 2011, indicate that Germany, the sport's birthplace, is still a center of the gliding world: it accounted for 27 percent of the world's glider pilots, and the three major glider manufacturers are still based there. However, the meteorological conditions that allow soaring are common, and the sport has been taken up in many countries. At the last count, there were over 111,000 active civilian glider pilots and 32,920 gliders, plus an unknown number of military cadets and aircraft. Clubs actively seek new members by giving trial flights, which are also a useful source of revenue for the clubs.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1858 2023-08-05 01:32:18

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,351

Re: Miscellany

1861) Steam Turbine

Gist

The steam turbine is a heat engine that extracts thermal energy from steam and converts it into rotary motion.

The modern version of the steam turbine was invented by Charles Parsons in 1884; its design is guided by the principles of thermodynamic efficiency, notably the expansion of the steam in multiple stages.

Summary

Steam turbines create electricity in four key steps. First, a fuel is combusted to release heat. The heat is used to convert water or other liquids to high-pressure steam in a boiler. The steam is then piped into the steam turbine, where it spins the turbine blades, thus spinning the generator and creating electricity. The steam expands as it drives the blades, lowering the steam pressure and temperature. The lower-energy steam exiting the turbine is cooled using either water from a lake, ocean, or river, or evaporative cooling in a cooling tower. This returns the feedwater to a liquid state so it can be pumped back to the boiler and the cycle can be repeated.
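
As a rough back-of-the-envelope companion to this description, the Carnot limit bounds the efficiency of any heat engine operating between the boiler and condenser temperatures. A minimal sketch in Python; the specific temperatures are illustrative assumptions, not figures from the source:

# Upper bound on efficiency for a heat engine between two temperatures.
T_steam = 565.0 + 273.15     # K, assumed turbine-inlet steam temperature
T_cooling = 30.0 + 273.15    # K, assumed condenser (cooling-water) temperature
carnot_limit = 1.0 - T_cooling / T_steam
print(f"Carnot efficiency limit: {carnot_limit:.0%}")   # roughly 64%
# Real plants achieve well below this bound once boiler, turbine,
# and pumping losses are included.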

Flue gases resulting from the fuel combustion are directed up an exhaust stack, where the products of combustion are emitted into the air. Depending on the type of fuel, various emissions-control devices may be attached to the exhaust stack to capture pollutants before they are emitted. Several fuel sources are used to operate a steam turbine, including coal, nuclear fuel, natural gas, biofuel, and oil.

Details

A steam turbine is a machine that extracts thermal energy from pressurized steam and uses it to do mechanical work on a rotating output shaft. Its modern manifestation was invented by Charles Parsons in 1884. Fabrication of a modern steam turbine involves advanced metalwork to form high-grade steel alloys into precision parts using technologies that first became available in the 20th century; continued advances in the durability and efficiency of steam turbines remain central to the energy economics of the 21st century.

The steam turbine is a form of heat engine that derives much of its improvement in thermodynamic efficiency from the use of multiple stages in the expansion of the steam, which results in a closer approach to the ideal reversible expansion process.
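
Why multiple stages help can be shown with a toy calculation. The sketch below treats the working fluid as an ideal gas with assumed properties (real steam analysis uses steam tables, so this is only qualitative): when each stage has the same isentropic efficiency, its losses reheat the flow entering the next stage, so the total work, and hence the overall efficiency, creeps upward as the expansion is divided into more stages:

gamma, cp = 1.3, 1900.0                  # rough values for superheated steam
T_in, PR, eta_stage = 800.0, 50.0, 0.85  # inlet temperature (K), overall
                                         # pressure ratio, stage efficiency

def expansion_work(n_stages):
    """Specific work (J/kg) when the expansion is split into equal stages."""
    r = PR ** (1.0 / n_stages)           # pressure ratio per stage
    T, work = T_in, 0.0
    for _ in range(n_stages):
        dT_isentropic = T * (1.0 - r ** (-(gamma - 1.0) / gamma))
        dT_actual = eta_stage * dT_isentropic   # stage losses reheat the flow
        work += cp * dT_actual
        T -= dT_actual                   # hotter inlet for the next stage
    return work

for n in (1, 4, 16):
    print(f"{n:>2} stages: {expansion_work(n) / 1000.0:.0f} kJ/kg")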

Because the turbine generates rotary motion, it can be coupled to a generator to harness its motion into electricity. Such turbogenerators are the core of thermal power stations which can be fueled by fossil fuels, nuclear fuels, geothermal, or solar energy. About 85% of all electricity generation in the United States in the year 2014 was by use of steam turbines.

Technical challenges include rotor imbalance, vibration, bearing wear, and uneven expansion (various forms of thermal shock). In large installations, even the sturdiest turbine will shake itself apart if operated out of trim.

History

The first device that may be classified as a reaction steam turbine was little more than a toy, the classic aeolipile, described in the 1st century by Hero of Alexandria in Roman Egypt. In 1551, Taqi al-Din in Ottoman Egypt described a steam turbine with the practical application of rotating a spit. Steam turbines were also described by the Italian Giovanni Branca (1629) and by John Wilkins in England (1648). The devices described by Taqi al-Din and Wilkins are today known as steam jacks. In 1672 an impulse-turbine-driven car was designed by Ferdinand Verbiest. A more modern version of this car was produced some time in the late 18th century by an unknown German mechanic. In 1775 at Soho James Watt designed a reaction turbine that was put to work there. In 1807 Polikarp Zalesov designed and constructed an impulse turbine, using it for fire pump operation. In 1827 the Frenchmen Real and Pichon patented and constructed a compound impulse turbine.

The modern steam turbine was invented in 1884 by Charles Parsons, whose first model was connected to a dynamo that generated 7.5 kilowatts (10.1 hp) of electricity. The invention of Parsons' steam turbine made cheap and plentiful electricity possible and revolutionized marine transport and naval warfare. Parsons' design was a reaction type. His patent was licensed, and the turbine was scaled up shortly afterwards by an American, George Westinghouse. The Parsons turbine also turned out to be easy to scale up. Parsons had the satisfaction of seeing his invention adopted for all major world power stations, and the size of generators had increased from his first 7.5-kilowatt (10.1 hp) set to units of 50,000-kilowatt (67,000 hp) capacity. Within Parsons' lifetime, the generating capacity of a unit was scaled up by about 10,000 times, and the total output from turbogenerators constructed by his firm C. A. Parsons and Company and by their licensees, for land purposes alone, had exceeded thirty million horsepower.

Other variations of turbines have been developed that work effectively with steam. The de Laval turbine (invented by Gustaf de Laval) accelerated the steam to full speed before running it against a turbine blade. De Laval's impulse turbine is simpler and less expensive and does not need to be pressure-proof. It can operate with any pressure of steam, but is considerably less efficient. Auguste Rateau developed a pressure compounded impulse turbine using the de Laval principle as early as 1896, obtained a US patent in 1903, and applied the turbine to a French torpedo boat in 1904. He taught at the École des mines de Saint-Étienne for a decade until 1897, and later founded a successful company that was incorporated into the Alstom firm after his death. One of the founders of the modern theory of steam and gas turbines was Aurel Stodola, a Slovak physicist and engineer and professor at the Swiss Polytechnical Institute (now ETH) in Zurich. His work Die Dampfturbinen und ihre Aussichten als Wärmekraftmaschinen (English: The Steam Turbine and its prospective use as a Heat Engine) was published in Berlin in 1903. A further book Dampf und Gas-Turbinen (English: Steam and Gas Turbines) was published in 1922.

The Brown-Curtis turbine, an impulse type, which had been originally developed and patented by the U.S. company International Curtis Marine Turbine Company, was developed in the 1900s in conjunction with John Brown & Company. It was used in John Brown-engined merchant ships and warships, including liners and Royal Navy warships.

Current usage

Since the 1980s, steam turbines have been replaced by gas turbines on fast ships and by diesel engines on other ships; exceptions are nuclear-powered ships and submarines and LNG (liquefied natural gas) carriers. Some auxiliary ships continue to use steam propulsion.

In the U.S. Navy, the conventionally powered steam turbine is still in use on all but one of the Wasp-class amphibious assault ships. The Royal Navy decommissioned its last conventional steam-powered surface warship class, the Fearless-class landing platform dock, in 2002, with the Italian Navy following in 2006 by decommissioning its last conventional steam-powered surface warships, the Audace-class destroyers. In 2013, the French Navy ended its steam era with the decommissioning of its last Tourville-class frigate. Amongst the other blue-water navies, the Russian Navy currently operates steam-powered Kuznetsov-class aircraft carriers and Sovremenny-class destroyers. The Indian Navy currently operates INS Vikramaditya, a modified Kiev-class aircraft carrier; it also operates three Brahmaputra-class frigates commissioned in the early 2000s. The Chinese Navy currently operates steam-powered Kuznetsov-class aircraft carriers, Sovremenny-class destroyers along with Luda-class destroyers and the lone Type 051B destroyer. Most other naval forces have either retired or re-engined their steam-powered warships. As of 2020, the Mexican Navy operates four steam-powered former U.S. Knox-class frigates. The Egyptian Navy and the Republic of China Navy respectively operate two and six former U.S. Knox-class frigates. The Ecuadorian Navy currently operates two steam-powered Condell-class frigates (modified Leander-class frigates).

Today, propulsion steam turbine cycle efficiencies have yet to break 50%, while diesel engines routinely exceed 50%, especially in marine applications. Diesel power plants also have lower operating costs, since fewer operators are required. Thus, conventional steam power is used in very few new ships. An exception is LNG carriers, which often find it more economical to use boil-off gas with a steam turbine than to re-liquefy it.

Nuclear-powered ships and submarines use a nuclear reactor to create steam for turbines. Nuclear power is often chosen where diesel power would be impractical (as in submarine applications) or the logistics of refuelling pose significant problems (for example, icebreakers). It has been estimated that the reactor fuel for the Royal Navy's Vanguard-class submarines is sufficient to last 40 circumnavigations of the globe – potentially sufficient for the vessel's entire service life. Nuclear propulsion has only been applied to a very few commercial vessels due to the expense of maintenance and the regulatory controls required on nuclear systems and fuel cycles.

Additional Information:

History of steam turbine technology

Early precursors

The first device that can be classified as a reaction steam turbine is the aeolipile proposed by Hero of Alexandria, during the 1st century CE. In this device, steam was supplied through a hollow rotating shaft to a hollow rotating sphere. It then emerged through two opposing curved tubes, just as water issues from a rotating lawn sprinkler. The device was little more than a toy, since no useful work was produced.

Another steam-driven machine, described in 1629 in Italy, was designed in such a way that a jet of steam impinged on blades extending from a wheel and caused it to rotate by the impulse principle. Starting with a 1784 patent by James Watt, the developer of the steam engine, a number of reaction and impulse turbines were proposed, all adaptations of similar devices that operated with water. None were successful except for the units built by William Avery of the United States after 1837. In one such Avery turbine two hollow arms, about 75 centimetres long, were attached at right angles to a hollow shaft through which steam was supplied. Nozzles at the outer end of the arms allowed the steam to escape in a tangential direction, thus producing the reaction to turn the wheel. About 50 of these turbines were built for sawmills, cotton gins, and woodworking shops, and at least one was tried on a locomotive. While the efficiencies matched those of contemporary steam engines, high noise levels, difficult speed regulation, and frequent need for repairs led to their abandonment.

Development of modern steam turbines

No further developments occurred until the end of the 19th century when various inventors laid the groundwork for the modern steam turbine. In 1884 Sir Charles Algernon Parsons, a British engineer, recognized the advantage of employing a large number of stages in series, allowing extraction of the thermal energy in the steam in small steps. Parsons also developed the reaction-stage principle according to which a nearly equal pressure drop and energy release takes place in both the stationary and moving blade passages. In addition, he subsequently built the first practical large marine steam turbines. During the 1880s Carl G.P. de Laval of Sweden constructed small reaction turbines that turned at about 40,000 revolutions per minute to drive cream separators. Their high speed, however, made them unsuitable for other commercial applications. De Laval then turned his attention to single-stage impulse turbines that used convergent-divergent nozzles. From 1889 to 1897 de Laval built many turbines with capacities from about 15 to several hundred horsepower. His 15-horsepower turbines were the first employed for marine propulsion (1892). C.E.A. Rateau of France first developed multistage impulse turbines during the 1890s. At about the same time, Charles G. Curtis of the United States developed the velocity-compounded impulse stage.

By 1900 the largest steam turbine-generator unit produced 1,200 kilowatts, and 10 years later the capacity of such machines had increased to more than 30,000 kilowatts. This far exceeded the output of even the largest steam engines, making steam turbines the principal prime movers in central power stations after the first decade of the 20th century. Following the successful installation of a series of 68,000-horsepower turbines in the transatlantic passenger liners Lusitania and Mauretania, launched in 1906, steam turbines also gained preeminence in large-scale marine applications, first with vessels burning fossil fuels and then with those using nuclear power. Steam generator pressures increased from about 1,000 kilopascals gauge in 1895 to 1,380 kilopascals gauge by 1919 and then to 9,300 kilopascals gauge by 1940. Steam temperatures climbed from about 180 °C (saturated steam) to 315 °C (superheated steam) and eventually to 510 °C over the same time period, while heat rates decreased from about 38,000 to below 10,000 Btus per kilowatt-hour.
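
Heat rate and thermal efficiency are two views of the same quantity: one kilowatt-hour is equivalent to about 3,412 Btu, so efficiency is 3,412 divided by the heat rate. A quick check of the figures just quoted:

BTU_PER_KWH = 3412.0
for heat_rate in (38000.0, 10000.0):     # Btu per kWh, from the text above
    print(f"{heat_rate:>8.0f} Btu/kWh -> about {BTU_PER_KWH / heat_rate:.0%} efficient")
# 38,000 Btu/kWh corresponds to roughly 9% efficiency; 10,000 to roughly 34%.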

Recent developments and trends

By 1940, single turbine units with a power capacity of 100,000 kilowatts were common. Ever-larger turbines (with higher efficiencies) have been constructed during the last half of the century, largely because of the steadily rising cost of fossil fuels. This required a substantial increase in steam generator pressures and temperatures. Some units operating with supercritical steam at pressures as high as 34,500 kilopascals gauge and at temperatures of up to 650 °C were built before 1970. Reheat turbines that operate at lower pressures (between 17,100 and 24,100 kilopascals gauge) and temperatures (540–565 °C) are now commonly installed to assure high reliability. Steam turbines in nuclear power plants, which are still being constructed in a number of countries outside the United States, typically operate at about 7,580 kilopascals gauge and at temperatures of up to 295 °C to accommodate the limitations of reactors. Turbines that exceed one million kilowatts of output require exceptionally large, highly alloyed steel blades at the low-pressure end.

Slightly more efficient units with a power capacity of more than 1.3 million kilowatts may eventually be built, but no major improvements are expected within the next few decades, primarily because of the temperature limitations of the materials employed in steam generators, piping, and high-pressure turbine components and because of the need for very high reliability.

Although the use of large steam turbines is tied to electric power production and marine propulsion, smaller units may be used for cogeneration when steam is required for other purposes, such as for chemical processing, powering other machines (e.g., compressors of large central air-conditioning systems serving many buildings), or driving large pumps and fans in power stations or refineries. However, because a complete steam plant, including steam generators, pumps, and accessories, is needed, the steam turbine is not an attractive power device for small installations.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1859 2023-08-05 21:25:46

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,351

Re: Miscellany

1862) Landslide

Gist

Landslides are caused by disturbances in the natural stability of a slope. They can accompany heavy rains or follow droughts, earthquakes, or volcanic eruptions. Mudslides develop when water rapidly accumulates in the ground and results in a surge of water-saturated rock, earth, and debris. Mudslides usually start on steep slopes and can be activated by natural disasters. Areas where wildfires or human modification of the land have destroyed vegetation on slopes are particularly vulnerable to landslides during and after heavy rains.

Summary

Landslides, also known as landslips, are several forms of mass wasting that may include a wide range of ground movements, such as rockfalls, shallow or deep-seated slope failures, mudflows, and debris flows. Landslides occur in a variety of environments, characterized by either steep or gentle slope gradients, from mountain ranges to coastal cliffs or even underwater, in which case they are called submarine landslides.

Gravity is the primary driving force for a landslide to occur, but there are other factors affecting slope stability that produce specific conditions that make a slope prone to failure. In many cases, the landslide is triggered by a specific event (such as a heavy rainfall, an earthquake, a slope cut to build a road, and many others), although this is not always identifiable.

Landslides are frequently made worse by human development (such as urban sprawl) and resource exploitation (such as mining and deforestation). Land degradation frequently leads to less stabilization of soil by vegetation. Additionally, global warming caused by climate change and other human impacts on the environment can increase the frequency of natural events (such as extreme weather) that trigger landslides. Landslide mitigation describes the policies and practices for reducing landslides' impacts on humans, thereby reducing the risk of natural disaster.

Causes

Landslides occur when the slope (or a portion of it) undergoes some processes that change its condition from stable to unstable. This is essentially due to a decrease in the shear strength of the slope material, an increase in the shear stress borne by the material, or a combination of the two. A change in the stability of a slope can be caused by a number of factors, acting together or alone. Natural causes of landslides include:

* increase in water content (loss of suction) or saturation by rain water infiltration, snow melting, or glaciers melting;
* rising of groundwater or increase of pore water pressure (e.g. due to aquifer recharge in rainy seasons, or by rain water infiltration);
* increase of hydrostatic pressure in cracks and fractures;
* loss or absence of vertical vegetative structure, soil nutrients, and soil structure (e.g. after a wildfire);
* erosion of the top of a slope by rivers or sea waves;
* physical and chemical weathering (e.g. by repeated freezing and thawing, heating and cooling, salt leaking in the groundwater or mineral dissolution);
* ground shaking caused by earthquakes, which can destabilize the slope directly (e.g., by inducing soil liquefaction) or weaken the material and cause cracks that will eventually produce a landslide;
* volcanic eruptions;
* changes in pore fluid composition;
* changes in temperature (seasonal or induced by climate change).

Landslides are aggravated by human activities, such as:

* deforestation, cultivation and construction;
* vibrations from machinery or traffic;
* blasting and mining;
* earthwork (e.g. by altering the shape of a slope, or imposing new loads);
* in shallow soils, the removal of deep-rooted vegetation that binds colluvium to bedrock;
* agricultural or forestry activities (logging), and urbanization, which change the amount of water infiltrating the soil;
* temporal variation in land use and land cover (LULC): it includes the human abandonment of farming areas, e.g. due to the economic and social transformations which occurred in Europe after the Second World War. Land degradation and extreme rainfall can increase the frequency of erosion and landslide phenomena.

Details

Landslide, also called landslip, is the movement downslope of a mass of rock, debris, earth, or soil (soil being a mixture of earth and debris). Landslides occur when gravitational and other types of shear stresses within a slope exceed the shear strength (resistance to shearing) of the materials that form the slope.

Shear stresses can be built up within a slope by a number of processes. These include oversteepening of the base of the slope, such as by natural erosion or excavation, and loading of the slope, such as by an inflow of water, a rise in the groundwater table, or the accumulation of debris on the slope’s surface. Short-term stresses, such as those imposed by earthquakes and rainstorms, can likewise contribute to the activation of landslides. Landslides can also be activated by processes that weaken the shear strength of a slope’s material. Shear strength is dependent mainly on two factors: frictional strength, which is the resistance to movement between the slope material’s interacting constituent particles, and cohesive strength, which is the bonding between the particles. Coarse particles such as sand grains have high frictional strength but low cohesive strength, whereas the opposite is true for clays, which are composed of fine particles. Another factor that affects the shear strength of a slope-forming material is the spatial disposition of its constituent particles, referred to as the sediment fabric. Some materials with a loose, open sediment fabric will weaken if they are mechanically disturbed or flooded with water. An increase in water content, resulting from either natural causes or human activity, typically weakens sandy materials through the reduction of interparticle friction and weakens clays through the dissolution of interparticle cements, the hydration of clay minerals, and the elimination of interparticle (capillary) tension.
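
The balance described above is often summarized as a factor of safety, the ratio of shear strength to shear stress, with strength modeled by the Mohr-Coulomb relation (cohesion plus friction). Below is a minimal sketch using the classic infinite-slope model; every value is an illustrative assumption, not data for a real slope:

import math

cohesion = 5_000.0                      # Pa, cohesive strength
friction_angle = math.radians(30.0)     # interparticle friction
unit_weight = 18_000.0                  # N/m^3, weight of the soil
depth = 3.0                             # m, depth of the potential slip plane
slope = math.radians(35.0)              # slope angle

def factor_of_safety(pore_pressure):
    """Strength / stress for an infinite slope; FS < 1 means failure."""
    normal = unit_weight * depth * math.cos(slope) ** 2
    shear = unit_weight * depth * math.sin(slope) * math.cos(slope)
    strength = cohesion + (normal - pore_pressure) * math.tan(friction_angle)
    return strength / shear

for u in (0.0, 15_000.0, 30_000.0):     # Pa, rising pore water pressure
    print(f"pore pressure {u:>7.0f} Pa -> FS = {factor_of_safety(u):.2f}")

The falling factor of safety with rising pore pressure mirrors the rainfall-driven weakening described above.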

Types of landslides

Landslides are generally classified by type of movement (slides, flows, spreads, topples, or falls) and type of material (rock, debris, or earth). Sometimes more than one type of movement occurs within a single landslide, and, because the temporal and spatial relationships of these movements are often complex, their analysis often requires detailed interpretation of both landforms and geological sections, or cores.

Rockslides and other types of slides involve the displacement of material along one or more discrete shearing surfaces. The sliding can extend downward and outward along a broadly planar surface (a translational slide), or it can be rotational along a concave-upward set of shear surfaces (a slump). A translational slide typically takes place along structural features, such as a bedding plane or the interface between resistant bedrock and weaker overlying material. If the overlying material moves as a single, little-deformed mass, it is called a block slide. A translational slide is sometimes called a mud slide when it occurs along gently sloping, discrete shear planes in fine-grained rocks (such as fissured clays) and the displaced mass is fluidized by an increase in pore water pressure. In a rotational slide the axis of rotation is roughly parallel to the contours of the slope. The movement near the head of the slide is largely downward, exposing a steep head scarp, and movement within the displaced mass takes place along internal slip planes, each tending to tilt backward. Over time, upslope ponding of water by such back-tilted blocks can enlarge the area of instability, so that a stable condition is reached only when the slope is reduced to a very low gradient.

A type of landslide in which the distribution of particle velocities resembles that of a viscous fluid is called a flow. The most important fluidizing agent is water, but trapped air is sometimes involved. Contact between the flowing mass and the underlying material can be distinct, or the contact can be one of diffuse shear. The difference between slides and flows is gradational, with variations in fluid content, mobility, and type of movement, and composite slide movement and flow movement are common.

A spread is the complex lateral movement of relatively coherent earth materials resting on a weaker substrate that is subject to liquefaction or plastic flow. Coherent blocks of material subside into the weaker substrate, and the slow downslope movement frequently extends long distances as a result of the retrogressive extension from the zone of origin, such as an eroding riverbank or coastline. Spreads occur as the result of liquefaction caused by water saturation or earthquake shock in such substrates as loess, a weakly cemented wind-lain silt.

Rotation of a mass of rock, debris, or earth outward from a steep slope face is called toppling. This type of movement can subsequently cause the mass to fall or slide.

Earth materials can become detached from a steep slope without significant shearing, fall freely under gravity, and land on a surface from which they bounce and fall farther. Falls of large volume can trap enough air to facilitate the very rapid flow of rock or debris, forming rock avalanches and debris avalanches, respectively. Entrapped snow and ice may also help mobilize such flows, but the unqualified term avalanche is generally used to refer only to an avalanche of snow. Triggered by earthquake shock or torrential rain in mountainous relief with steep gradients, a huge volume of avalanching rock or debris (of up to millions of metric tons) can reach a velocity of more than 50 metres (160 feet) per second and leave a long trail of destruction.

Landslide mitigation and prevention

Landslides pose a recurrent hazard to human life and livelihood in most parts of the world, especially in some regions that have experienced rapid population and economic growth. Hazards are mitigated mainly through precautionary means—for instance, by restricting or even removing populations from areas with a history of landslides, by restricting certain types of land use where slope stability is in question, and by installing early warning systems based on the monitoring of ground conditions such as strain in rocks and soils, slope displacement, and groundwater levels. There are also various direct methods of preventing landslides; these include modifying slope geometry, using chemical agents to reinforce slope material, installing structures such as piles and retaining walls, grouting rock joints and fissures, diverting debris pathways, and rerouting surface and underwater drainage. Such direct methods are constrained by cost, landslide magnitude and frequency, and the size of human settlements at risk.

Additional Information

A landslide is the movement of rock, earth, or debris down a sloped section of land. Landslides are caused by rain, earthquakes, volcanoes, or other factors that make the slope unstable. Geologists, scientists who study the physical formations of the Earth, sometimes describe landslides as one type of mass wasting. A mass wasting is any downward movement in which the Earth's surface is worn away. Other types of mass wasting include rockfalls and the flow of shore deposits called alluvium. Near populated areas, landslides present major hazards to people and property. Landslides cause an estimated 25 to 50 deaths and $3.5 billion in damage each year in the United States.

What Causes Landslides?

Landslides have three major causes: geology, morphology, and human activity.

Geology refers to characteristics of the material itself. The earth or rock might be weak or fractured, or different layers may have different strengths and stiffness.

Morphology refers to the structure of the land. For example, slopes that lose their vegetation to fire or drought are more vulnerable to landslides. Vegetation holds soil in place, and without the root systems of trees, bushes, and other plants, the land is more likely to slide away.

A classic morphological cause of landslides is erosion, or weakening of earth due to water. In April 1983, the town of Thistle, Utah, experienced a devastating landslide brought on by heavy rains and rapidly melting snow. A mass of earth eventually totaling 305 meters wide, 61 meters thick, and 1.6 kilometers long (1,000 feet wide, 200 feet thick, and one mile long) slid across the nearby Spanish Fork River, damming it and severing railroad and highway lines. The landslide was the costliest in U.S. history, causing over $400 million in damage and destroying Thistle, which remains an evacuated ghost town today.

Human activity, such as agriculture and construction, can increase the risk of a landslide. Irrigation, deforestation, excavation, and water leakage are some of the common activities that can help destabilize, or weaken, a slope.

Types of Landslides

There are many ways to describe a landslide. The nature of a landslide's movement and the type of material involved are two of the most common.

Landslide Movement

There are several ways of describing how a landslide moves. These include falls, topples, translational slides, lateral spreads, and flows. In falls and topples, heavy blocks of material fall after separating from a very steep slope or cliff. Boulders tumbling down a slope would be a fall or topple. In translational slides, surface material is separated from the more stable underlying layer of a slope. An earthquake may shake the loose top layer of soil free from the harder earth beneath in this type of landslide. A lateral spread or flow is the movement of material sideways, or laterally. This happens when a powerful force, such as an earthquake, makes the ground move quickly, like a liquid.

Landslide Material

A landslide can involve rock, soil, vegetation, water, or some combination of all these. A landslide caused by a volcano can also contain hot volcanic ash and lava from the eruption. A landslide high in the mountains may have snow and snowmelt. Volcanic landslides, also called lahars, are among the most devastating type of landslides. The largest landslide in recorded history took place after the 1980 eruption of Mount St. Helens in the U.S. state of Washington. The resulting flow of ash, rock, soil, vegetation and water, with a volume of about 2.9 cubic kilometers (0.7 cubic miles), covered an area of 62 square kilometers (24 square miles).

Other Factors

Another factor that might be important for describing landslides is the speed of the movement. Some landslides move at many meters per second, while others creep along at a centimeter or two a year. The amount of water, ice, or air in the earth should also be considered. Some landslides include toxic gases from deep in the Earth expelled by volcanoes. Some landslides, called mudslides, contain a high amount of water and move very quickly. Complex landslides consist of a combination of different material or movement types.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1860 2023-08-06 14:08:16

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,351

Re: Miscellany

1863) Post office

Summary

A post office is a building where the mail for a local area is sent and received (abbreviation: P.O.). The Post Office can also refer to the government department in charge of collecting and delivering mail.

Details

A post office is a public facility and a retailer that provides mail services, such as accepting letters and parcels, providing post office boxes, and selling postage stamps, packaging, and stationery. Post offices may offer additional services, which vary by country. These include providing and accepting government forms (such as passport applications), and processing government services and fees (such as road tax, postal savings, or bank fees). The chief administrator of a post office is called a postmaster.

Before the advent of postal codes and the post office, postal systems would route items to a specific post office for receipt or delivery. During the 19th century in the United States, this often led to smaller communities being renamed after their post offices, particularly after the Post Office Department began to require that post office names not be duplicated within a state.

Name

The term "post-office" has been in use since the 1650s, shortly after the legalisation of private mail services in England in 1635. In early modern England, post riders—mounted couriers—were placed, or "posted", every few hours along post roads at posting houses (also known as post houses) between major cities, or "post towns". These stables or inns permitted important correspondence to travel without delay. In early America, post offices were also known as stations. This term, as well as the term "post house", fell from use as horse and coach services were replaced by railways, aircraft, and automobiles.

The term "post office" usually refers to government postal facilities providing customer service. "General Post Office" is sometimes used for the national headquarters of a postal service, even if the building does not provide customer service. A postal facility that is used exclusively for processing mail is instead known as a sorting office or delivery office, which may have a large central area known as a sorting or postal hall. Integrated facilities combining mail processing with railway stations or airports are known as mail exchanges.

In India, post offices are found in towns, in cities, and in almost every village that has a panchayat (a "village council"), throughout the geographical area of the country. India's postal system changed its name to India Post after the advent of private courier companies in the 1990s. It is run by the Indian government's Department of Posts. India Post accepts and delivers inland letters, postcards, parcels, postal stamps, and money orders (money transfers). Some post offices in India offer speed post (fast delivery) and payment or bank savings services. It is also not uncommon for Indian post offices to sell insurance policies or accept payment for electricity, landline telephone, or gas bills. Until about 2000, post offices would also collect fees for radio licenses, recruitment for government jobs, and the operation of public call office (PCO) telephone booths. Postmen deliver letters, money orders, and parcels to places within the assigned area of a particular post office. Each Indian post office is identified by a unique six-digit code called the Postal Index Number, or PIN. Post offices, which come under the Department of Posts, Ministry of Communications, Government of India, have a history of some 150 years.
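
The text states only that a PIN is a unique six-digit code, so a format check is about all that can be sketched. A minimal example; the additional rule that a PIN does not begin with 0 reflects common usage and is an assumption here, not something taken from the text:

import re

def looks_like_pin(code):
    """True if the string matches the six-digit PIN format (assumed: no leading 0)."""
    return re.fullmatch(r"[1-9][0-9]{5}", code) is not None

for sample in ("110001", "60001", "6000100", "A10001"):
    print(sample, "->", looks_like_pin(sample))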

Private courier and delivery services often have offices as well, although these are usually not called "post offices", except in the case of Germany, which has fully privatised its national postal system.

The abbreviation PO is used, together with GPO for General Post Office and LPO for Licensed Post Office.

History

There is evidence of corps of royal couriers disseminating the decrees of Egyptian pharaohs as early as 2400 BCE, and it is possible that the service greatly precedes that date. Similarly, there may be ancient organised systems of post houses providing mounted courier service, although sources vary as to precisely who initiated the practice.

In the Persian Empire, a Chapar Khaneh system existed along the Royal Road. Similar postal systems were established in India and China by the Mauryan and Han dynasties in the 2nd century BCE.

The Roman historian Suetonius credited Augustus with regularising the Roman transportation and courier network, the Cursus Publicus. Local officials were obliged to provide couriers who would be responsible for their message's entire course. Locally maintained post houses (Latin: stationes) and privately owned rest houses (Latin: mansiones) were obliged or honored to care for couriers along their way. The Roman emperor Diocletian later established two parallel systems: one providing fresh horses or mules for urgent correspondence and the other providing sturdy oxen for bulk shipments. The Byzantine historian Procopius, though not unbiased, records that the Cursus Publicus system remained largely intact until it was dismantled in the Byzantine Empire by the emperor Justinian in the 6th century.

The princely house of Thurn and Taxis initiated regular mail service from Brussels in the 16th century, directing the Imperial Post of the Holy Roman Empire. The British Postal Museum claims that the oldest functioning post office in the world is on High Street in Sanquhar, Scotland. This post office has functioned continuously since 1712, an era in which horses and stagecoaches were used to carry mail.

Rural parts of Canada in the 19th century used the way office system. Villagers could leave their letters at the way office, from which they were taken to the nearest post office, and could also pick up their mail at the way office.

In parts of Europe, special postal censorship offices existed to intercept and censor mail. In France, such offices were known as cabinets noirs.

Unstaffed postal facilities

In many jurisdictions, mailboxes and post office boxes have long been in widespread use for drop-off and pickup (respectively) of mail and small packages outside post offices or when offices are closed. Germany's national postal operator Deutsche Post introduced the Packstation for package delivery, including both drop-off and pickup, in 2001. In the 2000s, the United States Postal Service began to install Automated Postal Centers (APCs) in many locations, both in post offices, for when they are closed or busy, and in retail locations. APCs can print postage and accept mail and small packages.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1861 2023-08-07 14:13:14

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,351

Re: Miscellany

1864) Playground

Gist

Playground is an area designed for children to play in outside, especially at a school or in a park.

Summary

A playground is a controlled setting for children’s play. This institutionalized environment consists of a planned, enclosed space with play equipment that encourages children’s motor development.

For most of history children merely shared public spaces such as marketplaces with adults; there was no conception of a special area dedicated to children. The “invention” of the playground in general is not attributed to any one person but rather is viewed as a development and combination of the ideas of many thinkers who wrote on education and play—including John Amos Comenius, John Locke, Johann Bernhard Basedow, Jean-Jacques Rousseau, Johann Heinrich Pestalozzi, Friedrich Froebel, John Dewey, Maria Montessori, and Arnold Gesell. The history of playgrounds in the West can be divided into three periods: traditional, contemporary, and adventure playground design.

The American children’s play movement began in Boston in 1885 with the development of children’s sand gardens modeled on German designs. German-born Marie Zakrzewska was one of the earliest female physicians in the United States. While in Berlin, Zakrzewska had noted the simple piles of sand bordered by wooden planks that provided a safe, enclosed space for several children to engage in sand play. Based on her recommendation, the first outdoor children’s playground was opened in Boston in 1885. Soon swings and other play apparatus were added in order to provide for older children. By 1912 recreation buildings for outdoor activities were added, thus fostering the establishment of the recreation profession around the years 1918–22. During this period community resources were also harnessed to support enclosed neighbourhood playgrounds to provide safety from street gangs. This is the era of traditional playground design.

The greatest difference between traditional and contemporary playgrounds is the kind of play equipment installed. The earlier, traditional play spaces were called “outdoor gymnasiums” and had exercise apparatus, running tracks, and space for games. In 1928 the National Recreation Association proposed guidelines recommending equipment for children that was appropriate to their age levels—for instance, the association recommended that a preschool playground should have a sandbox, chair swings, a small slide, and a piece of simple low climbing equipment; the elementary school playground should have a horizontal ladder, a balance beam, a giant stride (a large wheel placed at the top of a pole with chains hanging down to permit children to grasp on and swing as the wheel turns), swings, a slide, a horizontal bar, seesaws, and other low climbing equipment. Water play was soon added. These early recommendations have been fairly consistent even to the present day, although materials have changed and safety concerns have increased. Thus, wooden swing seats have been replaced over time with flexible materials such as cloth or plastic, and standard dimensions for apparatus such as sliding boards have narrowed so that only one child at a time can slide down the board. The surface material of the playground has also evolved over time in order to permit safer falls.

The playground era of the 1960s was influenced by the theories of child psychologists such as Erik Erikson and Jean Piaget as well as by modern landscape architects. In this phase of contemporary playground design the psychology of children at play and their stages of development gained in consideration; equipment such as activity panels that were geared toward teaching children a concept through play began to be fostered in playground environments. Denmark is considered the leader in playground development and was the first country to pass laws to ensure that playgrounds were built in public housing projects. This concept has spread throughout much of Europe.

A more recent trend in playground design is the “adventure” playground. Inspired by Scandinavian and British playground reformers, this design attempts to allow for a child-oriented perspective in play; children are, for instance, encouraged in these playgrounds to build their own appropriate play structures. This shift in philosophy can also be seen in the name change of the International Playground Association to the International Association for the Child’s Right to Play.

The organization Playlink (formerly the London Adventure Playground Association) described an adventure playground as an area of between one-third to two and a half acres (one-tenth to one hectare) equipped with materials for building “houses,” cooking in the open, digging holes, gardening, and playing with sand, water, and clay under the supervision of at least two full-time playground leaders who participate in the activities that the children organize themselves. Ideally such playgrounds would also contain an indoor facility with supplies for dramatic play and creative activities such as paints and modeling clay. At some adventure playgrounds in Copenhagen children are encouraged in such activities as constructing huts for rabbits, feeding chickens, and cooking meals over outdoor bonfires.

Details

A playground, playpark, or play area is a place designed to provide an environment for children that facilitates play, typically outdoors. While a playground is usually designed for children, some are designed for other age groups, or people with disabilities. A playground might exclude children below (or above) a certain age.

Modern playgrounds often have recreational equipment such as the seesaw, merry-go-round, swingset, slide, jungle gym, chin-up bars, sandbox, spring rider, trapeze rings, playhouses, and mazes, many of which help children develop physical coordination, strength, and flexibility, as well as providing recreation and enjoyment and supporting social and emotional development. Common in modern playgrounds are play structures that link many different pieces of equipment.

Playgrounds often also have facilities for playing informal games of adult sports, such as a baseball diamond, a skating arena, a basketball court, or a tether ball.

Public playground equipment is installed in the play areas of parks, schools, childcare facilities, institutions, multiple-family dwellings, restaurants, resorts, recreational developments, and other areas of public use.

A type of playground called a playscape is designed to provide a safe environment for play in a natural setting.

Types

Playgrounds can be:

* Built with the collaborative support of corporate and community resources to achieve an immediate and visible improvement to the neighborhood.
* Public and free of charge, typically found at elementary schools.
* Connected to a business and open to customers only, such as those at McDonald's, IKEA, and Chuck E. Cheese's.
* Commercial enterprises charging an entrance fee, such as Discovery Zone.
* Run by non-profit organizations for edutainment, such as children's museums and science centers; some charge admission, and some are free.

Inclusive playgrounds

Universally designed playgrounds are created to be accessible to all children. There are three primary components to a higher level of inclusive play:

* physical accessibility;
* age and developmental appropriateness; and
* sensory-stimulating activity.

Some children with disabilities or developmental differences do not interact with playgrounds in the same way as typical children. A playground designed without considering these children's needs may not be accessible or interesting to them.

Most efforts at inclusive playgrounds have been aimed at accommodating wheelchair users. For example, rubber paths and ramps replace sand pits and steps, and some features are placed at ground level. Efforts to accommodate children on the autism spectrum, who may find playgrounds overstimulating or who may have difficulty interacting with other children, have been less common.

Natural playgrounds

"Natural playgrounds" are play environments that blend natural materials, features, and indigenous vegetation with creative landforms to create purposely complex interplays of natural, environmental objects in ways that challenge and fascinate children and teach them about the wonders and intricacies of the natural world while they play within it.

Play components may include earth shapes (sculptures), environmental art, indigenous vegetation (trees, shrubs, grasses, flowers, lichens, mosses), boulders or other rock structures, dirt and sand, natural fences (stone, willow, wooden), textured pathways, and natural water features.

Themed and educational playgrounds

Some playgrounds have specific purposes. A traffic park teaches children how to navigate streets safely. An adventure playground encourages open-ended play, sometimes involving potentially dangerous objects such as fire or hand tools. An obstacle course or ropes course is designed to focus participants' attention on accomplishing a pre-determined challenging physical task. A trampoline park provides trampolines.

Playgrounds for adults

China and some countries in Europe have playgrounds designed for adults. These are outdoor spaces that feature fitness equipment designed for use primarily by adults, such as chin-up bars.

Playgrounds for older adults are popular in China, where seniors are the primary users of public playgrounds. These playgrounds are usually in a smaller, screened area, which may reduce the feeling of being watched or judged by others. They often have adult-sized equipment that helps seniors stretch, strengthen muscles, and improve their sense of balance. Similar playgrounds for adults have been built in other countries; Berlin's Preußenpark, for example, is designed for people aged 70 or older.

School playgrounds

School playgrounds are uniquely positioned to meet children where they are, helping them become smarter, healthier, and stronger through play. Studies show that outdoor physical activity not only benefits a child’s health but also improves classroom performance, supports cognitive development, and hones social skills.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1862 2023-08-08 14:17:00

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,351

Re: Miscellany

1865) Postal code

Details

A postal code, also called a zip code, ZIP code, or postcode, is a numeric or alphanumeric code, usually five or six characters, that identifies a geographic location and address. Postal codes are managed by a specific entity within each country; in the United States, for example, postal codes are controlled by the United States Postal Service (USPS). These operators are usually overseen by the Universal Postal Union (UPU), an international body that maintains global postal services. More than 200 different addressing formats exist in the world, helping to facilitate and simplify transit and shipping.

A forerunner to postal codes was introduced in London in 1857. The city was divided into 10 districts, each assigned a compass point, such as NW, N, NE, and E, as well as a corresponding post office. Those addressing mail were asked to add the receiving district's compass point to the end of the address. This localized system saved mail from having to be carried to central London to be sorted and, in some cases, immediately returned to where it had originated. In 1917 the districts were further refined into subdistricts, each given a serial number after the district initials. To send mail to Fulham, for example, individuals added a 6 to SW, so it became SW6. Modern postcodes were introduced in the mid-20th century, but the subdistricts remained and continue to be in use in the 21st century.

In 1932 Ukraine became the first country to implement a system of modern-day postal codes, using an indexing system. All cities, villages, towns, and railways were assigned a number-letter-number series, which were collected in a reference book. Patrons could find the postal code for their recipient's area and add it to their mail, thereby expediting sorting and delivery. The system was short-lived, however, abruptly concluding in 1939 just before the start of World War II. Germany developed a system of postal codes during the war, in 1941.

After the war, booming economies and direct-mail marketing led to a huge increase in the amount of mail being processed in the United States. Mail volume more than doubled from 30 billion pieces of mail each year during the 1930s to 80 billion pieces per year in the 1960s. Processing so many items by hand was labour-intensive, creating a need for a mechanized sorting system. Mechanization, however, meant that addresses needed to be standardized.

Robert Moon, a postal inspector in Philadelphia, first suggested using a system of codes to refer to general regions of the country in 1944. His idea was accepted and expanded upon by a committee at the USPS, which included additional digits in order to pinpoint specific geographic locales. Zone Improvement Plan (ZIP) codes were implemented in 1963. The five-digit system used the first number to point to a national area, the next two numbers to pinpoint a population centre or large city, and the final two numbers to designate a specific delivery area. ZIP codes did not follow state or city boundaries but instead were tied to delivery routes, mail sorting facilities, and post office hubs. In 1983 the ZIP code system was extended to include an additional four digits indicating a more precise location. These four digits were used most often for large businesses, office complexes, or particularly dense areas with a high volume of mail.

As e-commerce and global shipping routes continue to expand in the 21st century, postal codes have become increasingly important. They ensure that packages and mail are correctly routed. Over the decades, however, postal codes have acquired new purposes. In some countries, including the United States and Canada, customers making a purchase with a credit or debit card are often required to enter their postal or zip code to validate their identity. This address verification system is intended to thwart fraudulent activity by proving that the person using the card knows the permanent mailing address of the cardholder.

Postal codes also offer demographic data that is frequently used for marketing, insurance, and research purposes. In the United States, for example, this includes health research, which has indicated that a person’s zip code is one of the biggest predictors of life expectancy. Comparing postal codes with other statistics like race, income, age, educational attainment, and population density can be used to target direct-to-consumer marketing. Some data brokers and advertising firms use this socioeconomic and demographic data to attempt to predict consumer behaviour and spending patterns, raising concerns about data privacy.

ZIP Code

ZIP Code, in full Zone Improvement Plan Code, is a system of zone coding (postal coding) introduced by the U.S. Post Office Department (now the U.S. Postal Service) in 1963 to facilitate the sorting and delivery of mail. After an extensive publicity campaign, the department succeeded in eliciting widespread public acceptance of the ZIP code. Users of the mails were requested to include in all addresses a five-number code, of which the first three digits identified the section of the country to which the item was destined and the last two digits the specific post office or zone of the addressee. The primary purpose of the zone coding system was to fully exploit the capabilities of electronic reading and sorting equipment.

The U.S. Postal Service introduced a nine-digit ZIP Code in 1983. The new code, composed of the original five digits plus a hyphen and four additional numbers, was designed to speed up automated sorting operations. The first two of the four extra digits specify a particular sector, such as a group of streets or cluster of large buildings. The last two digits of the expanded code represent an even smaller area called a segment, which may consist of one side of a city block, a single floor in a large building, or a group of post office boxes.
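
As a quick illustration of the digit groupings described above, here is a minimal Python sketch that splits a ZIP or ZIP+4 code into its documented parts. It is illustrative only, not USPS software: the sample value 12345-6789 is hypothetical, and real address validation is far more involved.

```python
# Minimal sketch: split a five-digit ZIP or ZIP+4 code into the components
# described above. Illustrative only -- the sample value is hypothetical and
# real USPS validation is far more involved.

def decompose_zip(code: str) -> dict:
    """Split 'DDDDD' or 'DDDDD-DDDD' into its documented parts."""
    zip5, _, plus4 = code.partition("-")
    if len(zip5) != 5 or not zip5.isdigit():
        raise ValueError(f"not a ZIP code: {code!r}")
    parts = {
        "national_area": zip5[0],        # broad region of the country
        "population_centre": zip5[1:3],  # large city or sectional center
        "delivery_area": zip5[3:],       # specific post office or zone
    }
    if plus4:
        if len(plus4) != 4 or not plus4.isdigit():
            raise ValueError(f"bad +4 extension: {code!r}")
        parts["sector"] = plus4[:2]      # e.g. a group of streets
        parts["segment"] = plus4[2:]     # e.g. one side of a city block
    return parts

print(decompose_zip("12345-6789"))
```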

Additional Information

A postal code (also known locally in various English-speaking countries throughout the world as a postcode, post code, PIN or ZIP Code) is a series of letters or digits or both, sometimes including spaces or punctuation, included in a postal address for the purpose of sorting mail.

As of August 2021, the Universal Postal Union lists 160 countries which require the use of a postal code.

Although postal codes are usually assigned to geographical areas, special codes are sometimes assigned to individual addresses or to institutions that receive large volumes of mail, such as government agencies and large commercial companies. One example is the French CEDEX system.

Terms

There are a number of synonyms for postal code; some are country-specific:

* CAP: The standard term in Italy; CAP is an acronym for codice di avviamento postale (postal expedition code).
* CEP: The standard term in Brazil; CEP is an acronym for código de endereçamento postal (postal addressing code).
* Eircode: The standard term in Ireland.
* NPA in French-speaking Switzerland (numéro postal d'acheminement) and Italian-speaking Switzerland (numero postale di avviamento).
* PIN: The standard term in India; PIN is an acronym for Postal Index Number. Sometimes called a PIN code.
* PLZ: The standard term in Germany, Austria, German-speaking Switzerland and Liechtenstein; PLZ is an abbreviation of Postleitzahl (postal routing number).
* Postal code: The general term used in Canada.
* Postcode: This solid compound is popular in many English-speaking countries and is also the standard term in the Netherlands.
* Postal index: This term is used in Eastern European countries such as Ukraine, Moldova, and Belarus.
* PSČ: The standard term in Slovakia and the Czech Republic; PSČ is an acronym for Poštové smerovacie číslo (in Slovak) or Poštovní směrovací číslo (in Czech), both meaning postal routing number.
* ZIP Code: The standard term in the United States and the Philippines; ZIP is an acronym for Zone Improvement Plan.

History

The development of postal codes reflects the increasing complexity of postal delivery as populations grew and the built environment became more complex. This happened first in large cities. Postal codes began with postal district numbers (or postal zone numbers) within large cities. London was first subdivided into 10 districts in 1857 (EC (East Central), WC (West Central), N, NE, E, SE, S, SW, W, and NW); four districts were created to cover Liverpool in 1864, and Manchester/Salford was split into eight numbered districts in 1867/68. By World War I, such postal district or zone numbers also existed in various large European cities. They existed in the United States at least as early as the 1920s, possibly implemented only at the local post office level (for example, instances of "Boston 9, Mass" are attested in 1920), although they were evidently not used throughout all major US cities (implemented USPOD-wide) until World War II.

By 1930 or earlier the idea of extending postal district or zone numbering plans beyond large cities to cover even small towns and rural locales was in the air. These developed into postal codes as they are defined today. The name of US postal codes, "ZIP codes", reflects this evolutionary growth from a zone plan to a zone improvement plan, "ZIP". Modern postal codes were first introduced in the Ukrainian Soviet Socialist Republic in December 1932, but the system was abandoned in 1939. The next country to introduce postal codes was Germany in 1941, followed by Singapore in 1950, Argentina in 1958, the United States in 1963 and Switzerland in 1964. The United Kingdom began introducing its current system in Norwich in 1959, but they were not used nationwide until 1974.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1863 2023-08-09 14:45:59

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,351

Re: Miscellany

1866) Courier

Gist

A courier is a person whose job is to carry messages, packages, etc., from one person or place to another.

Details

A courier is a person or organization that delivers a message, package or letter from one place or person to another place or person. Typically, a courier provides their courier service on a commercial contract basis; however, some couriers are government or state agency employees (for example: a diplomatic courier).

Duties and functions

Couriers are distinguished from ordinary mail services by features such as speed, security, tracking, signature, specialization and individualization of express services, and swift delivery times, which are optional for most everyday mail services. As a premium service, couriers are usually more expensive than standard mail services, and their use is normally limited to packages where one or more of these features are considered important enough to warrant the cost.

Courier services operate on all scales, from within specific towns or cities, to regional, national and global services. Large courier companies include DHL, DTDC, FedEx, EMS International, TNT, UPS, India Post, J&T Express and Aramex. These offer services worldwide, typically via a hub and spoke model.

Courier services using courier software can provide electronic proof of delivery and electronic tracking details.

Before the industrial era

In ancient history, messages were hand-delivered using a variety of methods, including runners, homing pigeons and riders on horseback. Before the introduction of mechanized courier services, foot messengers physically ran miles to their destinations. Xenophon attributed the first use of couriers to the Persian prince Cyrus the Younger.

Famously, the Ancient Greek courier Pheidippides is said to have run 26 miles from Marathon to Athens to bring the news of the Greek victory over the Persians in 490 BCE. The long-distance race known as a marathon is named for this run.

Hezekiah

Judah's king Hezekiah (reigned c. 715–686 BCE) made use of couriers, who brought letters throughout the land of Judah and Israel (cf. 2 Chron 30 ESV).

Anabasii

Starting at the time of Augustus, the ancient Greeks and Romans made use of a class of horse and chariot-mounted couriers called anabasii to quickly bring messages and commands from long distances. The word anabasii comes from the Greek (anábasis, "ascent, mounting"). They were contemporary with the Greek hemeredromi, who carried their messages by foot.

In Roman Britain, Rufinus made use of anabasii, as documented in Saint Jerome's memoirs (adv. Ruffinum, l. 3. c. 1.): "Idcircone Cereales et Anabasii tui per diversas provincias cucurrerunt, ut laudes meas legerent?" ("Is it on that account that your Cereales and Anabasii circulated through many provinces, so that they might read my praises?")

Middle Ages

In the Middle Ages, royal courts maintained their own messengers who were paid little more than common labourers.

Types

In cities, there are often bicycle couriers or motorcycle couriers, but consignments requiring delivery over greater distances may travel by truck, railroad, or aircraft.

Many companies which operate under a just-in-time or "JIT" inventory method often use on-board couriers (OBCs). On-board couriers are individuals who can travel at a moment's notice anywhere in the world, usually via commercial airlines. While this type of service is the second costliest—general aviation charters are far more expensive—companies analyze the cost of service to engage an on-board courier versus the "cost" the company will realize should the product not arrive by a specified time (an assembly line stopping, untimely court filing, lost sales from product or components missing a delivery deadline, loss of life from a delayed organ transplant).

By country:

Australia

The courier business in Australia is a very competitive industry and is mainly concentrated in the high-population areas in and around the capital cities. With such a vast landmass to cover, courier companies tend to transport either by air or by the main transport routes and national highways. The only large company that provides a country-wide service is Australia Post. Australia Post operates quite differently from typical government departments, as it is a government-owned enterprise focused on service delivery in a competitive market. It operates in a fully competitive market against other delivery services such as Fastway, UPS, and Transdirect.

China

International courier services in China include TNT, EMS International, DHL, FedEx and UPS. These companies provide nominal worldwide service for both inbound and outbound shipments, connecting China to countries such as the US, Australia, the United Kingdom, and New Zealand. Of the international courier services, the Dutch company TNT is considered to have the most capable local fluency and efficacy for third- and fourth-tier cities. EMS International is a unit of China Post, and as such is not available for shipments originating outside China.

Domestic courier services include SF Express, YTO Express, E-EMS and many other operators, some of very small scale. E-EMS is the product of a cooperative arrangement between China Post and Alipay, the online payment unit of Alibaba Group. It is only available for the delivery of online purchases made using Alipay.

Within the Municipality of Beijing, TongCheng KuaiDi, also a unit of China Post, provides intra-city service using cargo bicycles.

India

International courier services in India include DHL, FedEx, Blue Dart Express, Ekart, DTDC, VRL Courier Services, Delhivery, TNT, Amazon.com, OCS and Gati Ltd. Apart from these, several local couriers also operate across India. Almost all of these couriers can be tracked online. India Post, an undertaking of the Indian government, is the largest courier service, with around 155 thousand branches (139 thousand (90%) in rural areas and 16 thousand (10%) in urban areas). All couriers use the PIN code (postal index number) introduced by India Post to locate the delivery address. Additionally, the contact numbers of the recipient and sender are often added to the courier for ease of locating the address.

Bangladesh

The history of courier services in Bangladesh dates back to the late 1970s when private companies started offering delivery and parcel services. These companies played a crucial role in facilitating the movement of documents and goods within the country. Over the years, the courier industry in Bangladesh has grown significantly, adapting to changes in technology and expanding its services to include international shipments. Today, various local and international courier companies operate in Bangladesh, contributing to the country's logistics and trade networks.

International courier services in Bangladesh include DHL, FedEx, United Express, Royale International Bangladesh, DSL Worldwide Courier Service, Aramex, Pos Laju, J&T Express, and Amazon.com. Apart from these, several local couriers also operate across Bangladesh, such as Sundarban Courier Service, Pathao Courier, e-dak Courier, RedX, SA Paribahan, Sheba Delivery, Janani Express Parcel Service, Delivery Tiger, eCourier, Karatoa Courier Service, and Sonar Courier. Almost all of these couriers can be tracked online.

Malaysia

International courier services in Malaysia include DHL, FedEx, Pgeon, Skynet Express, ABX Express, GDex, Pos Laju, J&T Express, and Amazon.com. Apart from these, several local couriers also operate across Malaysia. Almost all of these couriers can be tracked online.

Ireland

The main courier services available in Ireland as alternatives to the national An Post system are Parcel Direct Ireland, SnapParcel (no longer in operation), DHL, UPS, TNT, DPD and FedEx.

Singapore

There are several international courier companies in Singapore including TNT, DHL and FedEx. Despite being a small country, the demand for courier services is high. Many local courier companies have sprung up to meet this demand. Most courier companies in Singapore focus on local deliveries instead of international freight.

United Kingdom

The genesis of the UK same-day courier market stems from the London taxi companies, which soon expanded into dedicated motorcycle despatch riders, with the taxi companies setting up separate arms to cover the courier work. During the late 1970s, small provincial and regional companies were popping up throughout the country. Today, there are many large companies offering next-day courier services, including Speedy Freight, DX Group, UKMail and UK divisions of worldwide couriers such as FedEx, DHL, Hermes Group, Global Express Courier, UPS, TNT and CitySprint.

There are many 'specialist' couriers usually for the transportation of items such as freight/pallets, sensitive documents and liquids.

The 'man & van'/freelance courier business model is highly popular in the United Kingdom, with many thousands of independent couriers and localised companies offering next-day and same-day services. This popularity is likely due to the low barriers to entry (little more than a vehicle) and the enormous number of items sent within the UK every day. Indeed, from 1988 to 2016, UK couriers were considered universally self-employed, though the number of salaried couriers employed by firms has grown substantially since then. Since the dawn of the electronic age, however, the way in which businesses use couriers has changed dramatically. Before email and the ability to create PDFs, documents represented a significant proportion of the business; over the past five years, documentation revenues have decreased by 50 percent. Customers are also demanding more from their courier partners. Therefore, more organisations prefer the services of larger firms able to provide more flexibility and higher levels of service, which has led to another tier of courier company: regional couriers. This is usually a local company which has expanded to more than one office to cover an area.

Some UK couriers offer next-day services to other European countries. FedEx offers next-day air delivery to many EU countries. Cheaper 'by-road' options are also available, varying from two days' delivery time (to destinations such as France) to up to a week (former USSR countries).

Large couriers often require an account to be held (which can include daily scheduled collections). Senders are therefore primarily in the commercial/industrial sector rather than the general public; some couriers, such as DHL, do however allow public sending, at a higher cost than for regular senders.

In recent years, the increased popularity of Black Friday in the UK has placed some firms under operational stress.

The process of booking a courier has changed: it is no longer a lengthy task of making numerous calls to different courier companies to request a quote. Booking a courier is now predominantly carried out online. The courier industry has been quick to adapt to an ever-changing digital landscape, meeting the needs of mobile and desktop consumers as well as e-commerce and online retailers, and offering end users instant online payments, parcel tracking, delivery notifications, and the convenience of door-to-door collection and delivery to almost any destination in the world.

United States

The courier industry has long held an important place in United States commerce and has been involved in pivotal moments in the nation's history, such as the westward migration and the gold rush. Wells Fargo was founded in 1852 and rapidly became the preeminent package delivery company. The company specialised in shipping gold, packages and newspapers throughout the West, making a Wells Fargo office in every camp and settlement a necessity for commerce and connections to home. Shortly afterward, the Pony Express was established to move packages more quickly than the traditional stagecoach. It illustrated the demand for timely deliveries across the nation, a demand that continued to evolve with the railroads, automobiles and interstate highways, and that has culminated in today's courier industry.

The courier industry in the United States is a $59 billion industry, with 90% of the business shared by DHL, FedEx, UPS and USA Couriers. Regional and local courier and delivery services, by contrast, are highly diversified and tend to be smaller operations; the top 50 firms account for just a third of the sector's revenues. USPS, the government postal service, is the only carrier that can legally deliver to mailboxes.

In a 2019 quarterly earnings call, the CEO of FedEx named Amazon as a direct competitor, cementing the e-commerce company's growth into the field of logistics.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1864 2023-08-10 02:11:53

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,351

Re: Miscellany

1867) Fireplace

Gist

A fireplace is a space in the wall of a room for a fire to burn in, or the decorated part that surrounds this space.

Details

A fireplace or hearth is a structure made of brick, stone or metal designed to contain a fire. Fireplaces are used for the relaxing ambiance they create and for heating a room. Modern fireplaces vary in heat efficiency, depending on the design.

Historically, they were used for heating a dwelling, cooking, and heating water for laundry and domestic uses. A fire is contained in a firebox or fire pit; a chimney or other flue allows exhaust gas to escape. A fireplace may have the following: a foundation, a hearth, a firebox, a mantel, a chimney crane (used in kitchen and laundry fireplaces), a grate, a lintel, a lintel bar, an overmantel, a damper, a smoke chamber, a throat, a flue, and a chimney filter or afterburner.

On the exterior, there is often a corbelled brick crown, in which the projecting courses of brick act as a drip course to keep rainwater from running down the exterior walls. A cap, hood, or shroud serves to keep rainwater out of the exterior of the chimney; rain in the chimney is a much greater problem in chimneys lined with impervious flue tiles or metal liners than with the traditional masonry chimney, which soaks up all but the most violent rain. Some chimneys have a spark arrestor incorporated into the crown or cap.

Organizations like the United States Environmental Protection Agency (EPA) and the Washington State Department of Ecology warn that, according to various studies, fireplaces can pose health risks. The EPA writes "Smoke may smell good, but it's not good for you."

Types of fireplaces

* Manufactured fireplaces are made with sheet-metal or glass fireboxes.
* Electric fireplaces can be built-in replacements for wood or gas fireplaces, or retrofits with log inserts or electric fireboxes.
* Types include wall-mounted electric fireplaces, electric fireplace stoves, electric mantel fireplaces, and fixed or free-standing electric fireplaces.

Masonry and prefabricated fireplaces can be fueled by:

* Wood fuel or firewood and other biomass
* Charcoal (carbonized biomass)
* Coal of various grades
* Coke (carbonized coal)
* Smokeless fuel of several types
* Flammable gases: propane, butane, and methane (natural gas is mostly methane, liquefied petroleum gas mostly propane)
* Ethanol (a liquid alcohol, also sold in gels)

Ventless fireplaces (duct-free/room-venting fireplaces) are fueled by gel, liquid propane, bottled gas, or natural gas. In the United States, some states and local counties have laws restricting these types of fireplaces, and they must be properly sized for the area to be heated. There are also air quality concerns due to the amount of moisture they release into the room air, so an oxygen sensor and a carbon monoxide detector are safety essentials.

Direct vent fireplaces are fueled by either liquid propane or natural gas. They are completely sealed from the area that is heated, and vent all exhaust gases to the exterior of the structure.

Chimney and flue types:

* Masonry (brick or stone fireplaces and chimneys) with or without tile-lined flue.
* Reinforced concrete chimneys. Fundamental design flaws bankrupted the US manufacturers and made the design obsolete. These chimneys often show vertical cracks on the exterior.
* Metal-lined flue: Double- or triple-walled metal pipe running up inside a new or existing wood-framed or masonry chase.

Newly constructed flues may feature a chase cover, a cap, and a spark arrestor at the top to keep small animals out and to prevent sparks from being broadcast into the atmosphere. Gas fireplaces require trained gas service technicians to carry out installations.

Accessories

A wide range of accessories are used with fireplaces, varying between countries, regions, and historical periods. For the interior, accessories common in recent Western cultures include grates, fireguards, log boxes, andirons and pellet baskets, all of which cradle fuel and accelerate combustion. A grate (or fire grate) is a frame, usually of iron bars, to retain fuel for a fire. Heavy metal firebacks are sometimes used to capture and re-radiate heat, to protect the back of the fireplace, and as decoration. Fenders are low metal frames set in front of the fireplace to contain embers, soot and ash. For fireplace tending, tools include pokers, bellows, tongs, shovels, brushes and tool stands. Other accessories can include log baskets, companion sets, coal buckets, cabinet accessories and more.

History

Ancient fire pits were sometimes built in the ground, within caves, or in the center of a hut or dwelling. Evidence of prehistoric man-made fires exists on all five inhabited continents. The disadvantage of early indoor fire pits was that they produced toxic and/or irritating smoke inside the dwelling.

Fire pits developed into raised hearths in buildings, but venting smoke depended on open windows or holes in roofs. The medieval great hall typically had a centrally located hearth, where an open fire burned with the smoke rising to the vent in the roof. Louvers were developed during the Middle Ages to allow the roof vents to be covered so rain and snow would not enter.

Also during the Middle Ages, smoke canopies were invented to prevent smoke from spreading through a room and vent it out through a wall or roof. These could be placed against stone walls, instead of taking up the middle of the room, and this allowed smaller rooms to be heated.

Chimneys were invented in northern Europe in the 11th or 12th century and largely fixed the problem of smoke, venting it outside more reliably. They made it possible to give the fireplace a draft and to place fireplaces conveniently in multiple rooms of a building. They did not come into general use immediately, however, as they were expensive to build and maintain.

In 1678, Prince Rupert, nephew of Charles I, raised the grate of the fireplace, improving the airflow and venting system. The 18th century saw two important developments in the history of fireplaces. Benjamin Franklin developed a convection chamber for the fireplace that greatly improved the efficiency of fireplaces and wood-burning stoves; he also improved the airflow by pulling air from a basement and venting it out through a longer area at the top. In the later 18th century, Count Rumford designed a fireplace with a tall, shallow firebox that was better at drawing smoke up and out of the building. The shallow design also greatly increased the amount of heat projected into the room. Rumford's design is the foundation for modern fireplaces.

The Aesthetic movement of the 1870s and 1880s returned to more traditional stone designs and rejected unnecessary ornamentation, relying instead on simple forms. In the 1890s, the Aesthetic movement gave way to the Arts and Crafts movement, where the emphasis was still placed on providing quality stone. Stone fireplaces at this time were a symbol of prosperity, which to some degree is still the notion today.

Evolution of fireplace design

Over time, the purpose of fireplaces has changed from one of necessity to one of visual interest. Early ones were more fire pits than modern fireplaces. They were used for warmth on cold days and nights, as well as for cooking. They also served as a gathering place within the home. These fire pits were usually centered within a room, allowing more people to gather around them.

Many flaws were found in early fireplace designs. With the Industrial Revolution came large-scale housing developments, necessitating a standardization of fireplaces. The most renowned fireplace designers of this time were the Adam Brothers: John Adam, Robert Adam, and James Adam. They perfected a style of fireplace design that was used for generations. It was smaller and more brightly lit, with an emphasis on the quality of the materials used in construction rather than on size.

By the 1800s, most new fireplaces were made up of two parts, the surround and the insert. The surround consisted of the mantelpiece and side supports, usually in wood, marble or granite. The insert was where the fire burned, and was constructed of cast iron often backed with decorative tiles. As well as providing heat, the fireplaces of the Victorian era were thought to add a cosy ambiance to homes. In the US state of Wisconsin, some elementary classrooms would contain decorated fireplaces to ease children's transition from home to school.

Heating efficiency

Some fireplace units incorporate a blower, which transfers more of the fireplace's heat to the air via convection, resulting in a more evenly heated space and a lower heating load. Fireplace efficiency can also be increased with the use of a fireback, a piece of metal that sits behind the fire and reflects heat back into the room. Firebacks are traditionally made from cast iron, but are also made from stainless steel.

Most older fireplaces have a relatively low efficiency rating. Standard modern wood-burning masonry fireplaces, though, have an efficiency rating of at least 80% (the legal minimum requirement, for example, in Salzburg, Austria). To improve efficiency, fireplaces can also be modified by inserting special heavy fireboxes designed to burn much cleaner, which can reach efficiencies as high as 80% in heating the air. These modified fireplaces are often equipped with a large fire window, enabling an efficient heating process in two phases. During the first phase, the initial heat is provided through a large glass window while the fire is burning; during this time the structure, built of refractory bricks, absorbs the heat. This heat is then evenly radiated for many hours during the second phase. Masonry fireplaces without a glass fire window only provide heat radiated from their surface. Depending on the outside temperature, one or two daily firings are sufficient to ensure a constant room temperature.

Environmental effects

Burning any hydrocarbon fuel releases carbon dioxide and water vapor. Other emissions, such as nitrogen oxides and sulfur oxides, can be harmful to the environment.

Additional Information

Fireplace is a housing for an open fire inside a dwelling, used for heating and often for cooking. The first fireplaces developed when medieval houses and castles were equipped with chimneys to carry away smoke; experience soon showed that the rectangular form was superior, that a certain depth was most favourable, that a grate provided better draft, and that splayed sides increased reflection of heat. Early fireplaces were made of stone; later, brick became more widely used. A medieval discovery revived in modern times is that a thick masonry wall opposite the fireplace is capable of absorbing and re-radiating heat.

From early times fireplace accessories and furnishings have been objects of decoration. Since at least the 15th century a fireback, a slab of cast iron, protected the back wall of the fireplace from the intense heat; these were usually decorated. After the 19th century the fireback gave way to firebrick in fireplace construction.

Andirons, a pair of horizontal iron bars on short legs placed parallel to the sides of the fireplace to support burning logs, were used from the Iron Age. A vertical guard bar at the front, placed to prevent logs from rolling into the room, is often ornately decorated. (Rear guard bars were in use until the 14th century, when the central open hearth as a mode of heating went out of general use.) The grate, a sort of basket of cast-iron grillwork, came into use in the 11th century and was especially useful for holding coal.

Fire tools used to maintain a fire have changed little since the 15th century: tongs are used to handle burning fuel, a fire fork or log fork to maneuver fuel into position, and a long-handled brush to keep the hearth swept. The poker, designed to break burning coal into smaller pieces, did not become common until the 18th century. Coal scuttles appeared early in the 18th century and were later adapted into usually ornamental wood boxes or racks for fire logs. The fire screen was developed early in the 19th century to prevent sparks from flying into the room, and it also has been ornamented and shaped to serve decorative as well as functional purposes.

The fireplace itself was not subject to significant improvement—once the open central hearth was abandoned—until 1624, when Louis Savot, an architect employed in construction in the Louvre, Paris, developed a fireplace in which air was drawn through passages under the hearth and behind the fire grate and discharged into the room through a grill in the mantel. This approach was adapted in the 20th century into a prefabricated double-walled steel fireplace liner with the hollow walls serving as air passages. Some such systems use electric fans to force circulation. In the 1970s, when sharply rising fuel costs had stimulated energy conservation measures, sealed systems were devised in which the air to support combustion is drawn in from outside the house or from an unheated portion; a glass cover, fitted closely over the front of the fireplace, is sealed once fuel has been placed and ignited.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1865 2023-08-11 00:08:15

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,351

Re: Miscellany

1868) Farmer

Gist

A farmer is a person who farms; a person who operates a farm or cultivates land.

Summary

A farmer is a person who runs and works on a farm. Some farmers raise a variety of food crops, while others keep dairy cows and sell their milk.

Farmers work in some aspect of agriculture, growing vegetables, grains, or fruit; or raising animals for milk, eggs, or meat. A small farmer manages a relatively small piece of land, often growing different crops and keeping hens for their eggs, for example. Some farmers own their farms, while others rent the land on which they work. In the 14th century, a farmer was "one who collects taxes," from the Old French fermier, "lease holder."

Details

A farmer is a person engaged in agriculture, raising living organisms for food or raw materials. The term usually applies to people who do some combination of raising field crops, orchards, vineyards, poultry, or other livestock. A farmer might own the farm land or might work as a laborer on land owned by others. In most developed economies, a "farmer" is usually a farm owner (landowner), while employees of the farm are known as farm workers (or farmhands). However, in other older definitions a farmer was a person who promotes or improves the growth of plants, land or crops or raises animals (as livestock or fish) by labor and attention.

Over half a billion farmers are smallholders, most of whom are in developing countries, and who economically support almost two billion people. Globally, women constitute more than 40% of agricultural employees.

History

Farming dates back as far as the Neolithic, being one of the defining characteristics of that era. By the Bronze Age, around 5000–4000 BCE, the Sumerians had a specialized agricultural labor force and depended heavily on irrigation to grow crops. They relied on three-person teams when harvesting in the spring. Farmers in Ancient Egypt relied on the Nile for the water to irrigate their crops.

Animal husbandry, the practice of rearing animals specifically for farming purposes, has existed for thousands of years. Dogs were domesticated in East Asia about 15,000 years ago. Goats and sheep were domesticated around 8000 BCE in Asia. Swine or pigs were domesticated by 7000 BCE in the Middle East and China. The earliest evidence of horse domestication dates to around 4000 BCE.

Advancements in technology

In the U.S. of the 1930s, one farmer could produce only enough food to feed three other consumers. A modern-day farmer produces enough food to feed well over a hundred people. However, some authors consider this estimate to be flawed, as it does not take into account that farming requires energy and many other resources which have to be provided by additional workers, so that the ratio of people fed to farmers is actually smaller than 100 to 1.

Types

More distinct terms are commonly used to denote farmers who raise specific domesticated animals. For example, those who raise grazing livestock, such as cattle, sheep, goats and horses, are known as ranchers (U.S.), graziers (Australia & UK) or simply stockmen. Sheep, goat and cattle farmers might also be referred to, respectively, as shepherds, goatherds and cowherds. The term dairy farmer is applied to those engaged primarily in milk production, whether from cattle, goats, sheep, or other milk producing animals. A poultry farmer is one who concentrates on raising chickens, turkeys, ducks or geese, for either meat, egg or feather production, or commonly, all three. A person who raises a variety of vegetables for market may be called a truck farmer or market gardener. Dirt farmer is an American colloquial term for a practical farmer, or one who farms his own land.

In developed nations, a farmer (as a profession) is usually defined as someone with an ownership interest in crops or livestock, and who provides land or management in their production. Those who provide only labor are most often called farmhands. Alternatively, growers who manage farmland for an absentee landowner, sharing the harvest (or its profits) are known as sharecroppers or sharefarmers. In the context of agribusiness, a farmer is defined broadly, and thus many individuals not necessarily engaged in full-time farming can nonetheless legally qualify under agricultural policy for various subsidies, incentives, and tax deductions.

Techniques

In the context of developing nations or other pre-industrial cultures, most farmers practice a meager subsistence agriculture—a simple organic-farming system employing crop rotation, seed saving, slash and burn, or other techniques to maximize efficiency while meeting the needs of the household or community. One subsisting in this way may become labelled as a peasant, often associated disparagingly with a "peasant mentality".

In developed nations, however, a person using such techniques on small patches of land might be called a gardener and be considered a hobbyist. Alternatively, one might be driven into such practices by poverty or, ironically—against the background of large-scale agribusiness—might become an organic farmer growing for discerning/faddish consumers in the local food market.

Farming organizations

Farmers are often members of local, regional, or national farmers' unions or agricultural producers' organizations and can exert significant political influence. The Grange movement in the United States was effective in advancing farmers' agendas, especially against railroad and agribusiness interests early in the 20th century. The FNSEA is very politically active in France, especially pertaining to genetically modified food. Agricultural producers, both small and large, are represented globally by the International Federation of Agricultural Producers (IFAP), representing over 600 million farmers through 120 national farmers' unions in 79 countries.

Youth farming organizations

There are many organizations that are targeted at teaching young people how to farm and advancing the knowledge and benefits of sustainable agriculture.

* 4-H was started in 1902 and is a U.S.-based network that has approximately 6.5 million members, ages 5 to 21 years old, and is administered by the National Institute of Food and Agriculture of the United States Department of Agriculture (USDA).

* The National FFA Organization (formerly known as Future Farmers of America) was founded in 1925 and is specifically focused on providing agriculture education for middle and high school students.

* Rural Youth Europe is a non-governmental organization for European youths that creates awareness of rural environmental and agricultural issues. It was started in 1957, and its headquarters is in Helsinki, Finland. The group is active in 17 countries with over 500,000 participants.

Income

Farmed products might be sold either to a market, in a farmers' market, or directly from a farm. In a subsistence economy, farm products might to some extent be either consumed by the farmer's family or pooled by the community.

Occupational hazards

There are several occupational hazards for those in agriculture; farming is a particularly dangerous industry. Farmers can encounter and be stung or bitten by dangerous insects and other arthropods, including scorpions, fire ants, bees, wasps and hornets. Farmers also work around heavy machinery, which can kill or injure them. Farmers can also develop muscle and joint pain from repetitive work.

Etymology

The word 'farmer' originally meant a person collecting taxes from tenants working a field owned by a landlord. The word changed to refer to the person farming the field. Previous names for a farmer were churl and husbandman.

Additional Information:

Introduction

A farmer is someone who grows plants and raises animals for human use. Farmers have to work very hard and long hours in order to be successful. The work of farmers is necessary for human survival.

Farming, or agriculture, has been around for about 10,000 years. The first farmers began by taming animals and growing small crops. Over time, people learned what crops to plant and what animals to raise depending on their environment.

Types of Farms

Today, there are different forms of farming. For instance, in less economically developed countries, many families live on what they can grow on their own small farms. There is little left over to sell. This is called subsistence farming. On the other hand, in more economically developed countries, farming has become a huge business. This is called agribusiness. Large companies own farms that produce vast quantities of food.

Methods of Farming

There have been many advancements in the tools farmers use and in the way farmers grow crops. Farmers use tractors and other machinery to make planting and harvesting easier and faster. Farmers who live in dry areas also use irrigation, or artificial watering. Farmers can also use chemicals to help keep their crops free of pests. These chemicals, or pesticides, create a larger yield for the farmer who uses them. However, the use of these chemicals can harm people and the environment. As a result, some farmers choose to not use pesticides. They are called organic farmers.

How People Become Farmers

Some people become farmers because they come from a family of farmers. Older farmers pass down their knowledge of soil, animals, crops, and weather to young people. Depending on one's existing knowledge of farming, it is not strictly necessary to attend college. However, there are many agriculture schools that are part of universities. At these agricultural schools, students pick a specialization such as horticulture (plant cultivation), soil science, animal sciences, or agribusiness economics.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1866 2023-08-12 00:31:54

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,351

Re: Miscellany

1869) Seat belt

Gist

A seat belt is a strap on a vehicle's seat that holds a person in the seat if there is an accident.

Details

A seat belt, also known as a safety belt or spelled seatbelt, is a vehicle safety device designed to secure the driver or a passenger of a vehicle against harmful movement that may result during a collision or a sudden stop. A seat belt reduces the likelihood of death or serious injury in a traffic collision by reducing the force of secondary impacts with interior strike hazards, by keeping occupants positioned correctly for maximum effectiveness of the airbag (if equipped), and by preventing occupants being ejected from the vehicle in a crash or if the vehicle rolls over.

When in motion, the driver and passengers are traveling at the same speed as the vehicle. If the vehicle suddenly stops or crashes, the occupants continue at the same speed the vehicle was going before it stopped. A seatbelt applies an opposing force to the driver and passengers to prevent them from falling out or making contact with the interior of the car (especially preventing contact with, or going through, the windshield). Seatbelts are considered primary restraint systems (PRSs), because of their vital role in occupant safety.
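
The size of the force involved can be estimated with the work-energy theorem. The short Python sketch below uses assumed, illustrative numbers (a 75 kg occupant, a 50 km/h impact speed, and a 0.5 m stopping distance allowed by belt stretch); none of these figures come from the text above.

```python
# Rough estimate of the average force a seat belt must supply, using the
# work-energy theorem: F_avg * d = (1/2) * m * v**2.
# All numbers are illustrative assumptions, not figures from the text.

m = 75.0        # occupant mass in kg (assumed)
v = 50 / 3.6    # impact speed: 50 km/h converted to m/s
d = 0.5         # stopping distance allowed by belt stretch, in m (assumed)

f_avg = 0.5 * m * v**2 / d
print(f"average restraining force ~ {f_avg / 1000:.1f} kN")  # ~14.5 kN
```

Even at this modest speed, the average force works out to roughly 20 times the occupant's weight, which is why bracing with arms or legs is no substitute for a belt.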

Effectiveness

An analysis conducted in the United States in 1984 compared a variety of seat belt types alone and in combination with air bags. The range of fatality reduction for front seat passengers was broad, from 20% to 55%, as was the range of major injury, from 25% to 60%. More recently, the Centers for Disease Control and Prevention has summarized these data by stating "seat belts reduce serious crash-related injuries and deaths by about half." Most seatbelt malfunctions are a result of there being too much slack in the seatbelt at the time of the accident.

It has been suggested that although seat belt usage reduces the probability of death in any given accident, mandatory seat belt laws have little or no effect on the overall number of traffic fatalities because seat belt usage also disincentivizes safe driving behaviors, thereby increasing the total number of accidents. This idea, known as compensating-behavior theory, is not supported by the evidence.

In rollovers of US passenger cars and SUVs from 1994 to 2014, wearing a seat belt reduced the risk of fatal or incapacitating injuries and increased the probability of escaping uninjured:

* Fatalities (passenger cars): 0.71% of restrained occupants in 1994 and 0.87% in 2014, compared with 7% of unrestrained occupants in 1994 and 13% in 2014.

* Incapacitating injuries: 10% of restrained occupants in both 1994 and 2014, compared with 32% of unrestrained occupants in 1994 and 25% in 2014.

* No injury: 45% of restrained occupants in 1994 and 44% in 2014, compared with 19% of unrestrained occupants in 1994 and 15% in 2014.

Mass transit considerations:

Buses

School buses

In the US, six states—California, Florida, Louisiana, New Jersey, New York, and Texas—require seat belts on school buses.

Pros and cons have been alleged about the use of seatbelts in school buses. School buses, which are much bigger than the average vehicle, allow for the mass transportation of students from place to place. The American School Bus Council states in a brief article that "The children are protected like eggs in an egg carton—compartmentalized, and surrounded with padding and structural integrity to secure the entire container." (ASBC). Although school buses are considered safe for the mass transit of students, this does not guarantee that students will be injury-free in a crash. Seatbelts in buses are sometimes believed to make recovering from a roll or tip harder for students and staff, who could be trapped in their own safety belts.

In 2015, for the first time, NHTSA endorsed seat belts on school buses.

Motor coaches

In the European Union, all new long-distance buses and coaches must be fitted with seat belts.

Australia has required lap/sash seat belts in new coaches since 1994. These must comply with Australian Design Rule 68, which requires the seat belt, seat and seat anchorage to withstand 20g deceleration and an impact by an unrestrained occupant to the rear.

In the United States, NHTSA has required lap-shoulder seat belts in new "over-the-road" buses (which include most coaches) since 2016.

Trains

The use of seatbelts in trains has been investigated. Concerns about survival-space intrusion in train crashes and increased injuries to unrestrained or incorrectly restrained passengers led researchers to discourage the use of seat belts in trains.

"It has been shown that there is no net safety benefit for passengers who choose to wear 3-point restraints on passenger-carrying rail vehicles. Generally, passengers who choose not to wear restraints in a vehicle modified to accept 3-point restraints receive marginally more severe injuries."

Airplanes

All aerobatic aircraft and gliders (sailplanes) are fitted with four- or five-point harnesses, as are many types of light aircraft and many types of military aircraft. The seatbelts in these aircraft have the dual function of crash protection and keeping the pilot(s) and crew in their seat(s) during turbulence and aerobatic maneuvers. Passenger aircraft are fitted with lap belts. Unlike road vehicles, passenger aircraft seat belts are not primarily designed for crash protection; their main purpose is to keep passengers in their seats during events such as turbulence. Many civil aviation authorities require a "fasten seat belt" sign in passenger aircraft that can be activated by a pilot during takeoff, turbulence, and landing. The International Civil Aviation Organization recommends the use of child restraints. Some airline authorities, including the UK Civil Aviation Authority (CAA), permit the use of airline infant lap belts (sometimes known as an infant loop or belly belt) to secure an infant under two sitting on an adult's lap.

Child occupants

As with adult drivers and passengers, the advent of seat belts was accompanied by calls for their use by child occupants, including legislation requiring such use. Generally, children using adult seat belts suffer significantly lower injury risk when compared to non-buckled children.

The UK extended compulsory seatbelt wearing to child passengers under the age of 14 in 1989. It was observed that this measure was accompanied by a 10% increase in fatalities and a 12% increase in injuries among the target population. In crashes, small children who wear adult seatbelts can suffer "seat-belt syndrome" injuries including severed intestines, ruptured diaphragms, and spinal damage. There is also research suggesting that children in inappropriate restraints are at significantly increased risk of head injury, one of the authors of this research said, "The early graduation of kids into adult lap and shoulder belts is a leading cause of child-occupant injuries and deaths."

As a result of such findings, many jurisdictions now advocate or require child passengers to use specially designed child restraints. Such systems include separate child-sized seats with their own restraints and booster cushions for children using adult restraints. In some jurisdictions, children below a certain size are forbidden to travel in front car seats.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1867 2023-08-13 00:46:11

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,351

Re: Miscellany

1870) Sky

Gist

The sky is the area above the earth, in which clouds, the sun, etc. can be seen.

Details

The sky is an unobstructed view upward from the surface of the Earth. It includes the atmosphere and outer space. It may also be considered a place between the ground and outer space, thus distinct from outer space.

In the field of astronomy, the sky is also called the celestial sphere. This is an abstract sphere, concentric to the Earth, on which the Sun, Moon, planets, and stars appear to be drifting. The celestial sphere is conventionally divided into designated areas called constellations.

Usually, the term sky informally refers to a perspective from the Earth's surface; however, the meaning and usage can vary. An observer on the surface of the Earth can see a small part of the sky, which resembles a dome (sometimes called the sky bowl) appearing flatter during the day than at night. In some cases, such as in discussing the weather, the sky refers to only the lower, denser layers of the atmosphere.

The daytime sky appears blue because air molecules scatter shorter wavelengths of sunlight more than longer ones (redder light). The night sky appears to be a mostly dark surface or region spangled with stars. The Sun and sometimes the Moon are visible in the daytime sky unless obscured by clouds. At night, the Moon, planets, and stars are similarly visible in the sky.

Some of the natural phenomena seen in the sky are clouds, rainbows, and aurorae. Lightning and precipitation are also visible in the sky. Certain birds and insects, as well as human inventions like aircraft and kites, can fly in the sky. Due to human activities, smog during the day and light pollution during the night are often seen above large cities.

Etymology

The word sky comes from the Old Norse ský, meaning 'cloud, abode of God'. The Norse term is also the source of the Old English scēo, which shares the same Indo-European base as the classical Latin obscūrus, meaning 'obscure'.

In Old English, the term heaven was used to describe the observable expanse above the earth. During the period of Middle English, "heaven" began shifting toward its current, religious meaning.

During daytime

Except for direct sunlight, most of the light in the daytime sky is caused by scattering, which is dominated by a small-particle limit called Rayleigh scattering. The scattering due to molecule-sized particles (as in air) is greater in the directions both toward and away from the source of light than it is in directions perpendicular to the incident path. Scattering is significant for light at all visible wavelengths, but is stronger at the shorter (bluer) end of the visible spectrum, meaning that the scattered light is bluer than its source: the Sun. The remaining direct sunlight, having lost some of its shorter-wavelength components, appears slightly less blue.
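
To make the wavelength dependence concrete, here is a minimal Python sketch of the Rayleigh 1/λ⁴ scaling; the wavelengths of roughly 450 nm for blue and 700 nm for red are illustrative values of my choosing, not figures from the text:

    # Rayleigh scattering strength scales as 1 / wavelength^4, so shorter
    # (bluer) wavelengths scatter more strongly than longer (redder) ones.
    def rayleigh_ratio(short_nm: float, long_nm: float) -> float:
        """Relative scattering strength of a shorter vs. a longer wavelength."""
        return (long_nm / short_nm) ** 4

    # Illustrative wavelengths: ~450 nm (blue) vs ~700 nm (red).
    print(rayleigh_ratio(450, 700))  # ~5.9: blue scatters about six times more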

Scattering also occurs even more strongly in clouds. Individual water droplets refract white light into a set of colored rings. If a cloud is thick enough, scattering from multiple water droplets will wash out the set of colored rings and create a washed-out white color.

The sky can turn a multitude of colors such as red, orange, purple, and yellow (especially near sunset or sunrise) when the light must travel a much longer path (or optical depth) through the atmosphere. Scattering effects also partially polarize light from the sky and are most pronounced at an angle 90° from the Sun. Scattered light from the horizon travels through as much as 38 times the air mass as light from the zenith, producing a blue gradient that is vivid at the zenith and pale near the horizon. Red light is also scattered if there is enough air between the source and the observer, causing parts of the sky to change color as the Sun rises or sets. As the air mass nears infinity, scattered daylight appears whiter and whiter.

Apart from the Sun, distant clouds or snowy mountaintops may appear yellow. The effect is not very obvious on clear days, but is very pronounced when clouds cover the line of sight, reducing the blue hue from scattered sunlight. At higher altitudes, the sky tends toward darker colors since scattering is reduced due to lower air density. An extreme example is the Moon, where no atmospheric scattering occurs, making the lunar sky black even when the Sun is visible.

Sky luminance distribution models have been recommended by the International Commission on Illumination (CIE) for the design of daylighting schemes. Recent developments relate to "all sky models" for modelling sky luminance under weather conditions ranging from clear to overcast.

During twilight

The brightness and color of the sky vary greatly over the course of a day, and the primary cause of these properties differs as well. When the Sun is well above the horizon, direct scattering of sunlight (Rayleigh scattering) is the overwhelmingly dominant source of light. However, during twilight, the period between sunset and night or between night and sunrise, the situation is more complex.

Green flashes and green rays are optical phenomena that occur shortly after sunset or before sunrise, when a green spot is visible above the Sun, usually for no more than a second or two, or it may resemble a green ray shooting up from the sunset point. Green flashes are a group of phenomena that stem from different causes, most of which occur when there is a temperature inversion (when the temperature increases with altitude rather than the normal decrease in temperature with altitude). Green flashes may be observed from any altitude (even from an aircraft). They are usually seen above an unobstructed horizon, such as over the ocean, but are also seen above clouds and mountains. Green flashes may also be observed at the horizon in association with the Moon and bright planets, including Venus and Jupiter.

Earth's shadow is the shadow that the planet casts through its atmosphere and into outer space. This atmospheric phenomenon is visible during civil twilight (after sunset and before sunrise). When the weather conditions and the observing site permit a clear view of the horizon, the shadow's fringe appears as a dark or dull bluish band just above the horizon, in the low part of the sky opposite of the (setting or rising) Sun's direction. A related phenomenon is the Belt of Venus (or antitwilight arch), a pinkish band that is visible above the bluish band of Earth's shadow in the same part of the sky. No defined line divides Earth's shadow and the Belt of Venus; one colored band fades into the other in the sky.

Twilight is divided into three stages according to the Sun's depth below the horizon, measured in segments of 6°. After sunset, the civil twilight sets in; it ends when the Sun drops more than 6° below the horizon. This is followed by the nautical twilight, when the Sun is between 6° and 12° below the horizon (depth between −6° and −12°), after which comes the astronomical twilight, defined as the period between −12° and −18°. When the Sun drops more than 18° below the horizon, the sky generally attains its minimum brightness.
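
The 6° bands above translate directly into a simple classifier. The following is a minimal Python sketch of that convention (the function name and the choice of the Sun's altitude in degrees as input are mine, not from the text):

    def twilight_stage(sun_altitude_deg: float) -> str:
        """Classify the sky state from the Sun's altitude in degrees.

        Negative altitudes mean the Sun is below the horizon; the 6-degree
        bands follow the civil/nautical/astronomical convention above.
        """
        if sun_altitude_deg >= 0:
            return "day"
        if sun_altitude_deg >= -6:
            return "civil twilight"
        if sun_altitude_deg >= -12:
            return "nautical twilight"
        if sun_altitude_deg >= -18:
            return "astronomical twilight"
        return "night"

    print(twilight_stage(-8))  # nautical twilight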

Several sources contribute to the intrinsic brightness of the sky, namely airglow, indirect scattering of sunlight, scattering of starlight, and artificial light pollution.

During the night

The Milky Way can be seen as a large band across the night sky; in a 360° panorama it appears distorted into an arch.
The term night sky refers to the sky as seen at night. The term is usually associated with skygazing and astronomy, with reference to views of celestial bodies such as stars, the Moon, and planets that become visible on a clear night after the Sun has set. Natural light sources in a night sky include moonlight, starlight, and airglow, depending on location and timing. The fact that the sky is not completely dark at night can be easily observed. Were the sky (in the absence of moon and city lights) absolutely dark, one would not be able to see the silhouette of an object against the sky.

The night sky and studies of it have a historical place in both ancient and modern cultures. In the past, for instance, farmers have used the state of the night sky as a calendar to determine when to plant crops. The ancient belief in astrology is generally based on the belief that relationships between heavenly bodies influence or convey information about events on Earth. The scientific study of the night sky and bodies observed within it, meanwhile, takes place in the science of astronomy.

Within visible-light astronomy, the visibility of celestial objects in the night sky is affected by light pollution. The presence of the Moon in the night sky has historically hindered astronomical observation by increasing the amount of ambient lighting. With the advent of artificial light sources, however, light pollution has been a growing problem for viewing the night sky. Special filters and modifications to light fixtures can help to alleviate this problem, but for the best views, both professional and amateur optical astronomers seek viewing sites located far from major urban areas.

Use in weather forecasting

Along with pressure tendency, the condition of the sky is one of the more important parameters used to forecast weather in mountainous areas. Thickening of cloud cover or the invasion of a higher cloud deck is indicative of rain in the near future. At night, high thin cirrostratus clouds can lead to halos around the Moon, which indicate the approach of a warm front and its associated rain. Morning fog portends fair conditions and can be associated with a marine layer, an indication of a stable atmosphere. Rainy conditions are preceded by wind or clouds which prevent fog formation. The approach of a line of thunderstorms could indicate the approach of a cold front. Cloud-free skies are indicative of fair weather for the near future. The use of sky cover in weather prediction has led to various weather lore over the centuries.

Tropical cyclones

Within 36 hours of the passage of a tropical cyclone's center, the pressure begins to fall and a veil of white cirrus clouds approaches from the cyclone's direction. Within 24 hours of the closest approach to the center, low clouds begin to move in, also known as the bar of a tropical cyclone, as the barometric pressure begins to fall more rapidly and the winds begin to increase. Within 18 hours of the center's approach, squally weather is common, with sudden increases in wind accompanied by rain showers or thunderstorms. Within six hours of the center's arrival, rain becomes continuous. Within an hour of the center, the rain becomes very heavy and the highest winds within the tropical cyclone are experienced. When the center arrives with a strong tropical cyclone, weather conditions improve and the sun becomes visible as the eye moves overhead. Once the system departs, winds reverse and, along with the rain, suddenly increase. One day after the center's passage, the low overcast is replaced with a higher overcast, and the rain becomes intermittent. By 36 hours after the center's passage, the high overcast breaks and the pressure begins to level off.

Use in transportation

Flight is the process by which an object moves through or beyond the sky (as in the case of spaceflight), whether by generating aerodynamic lift, propulsive thrust, aerostatically using buoyancy, or by ballistic movement, without any direct mechanical support from the ground. The engineering aspects of flight are studied in aerospace engineering, which is subdivided into aeronautics (the study of vehicles that travel through the air) and astronautics (the study of vehicles that travel through space), and in ballistics, the study of the flight of projectiles. While human beings have been capable of flight via hot air balloons since 1783, other species have used flight for significantly longer. Animals such as birds, bats, and insects are capable of flight. Spores and seeds from plants use flight, via use of the wind, as a method of propagating their species.

Significance in mythology

Many mythologies have deities especially associated with the sky. In Egyptian religion, the sky was deified as the goddess Nut and as the god Horus. Dyeus is reconstructed as the god of the sky, or the sky personified, in Proto-Indo-European religion, whence Zeus, the god of the sky and thunder in Greek mythology and the Roman god of sky and thunder Jupiter.

In the Australian Aboriginal mythology of the Arrernte people, Altjira is the main sky god and also the creator god. In Iroquois mythology, Atahensic was a sky goddess who fell down to the ground during the creation of the Earth. Many cultures have drawn constellations between stars in the sky, using them in association with legends and mythology about their deities.

Additional Information

Why Is the Sky Blue?

The Short Answer:

Sunlight reaches Earth's atmosphere and is scattered in all directions by all the gases and particles in the air. Blue light is scattered more than the other colors because it travels as shorter, smaller waves. This is why we see a blue sky most of the time.

It's easy to see that the sky is blue. Have you ever wondered why?

A lot of other smart people have, too. And it took a long time to figure it out!

The light from the Sun looks white. But it is really made up of all the colors of the rainbow.

A prism separates white light into the colors of the rainbow.

When white light shines through a prism, the light is separated into all its colors. A prism is a specially shaped crystal.

If you visited The Land of the Magic Windows, you learned that the light you see is just one tiny bit of all the kinds of light energy beaming around the universe--and around you!

Like energy passing through the ocean, light energy travels in waves, too. Some light travels in short, "choppy" waves. Other light travels in long, lazy waves. Blue light waves are shorter than red light waves.

Different colors of light have different wavelengths.

All light travels in a straight line unless something gets in the way and does one of these things:

reflect it (like a mirror)

bend it (like a prism)

or scatter it (like molecules of the gases in the atmosphere)

Sunlight reaches Earth's atmosphere and is scattered in all directions by all the gases and particles in the air. Blue light is scattered in all directions by the tiny molecules of air in Earth's atmosphere. Blue is scattered more than other colors because it travels as shorter, smaller waves. This is why we see a blue sky most of the time.

Atmosphere scatters blue light more than other colors.

Closer to the horizon, the sky fades to a lighter blue or white. The sunlight reaching us from low in the sky has passed through even more air than the sunlight reaching us from overhead. As the sunlight has passed through all this air, the air molecules have scattered and rescattered the blue light many times in many directions.

Also, the surface of Earth has reflected and scattered the light. All this scattering mixes the colors together again so we see more white and less blue.

What makes a red sunset?

As the Sun gets lower in the sky, its light is passing through more of the atmosphere to reach you. Even more of the blue light is scattered, allowing the reds and yellows to pass straight through to your eyes.

Sometimes the whole western sky seems to glow. The sky appears red because small particles of dust, pollution, or other aerosols also scatter blue light, leaving more purely red and yellow light to go through the atmosphere.

Is the sky blue on other planets, too?

It all depends on what’s in the atmosphere! For example, Mars has a very thin atmosphere made mostly of carbon dioxide and filled with fine dust particles. These fine particles scatter light differently than the gases and particles in Earth’s atmosphere.

Photos from NASA’s rovers and landers on Mars have shown us that sunset colors there are actually the opposite of what you’d experience on Earth. During the daytime, the Martian sky takes on an orange or reddish color. But as the Sun sets, the sky around the Sun begins to take on a blue-gray tone.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1868 2023-08-14 00:03:04

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,351

Re: Miscellany

1871) Storage

Gist

Storage is the keeping of things until they are needed; the place where they are kept.

Summary

A storage room or storeroom is a room in a building for storing objects. Such rooms are not designed for permanent residence and are often small and windowless. They often have more lenient requirements for fire protection, daylight entry, and emergency exits than rooms intended for permanent residence.

In businesses, a storage room is a place where employees can put goods and take them out again when shop stock runs low or demand is high.

In dwellings, storage rooms are used to store less-used tools or items that are not needed on a daily basis. The term shed is often used for separate small independent buildings for storing food, equipment and the like, for example storage sheds, toolsheds or woodsheds. Historically, storage rooms in homes have often been narrow, dark and inconspicuous, and placed on floors other than the main floors of the building, such as in a basement or an attic.

A storage room can be lockable, and can be located in a housing unit or a common area, indoors or outdoors.

Rental of storage

There are companies that rent out storage space for self storage, where individuals and companies can rent storage rooms.

Television programs

Sheds, garages and other storage rooms can become overcrowded and cluttered with items that are not in use, or old scrap that has neither been thrown away nor repaired yet: things that one cannot bring oneself to get rid of or has big plans for. The value of such clutter is often small, especially if the people who live there have a compulsive hoarding problem and the objects are stored in a way that lets their condition deteriorate badly. The TV show Hoarders is one of several TV shows that try to help people with such problems.

In some cases, there may be valuable antiques that have been stored and forgotten. The TV program American Pickers is a show where the hosts go through old collections in search of valuable antiques.

Storage Wars is a TV series in which the contents of storage lockers whose renters have not paid are auctioned off, without the bidders being allowed to enter and have a close look at what is inside beyond a quick peek from the outside.

Cloud storage

Cloud storage is a model of computer data storage in which the digital data is stored in logical pools, said to be on "the cloud". The physical storage spans multiple servers (sometimes in multiple locations), and the physical environment is typically owned and managed by a hosting company. These cloud storage providers are responsible for keeping the data available and accessible, and the physical environment secured, protected, and running. People and organizations buy or lease storage capacity from the providers to store user, organization, or application data.

Cloud storage services may be accessed through a colocated cloud computing service, a web service application programming interface (API) or by applications that use the API, such as cloud desktop storage, a cloud storage gateway or Web-based content management systems.
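
As an illustration of the API route, here is a minimal sketch using Amazon S3 (one of the services named below) through the boto3 Python library; the bucket and key names are hypothetical, and credentials are assumed to be configured through the usual AWS mechanisms:

    # Minimal sketch: storing and retrieving one object via the Amazon S3 API
    # using boto3. The bucket and key names are hypothetical.
    import boto3

    s3 = boto3.client("s3")  # credentials come from the environment/config

    # Upload a small object.
    s3.put_object(Bucket="example-bucket", Key="notes/hello.txt",
                  Body=b"hello, cloud")

    # Download it again.
    response = s3.get_object(Bucket="example-bucket", Key="notes/hello.txt")
    print(response["Body"].read())  # b'hello, cloud'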

History

Cloud computing is believed to have been invented by J. C. R. Licklider in the 1960s with his work on ARPANET to connect people and data from anywhere at any time.

In 1983, CompuServe offered its consumer users a small amount of disk space that could be used to store any files they chose to upload.

In 1994, AT&T launched PersonaLink Services, an online platform for personal and business communication and entrepreneurship. The storage was one of the first to be all web-based, and its commercials invited customers to "think of our electronic meeting place as the cloud." In 2005, Box announced an online file sharing and personal cloud content management service for businesses. Amazon Web Services introduced its cloud storage service Amazon S3 in 2006, and it has gained widespread recognition and adoption as the storage supplier to popular services such as SmugMug, Dropbox, and Pinterest.

Architecture

Cloud storage is based on highly virtualized infrastructure and is like broader cloud computing in terms of interfaces, near-instant elasticity and scalability, multi-tenancy, and metered resources. Cloud storage services can be used from an off-premises service (Amazon S3) or deployed on-premises (ViON Capacity Services).

There are three types of cloud storage: a hosted object storage service, file storage, and block storage. Each of these cloud storage types offers its own advantages.

Examples of object storage services that can be hosted and deployed with cloud storage characteristics include Amazon S3, Oracle Cloud Storage and Microsoft Azure Storage, object storage software like OpenStack Swift, object storage systems like EMC Atmos, EMC ECS and Hitachi Content Platform, and distributed storage research projects like OceanStore and VISION Cloud.

Examples of file storage services include Amazon Elastic File System (EFS) and Qumulo Core, used for applications that need access to shared files and require a file system. This storage is often supported with a Network Attached Storage (NAS) server, used for large content repositories, development environments, media stores, or user home directories.

A block storage service like Amazon Elastic Block Store (EBS) is used for other enterprise applications like databases, which often require dedicated, low-latency storage for each host. This is comparable in certain respects to direct attached storage (DAS) or a storage area network (SAN).

Cloud storage is:

* Made up of many distributed resources, but still acts as one, either in a federated or a cooperative storage cloud architecture
* Highly fault tolerant through redundancy and distribution of data
* Highly durable through the creation of versioned copies
* Typically eventually consistent with regard to data replicas

Details

Storage is the means of holding and protecting commodities for later use.

Foodstuffs were probably the first goods to be stored, being put aside during months of harvest for use in winter. To preserve them from rotting, foods were treated in a variety of ways (e.g., dried, smoked, pickled, or sealed in water- and air-tight containers) and placed in cool, dark cellars for storage. Modern refrigeration techniques made it possible to store agricultural products with a minimum of change in their natural condition.

Commerce created another major need for storage facilities. The basic goals in commercial storage are protection from weather and from destructive animals like rodents and insects, as well as security from theft. Storage facilities must also serve as a reservoir to accommodate seasonal and fluctuating demand. Efficiency in the transportation of goods often makes the accumulation of a reserve in storage (called stockpiling) advisable. Stockpiling is often advisable for greatest production efficiency as well, for it enables a factory to produce more of a single item than is immediately marketable before initiating the often costly and time-consuming procedures for adjusting production lines for another product. Thus, storage serves commerce as a holding operation between manufacture and market. In another type of storage, called terminaling, pipelines are used to transport products in flowable form directly from the factory to the point of storage.

Transportation, especially transportation over long distances by slow means (such as water-borne shipment), may technically be considered as an aspect of storage. Consignment of goods by diverting shipments is an effective way to satisfy immediate market demand. The use of such “rolling warehouses” is common in both the chemical and lumber industries.

The rule of thumb in transporting goods is that the higher the volume shipped at one time, the lower the cost per item. Thus, the economics of making a large number of shipments must be weighed against the cost of accumulating goods in storage for a single shipment of a large number of items. Within the marketing process, transportation and storage have what are called place-time values, derived from the appropriate appearance of products when and where they are needed. In manufacturing as well, a high value must be placed on the insurance provided by the storing of parts (raw materials, components, machinery) necessary for production so that they are easily available when necessary.
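
A minimal sketch of that rule of thumb, assuming a toy cost model with one fixed cost per shipment plus a per-item cost (all numbers are hypothetical):

    # Toy model: a fixed cost per shipment amortizes over more items as the
    # shipped volume grows, so the cost per item falls. Numbers are invented.
    def cost_per_item(items: int, fixed_cost: float = 1000.0,
                      unit_cost: float = 1.0) -> float:
        return (fixed_cost + unit_cost * items) / items

    for n in (100, 1_000, 10_000):
        print(n, cost_per_item(n))  # 11.0, 2.0, 1.1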

Large companies typically manufacture different but related items at a variety of locations, seldom producing their complete line at a single plant. Through the operation of storage houses called distribution centres, companies are able to offer their customers a complete selection of all their products, efficiently shipping whole mixed orders at once, rather than piecemeal from each factory.

Accurate market forecasting is essential to the successful functioning of a distribution centre, where the flow of products must be continuous in order that space not be wasted on unused or obsolete items. Further consolidation of the process is accomplished by the public warehouse, to which many companies ship their products, and from which a buyer can purchase a wide variety of items in a single shipment. Of course, in public warehousing, the manufacturer loses control over the handling of the product and over some of the aspects of customer relations. This disadvantage must be weighed against the underutilization of personnel and facilities that occurs in a private operation susceptible to fluctuating demand.

Some goods can be stored in bulk jointly with identical goods of the same quality and specifications without distinction as to manufacturer ownership. Thus, bulk products, such as standard chemicals or cereal grains, from different producers are placed in the same tank or silo in the warehouse. Each of these products can then be sold at a price appreciably lower than otherwise possible owing to the savings realized over individual storage and handling of small amounts. Similarly, the same product purchased by two retailers can continue to be stored together and then separated and inventoried as shipments to individual markets are made.

Such storage is dynamic—that is, the movement of products is fairly constant, and accessibility of items is essential.

Custody storage is a static type of storage. Goods of a high value such as business records and personal items are kept safe for a long period of time without handling. Capacity and security are then the most relevant factors.

Storage facilities are tailored to the needs of accessibility, security, and climate. Refrigerated space must be carefully designed, and heated areas must also be efficiently planned. In all storage facilities, fireproof materials such as concrete and steel are preferable. These materials lend themselves readily to prefabrication and have good insulating and acoustic properties.

Warehousing, the dynamic aspect of storage, is largely an automated process, designed to facilitate stock rotation by means of a combination of equipment such as stacker cranes built into the storage area, remote-controlled forklift trucks for vertical and horizontal movement of goods, and gravity flow racks, in which pallets are automatically replaced in a line. Many warehouses are computer-controlled from dispatching towers.

Additional Information:

What Does Storage Mean?

Storage is a process through which digital data is saved within a data storage device by means of computing technology. Storage is a mechanism that enables a computer to retain data, either temporarily or permanently.

Storage devices such as flash drives and hard disks are a fundamental component of most digital devices since they allow users to preserve all kinds of information such as videos, documents, pictures and raw data.

Storage may also be referred to as computer data storage or electronic data storage.

Storage is among the key components of a computer system and can be classified into several forms, although there are two major types:

* Volatile Storage (Memory): Requires a continuous supply of electricity to store/retain data. It acts as a computer’s primary storage for temporarily storing data and handling application workloads. Examples of volatile storage include cache memory and random access memory (RAM).

* Non-Volatile Storage: A type of storage mechanism that retains digital data even if it’s powered off or isn’t supplied with electrical power. This is often referred to as a secondary storage mechanism, and is used for permanent data storage requiring I/O operations. Examples of non-volatile storage include a hard disk, USB storage and optical media.

Storage is often confused with memory, although in computing the two terms have different meanings. Memory refers to the short-term location of temporary data (see volatile storage above), while storage devices, in fact, store data on a long-term basis for later use and access. While memory is cleared every time a computer is turned off, stored data is saved and stays intact until it’s manually deleted. Primary or volatile storage tends to be much faster than secondary storage due to its proximity to the processor, but it’s also comparably smaller. Secondary storage can hold and handle significantly larger sizes of data, and keeps it inactive until it’s needed again.

Storage devices include a broad range of different magnetic, optical, flash, and virtual drives. They can be either internal (if they’re part of the computer’s hardware), external (if they are installed outside the computer), or removable (if they can be plugged in and removed without opening the computer). Storage also includes many forms of virtual and online storage devices such as cloud to allow users to access their data from multiple devices.

Common storage devices that are in use or have been used in the past include:

* Hard disks.
* Flash drives.
* Floppy diskettes.
* Tape drives.
* CD-ROM disks.
* Blu-ray disks.
* Memory cards.
* Cloud drives.

After a software command is issued by the user, digital data is stored inside the appropriate device. Data size is measured in bits (the smallest unit of measure of computer memory), with larger storage devices being able to store more data.

Storage capabilities have increased significantly in the last few decades, jumping up from the old 5.25-inch disks of the 1980s which held 360 kilobytes, to the modern hard drives which can hold several terabytes.
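
A quick back-of-the-envelope check of that growth, using decimal units (a minimal sketch; the floppy-to-terabyte comparison is mine, built only from the figures above):

    # How many 360-kilobyte floppy disks of the 1980s fit in one terabyte?
    floppy_bytes = 360 * 1_000      # 360 kB, decimal units
    terabyte_bytes = 10 ** 12       # 1 TB
    print(terabyte_bytes // floppy_bytes)  # 2_777_777: roughly 2.8 million disks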


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1869 2023-08-15 00:59:32

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,351

Re: Miscellany

1872) Day

Gist

A day is the interval of light between two successive nights; the time between sunrise and sunset.

Summary

Day is the time required for a celestial body to turn once on its axis; especially the period of the Earth’s rotation. The sidereal day is the time required for the Earth to rotate once relative to the background of the stars—i.e., the time between two observed passages of a star over the same meridian of longitude. The apparent solar day is the time between two successive transits of the Sun over the same meridian. Because the orbital motion of the Earth makes the Sun seem to move slightly eastward each day relative to the stars, the solar day is about four minutes longer than the sidereal day; i.e., the mean solar day is 24 hours 3 minutes 56.555 seconds of mean sidereal time; more usually the sidereal day is expressed in terms of solar time, being 23 hours 56 minutes 4 seconds of mean solar time long. The mean solar day is the average value of the solar day, which changes slightly in length during the year as Earth’s speed in its orbit varies.

The solar day is the fundamental unit of time in both astronomical practice and civil life. It begins at midnight and runs through 24 hours, until the next midnight. A day is commonly divided into two sets of 12 hours for ordinary timekeeping purposes; those hours from midnight to noon are designated AM (ante meridiem, “before noon”), and those from noon to midnight are designated PM (post meridiem, “after noon”). In law the word day, unless qualified, means the 24 hours between midnight and midnight, rather than the daylight hours between sunrise and sunset.

Details

A day is the time period of a full rotation of the Earth with respect to the Sun. On average, this is 24 hours (86,400 seconds). As a day passes at a given location it experiences morning, noon, afternoon, evening, and night. This daily cycle drives circadian rhythms in many organisms, which are vital to many life processes.

A collection of sequential days is organized into calendars as dates, almost always into weeks, months and years. A solar calendar organizes dates based on the Sun's annual cycle, giving consistent start dates for the four seasons from year to year. A lunar calendar organizes dates based on the Moon's lunar phase.

In common usage, a day starts at midnight, written as 00:00 or 12:00 am in 24- or 12-hour clocks, respectively. Because the time of midnight varies between locations, time zones are set up to facilitate the use of a uniform standard time. Other conventions are sometimes used, for example the Jewish religious calendar counts days from sunset to sunset, so the Jewish Sabbath begins at sundown on Friday. In astronomy, a day begins at noon so that observations throughout a single night are recorded as happening on the same day.

In specific applications, the definition of a day is slightly modified, such as in the SI day (exactly 86,400 seconds) used for computers and standards keeping, local mean time accounting of the Earth's natural fluctuation of a solar day, and stellar day and sidereal day (using the celestial sphere) used for astronomy. In most countries outside of the tropics, daylight saving time is practiced, and each year there will be one 23-hour civil day and one 25-hour civil day. Due to slight variations in the rotation of the Earth, there are rare times when a leap second will get inserted at the end of a UTC day, and so while almost all days have a duration of 86,400 seconds, there are these exceptional cases of a day with 86,401 seconds (in the half-century spanning 1972 through 2022, there have been a total of 27 leap seconds that have been inserted, so roughly once every other year).

Etymology

The term comes from the Old English dæg, with cognates such as dagur in Icelandic, Tag in German, and dag in Norwegian, Danish, Swedish and Dutch – all stemming from a Proto-Germanic root *dagaz. As of October 17, 2015, day is the 205th most common word in US English, and the 210th most common in UK English.

Definitions

Apparent and Mean Solar Day

Several definitions of this universal human concept are used according to context, need and convenience. Besides the day of 24 hours (86,400 seconds), the word day is used for several different spans of time based on the rotation of the Earth around its axis. An important one is the solar day, defined as the time it takes for the Sun to return to its culmination point (its highest point in the sky). Due to the orbit's eccentricity, the Sun resides at one of the orbit's foci rather than at its center. Consequently, due to Kepler's second law, the planet travels at different speeds at various positions in its orbit, and thus a solar day is not the same length of time throughout the orbital year. Because the Earth moves along an eccentric orbit around the Sun while the Earth spins on an inclined axis, this period can be up to 7.9 seconds more than (or less than) 24 hours. In recent decades, the average length of a solar day on Earth has been about 86,400.002 seconds (24.000 000 6 hours) and there are currently about 365.2421875 solar days in one mean tropical year.

Ancient custom has a new day start at either the rising or setting of the Sun on the local horizon (Italian reckoning, for example, being 24 hours from sunset, old style). The exact moment of, and the interval between, two sunrises or sunsets depends on the geographical position (longitude and latitude, as well as altitude) and the time of year (as indicated by ancient hemispherical sundials).

A more constant day can be defined by the Sun passing through the local meridian, which happens at local noon (upper culmination) or midnight (lower culmination). The exact moment is dependent on the geographical longitude, and to a lesser extent on the time of the year. The length of such a day is nearly constant (24 hours ± 30 seconds). This is the time as indicated by modern sundials.

A further improvement defines a fictitious mean Sun that moves with constant speed along the celestial equator; the speed is the same as the average speed of the real Sun, but this removes the variation over a year as the Earth moves along its orbit around the Sun (due to both its velocity and its axial tilt).

In terms of Earth's rotation, the average solar day corresponds to about 360.9856° of rotation. A day lasts for more than 360° of rotation because of the Earth's revolution around the Sun. With a full year being slightly more than 360 days, the Earth's daily orbit around the Sun is slightly less than 1°, so the day is slightly less than 361° of rotation.

Elsewhere in the Solar System or other parts of the universe, a day is a full rotation of other large astronomical objects with respect to its star.

Civil day

For civil purposes, a common clock time is typically defined for an entire region based on the local mean solar time at a central meridian. Such time zones began to be adopted about the middle of the 19th century when railroads with regularly occurring schedules came into use, with most major countries having adopted them by 1929. As of 2015, throughout the world, 40 such zones are now in use: the central zone, from which all others are defined as offsets, is known as UTC±00, which uses Coordinated Universal Time (UTC).

The most common convention starts the civil day at midnight: this is near the time of the lower culmination of the Sun on the central meridian of the time zone. Such a day may be referred to as a calendar day.

A day is commonly divided into 24 hours of 60 minutes, with each minute composed of 60 seconds.

Sidereal day

A sidereal day or stellar day is the span of time it takes for the Earth to make one entire rotation with respect to the celestial background or a distant star (assumed to be fixed). Measuring a day as such is used in astronomy. A sidereal day is about 4 minutes less than a solar day of 24 hours (23 hours 56 minutes and 4.09 seconds), or 0.99726968 of a solar day of 24 hours. There are about 366.2422 stellar days in one mean tropical year (one stellar day more than the number of solar days).
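
The figures above can be cross-checked with a short calculation (a minimal sketch; both formulas use only numbers quoted in this paragraph):

    # Length of a sidereal day, computed two ways from figures quoted above.
    solar_day_s = 86_400

    # 1) As the stated fraction of a 24-hour solar day:
    sidereal_a = 0.99726968 * solar_day_s
    # 2) From the ratio of solar days to stellar days in a tropical year:
    sidereal_b = solar_day_s * 365.2422 / 366.2422

    for s in (sidereal_a, sidereal_b):
        h, rem = divmod(s, 3600)
        m, sec = divmod(rem, 60)
        print(f"{int(h)}h {int(m)}m {sec:.2f}s")  # ~23h 56m 4.1s both ways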

Besides the stellar day on Earth, other bodies in the Solar System have days of their own, the durations of which differ from planet to planet.

In the International System of Units

In the International System of Units (SI), the day is not an official unit, but it is accepted for use with SI. A day, with symbol d, is defined using SI units as 86,400 seconds; the second is the base unit of time in SI units. In 1967–68, during the 13th CGPM (Resolution 1), the International Bureau of Weights and Measures (BIPM) redefined a second as "... the duration of 9,192,631,770 periods of the radiation corresponding to the transition between two hyperfine levels of the ground state of the caesium 133 atom." This makes the SI-based day last exactly 794,243,384,928,000 of those periods.
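
That figure is simply the product of the two constants in the definition, as a one-line check shows:

    # The SI day in caesium periods: 86,400 seconds per day, each second
    # defined as 9,192,631,770 periods of the caesium-133 transition.
    print(86_400 * 9_192_631_770)  # 794243384928000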

In decimal and metric time

Various decimal or metric time proposals have been made, but they do not redefine the day, instead using the day or the sidereal day as a base unit. Metric time uses metric prefixes to keep time, with the day as the base unit and smaller units as fractions of a day: a metric hour (deci-day) is 1⁄10 of a day; a metric minute (milli-day) is 1⁄1000 of a day; and so on. Similarly, in decimal time the length of the day is the same as in normal time, but the day is split into 10 hours, and 10 days comprise a décade – the equivalent of a week. Three décades make a month. Among decimal time proposals that did not redefine the day: Henri de Sarrauton's proposal kept days and subdivided hours into 100 minutes; in Mendizábal y Tamborel's proposal, the sidereal day was the basic unit, with subdivisions made upon it; and Rey-Pailhade's proposal divided the day into 100 cés.
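
A minimal converter sketch for the metric scheme described above (the function name and the example time are mine, not from the text):

    # Convert a conventional clock time to metric time: the day is the base
    # unit, a metric "hour" is 1/10 of a day, a metric "minute" 1/1000.
    def to_metric_time(hours: int, minutes: int, seconds: int = 0):
        day_fraction = (hours * 3600 + minutes * 60 + seconds) / 86_400
        metric_hours = int(day_fraction * 10)
        metric_minutes = int(day_fraction * 1000) % 100
        return metric_hours, metric_minutes

    print(to_metric_time(18, 0))  # (7, 50): 18:00 is 0.75 of a day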


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1870 2023-08-15 23:59:48

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,351

Re: Miscellany

1873) Night

Gist

Night is the part of the day when it is dark and when most people sleep. It is the period of darkness between sunset and sunrise.

Details

Night (also described as nighttime, unconventionally spelled as "nite") is the period of ambient darkness from sunset to sunrise during each 24-hour day, when the Sun is below the horizon. The exact time when night begins and ends depends on the location and varies throughout the year, based on factors such as season and latitude.

The word can be used in a different sense as the time between bedtime and morning. In common communication, it is a farewell (sometimes lengthened to "good night"), mainly when someone is going to sleep or leaving.

Astronomical night is the period between astronomical dusk and astronomical dawn, when the Sun is between 18 and 90 degrees below the horizon and does not illuminate the sky. As seen from latitudes between about 48.56° and 65.73° north or south of the equator, complete darkness does not occur around the summer solstice because, although the Sun sets, it is never more than 18° below the horizon at lower culmination. (A Sun angle of −90°, straight down, occurs at the Tropic of Cancer on the December solstice, at the Tropic of Capricorn on the June solstice, and at the equator on the equinoxes.) As seen from latitudes greater than about 72° north or south, complete darkness does not occur around either equinox because, although the Sun sets, it is never more than 18° below the horizon.
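
The 48.56° figure can be recovered from a simple spherical approximation that ignores refraction (a sketch under that assumption; the function is mine):

    # At local solar midnight on the June solstice (northern hemisphere), the
    # Sun's depression below the horizon is roughly 90 - latitude - 23.44
    # (the axial tilt, in degrees). Astronomical night needs >= 18 degrees.
    AXIAL_TILT = 23.44

    def midnight_sun_depression(latitude_deg: float) -> float:
        return 90.0 - latitude_deg - AXIAL_TILT

    print(midnight_sun_depression(48.56))  # ~18.0: the threshold quoted above
    print(midnight_sun_depression(60.0))   # ~6.6: twilight lasts all night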

The opposite of night is day (or "daytime", to distinguish it from "day" referring to a 24-hour period). Twilight is the period of night after sunset or before sunrise when the Sun still illuminates the sky when it is below the horizon. At any given time, one side of Earth is bathed in sunlight (the daytime), while the other side is in darkness caused by Earth blocking the sunlight. The central part of the shadow is called the umbra, where the night is darkest.

Natural illumination at night is still provided by a combination of moonlight, planetary light, starlight, zodiacal light, gegenschein, and airglow. In some circumstances, aurorae, lightning, and bioluminescence can provide some illumination. The glow provided by artificial lighting is sometimes referred to as light pollution because it can interfere with observational astronomy and ecosystems.

Duration and geography

On Earth, an average night is shorter than daytime due to two factors. Firstly, the Sun's apparent disk is not a point, but has an angular diameter of about 32 arcminutes (32'). Secondly, the atmosphere refracts sunlight so that some of it reaches the ground when the Sun is below the horizon by about 34'. The combination of these two factors means that light reaches the ground when the center of the solar disk is below the horizon by about 50'. Without these effects, daytime and night would be the same length on both equinoxes, the moments when the Sun appears to contact the celestial equator. On the equinoxes, daytime actually lasts almost 14 minutes longer than night does at the equator, and even longer towards the poles.
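
The "almost 14 minutes" figure follows from those two offsets with a little arithmetic (a rough sketch; the 15°-per-hour rate is the Sun's apparent motion, and vertical sunrise geometry at the equator is assumed):

    # Rough check of the ~14-minute equinox asymmetry at the equator. Sunrise
    # and sunset are timed from when the Sun's *center* sits about 50' below
    # the horizon (16' of semidiameter + 34' of refraction).
    offset_deg = 50 / 60          # 50 arcminutes, in degrees
    rate_deg_per_min = 15 / 60    # the Sun climbs ~15 deg/hour at the equator

    extra_per_end = offset_deg / rate_deg_per_min  # ~3.3 min earlier sunrise,
                                                   # ~3.3 min later sunset
    day_minus_night = 4 * extra_per_end            # day gains what night loses
    print(f"{day_minus_night:.1f} minutes")        # ~13.3, i.e. almost 14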

The summer and winter solstices mark the shortest and longest nights, respectively. The closer a location is to either the North Pole or the South Pole, the wider the range of variation in the night's duration. Although daytime and night nearly equalize in length on the equinoxes, the ratio of night to day changes more rapidly at high latitudes than at low latitudes before and after an equinox. In the Northern Hemisphere, Denmark experiences shorter nights in June than India. In the Southern Hemisphere, Antarctica sees longer nights in June than Chile. Both hemispheres experience the same patterns of night length at the same latitudes, but the cycles are 6 months apart so that one hemisphere experiences long nights (winter) while the other is experiencing short nights (summer).

In the region within either polar circle, the variation in daylight hours is so extreme that part of summer sees a period without night intervening between consecutive days, while part of winter sees a period without daytime intervening between consecutive nights.

Beyond Earth

The phenomenon of day and night is due to the rotation of a celestial body about its axis, creating the illusion of the Sun rising and setting. Different bodies spin at very different rates, some much faster than Earth and others extremely slowly, leading to very long days and nights. The planet Venus rotates once every 224.7 days – by far the slowest rotation period of any of the major planets. In contrast, the gas giant Jupiter's sidereal day is only 9 hours and 56 minutes. A planet's orbital motion also affects the length of its day-night cycle: Venus has a rotation period of 224.7 days but a day-night cycle just 116.75 days long, due to its retrograde rotation and orbital motion around the Sun. Mercury has the longest day-night cycle, a result of the 3:2 resonance between its orbital period and rotation period; this resonance gives it a day-night cycle that is 176 days long. A planet may experience large temperature variations between day and night, such as Mercury, the planet closest to the sun. This is one consideration in terms of planetary habitability or the possibility of extraterrestrial life.

Effects on life:

Biological

The disappearance of sunlight, the primary energy source for life on Earth, has dramatic effects on the morphology, physiology and behavior of almost every organism. Some animals sleep during the night, while other nocturnal animals, including moths and crickets, are active during this time. The effects of day and night are not seen in the animal kingdom alone – plants have also evolved adaptations to cope best with the lack of sunlight during this time. For example, crassulacean acid metabolism is a unique type of carbon fixation which allows some photosynthetic plants to store carbon dioxide in their tissues as organic acids during the night, which can then be used during the day to synthesize carbohydrates. This allows them to keep their stomata closed during the daytime, preventing transpiration of water when it is precious.

Social

The first constant electric light was demonstrated in 1835. As artificial lighting has improved, especially after the Industrial Revolution, nighttime activity has increased and become a significant part of the economy in most places. Many establishments, such as nightclubs, bars, convenience stores, fast-food restaurants, gas stations, distribution facilities, and police stations now operate 24 hours a day or stay open as late as 1 or 2 a.m. Even without artificial light, moonlight sometimes makes it possible to travel or work outdoors at night.

Nightlife is a collective term for entertainment that is available and generally more popular from the late evening into the early hours of the morning. It includes pubs, bars, nightclubs, parties, live music, concerts, cabarets, theatre, cinemas, and shows. These venues often require a cover charge for admission. Nightlife entertainment is often more adult-oriented than daytime entertainment.

Cultural and psychological

Night is often associated with danger and evil, because of the psychological connection of night's all-encompassing darkness to the fear of the unknown and darkness's hindrance of a major sensory system (the sense of sight). Nighttime is naturally associated with vulnerability and danger for human physical survival. Criminals, animals, and other potential dangers can be concealed by darkness. Midnight has a particular importance in human imagination and culture.

Upper Paleolithic art was found by André Leroi-Gourhan to show a pattern of choices in which animals experienced as dangerous were portrayed at a distance from the entrance of a cave dwelling, at a number of different cave locations.

The belief in magic often includes the idea that magic and magicians are more powerful at night. Séances of spiritualism are usually conducted closer to midnight. Similarly, mythical and folkloric creatures such as vampires, ghosts and werewolves are described as more active at night. In almost all cultures, legendary stories warn of the night's dangers.

The cultural significance of the night in Islam differs from that in Western culture. The Quran was revealed during the Night of Power, the most significant night according to Islam. Muhammad made his famous journey from Mecca to Jerusalem and then to heaven in the night. Another prophet, Abraham, came to realize the supreme being in charge of the universe at night.

People who prefer nocturnal activity are called night owls.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1871 2023-08-16 00:58:24

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,351

Re: Miscellany

1874) Sleep

Gist

Sleep is a state of reduced mental and physical activity in which consciousness is altered and sensory activity is inhibited to a certain extent. During sleep, there is a decrease in muscle activity and in interactions with the surrounding environment.

Summary

Sleep is a state of reduced mental and physical activity in which consciousness is altered and sensory activity is inhibited to a certain extent. During sleep, there is a decrease in muscle activity and in interactions with the surrounding environment. While sleep differs from wakefulness in terms of the ability to react to stimuli, it still involves active brain patterns, making it more reactive than a coma or disorders of consciousness.

Sleep occurs in repeating periods, during which the body alternates between two distinct modes: REM and non-REM sleep. Although REM stands for "rapid eye movement", this mode of sleep has many other aspects, including virtual paralysis of the body. Dreams are a succession of images, ideas, emotions, and sensations that usually occur involuntarily in the mind during certain stages of sleep.

During sleep, most of the body's systems are in an anabolic state, helping to restore the immune, nervous, skeletal, and muscular systems; these are vital processes that maintain mood, memory, and cognitive function, and play a large role in the function of the endocrine and immune systems. The internal circadian clock promotes sleep daily at night. The diverse purposes and mechanisms of sleep are the subject of substantial ongoing research. Sleep is a highly conserved behavior across animal evolution, likely going back hundreds of millions of years.

Humans may suffer from various sleep disorders, including dyssomnias such as insomnia, hypersomnia, narcolepsy, and sleep apnea; parasomnias such as sleepwalking and rapid eye movement sleep behavior disorder; bruxism; and circadian rhythm sleep disorders. The use of artificial light has substantially altered humanity's sleep patterns. Common sources of artificial light include outdoor lighting and the screens of electronic devices such as smartphones and televisions, which emit large amounts of blue light, a form of light typically associated with daytime. This disrupts the release of the hormone melatonin needed to regulate the sleep cycle.

Details

Sleep is a normal, reversible, recurrent state of reduced responsiveness to external stimulation that is accompanied by complex and predictable changes in physiology. These changes include coordinated, spontaneous, and internally generated brain activity as well as fluctuations in hormone levels and relaxation of musculature. A succinctly defined specific purpose of sleep remains unclear, but that is partly because sleep is a dynamic state that influences all physiology, rather than an individual organ or other isolated physical system. Sleep contrasts with wakefulness, in which state there is an enhanced potential for sensitivity and an efficient responsiveness to external stimuli. The sleep-wakefulness alternation is the most-striking manifestation in higher vertebrates of the more-general phenomenon of periodicity in the activity or responsivity of living tissue.

There is no single perfectly reliable criterion for defining sleep. It is typically described by the convergence of observations satisfying several different behavioral, motor, sensory, and physiological criteria. Occasionally, one or more of those criteria may be absent during sleep (e.g., in sleepwalking) or present during wakefulness (e.g., when sitting calmly), but even in such cases there usually is little difficulty in achieving agreement among observers in the discrimination between the two behavioral states.

The nature of sleep

Sleep usually requires the presence of relaxed skeletal muscles and the absence of the overt goal-directed behaviour of which the waking organism is capable. The characteristic posture associated with sleep in humans and in many but not all other animals is that of horizontal repose. The relaxation of the skeletal muscles in that posture and its implication of a more-passive role toward the environment are symptomatic of sleep. Instances of activities such as sleepwalking raise interesting questions about whether the brain is capable of simultaneously being partly asleep and partly awake. In an extreme form of that principle, marine mammals appear to sleep with half the brain remaining responsive, possibly to maintain activities that allow them to surface for air.

Indicative of the decreased sensitivity of the human sleeper to the external environment are the typical closed eyelids (or the functional blindness associated with sleep while the eyes are open) and the presleep activities that include seeking surroundings characterized by reduced or monotonous levels of sensory stimulation. Three additional criteria—reversibility, recurrence, and spontaneity—distinguish sleep from other states. For example, compared with hibernation or coma, sleep is more easily reversible. Although the occurrence of sleep is not perfectly regular under all conditions, it is at least partially predictable from a knowledge of the duration of prior sleep periods and of the intervals between periods of sleep, and, although the onset of sleep may be facilitated by a variety of environmental or chemical means, sleep states are not thought of as being absolutely dependent upon such manipulations.

In experimental studies, sleep has also been defined in terms of physiological variables generally associated with recurring periods of inactivity identified behaviorally as sleep. For example, the typical presence of certain electroencephalogram (EEG) patterns (brain patterns of electrical activity) with behavioral sleep has led to the designation of such patterns as “signs” of sleep. Conversely, in the absence of such signs (as, for example, in a hypnotic trance), it is believed that true sleep is absent. Such signs as are now employed, however, are not invariably discriminating of the behavioral states of sleep and wakefulness. Advances in the technology of animal experimentation have made it possible to extend the physiological approach from externally measurable manifestations of sleep such as the EEG to the underlying neural (nerve) mechanisms presumably responsible for such manifestations. In addition, computational modeling of EEG signals may be used to obtain information about the brain activities that generate the signals. Such advances may eventually enable scientists to identify the specific structures that mediate sleep and to determine their functional roles in the sleep process.

In addition to the behavioral and physiological criteria already mentioned, subjective experience (in the case of the self) and verbal reports of such experience (in the case of others) are used at the human level to define sleep. Upon being alerted, one may feel or say, “I was asleep just then,” and such judgments ordinarily are accepted as evidence for identifying a pre-arousal state as sleep. Such subjective evidence, however, can be at variance with both behavioral classifications and sleep electrophysiology, raising interesting questions about how to define the true measure of sleep. Is sleep determined by objective or subjective evidence alone, or is it determined by some combination of the two? And what is the best way to measure such evidence?

More generally, problems in defining sleep arise when evidence for one or more of the several criteria of sleep is lacking or when the evidence generated by available criteria is inconsistent. Do all animals sleep? Other mammalian species whose EEG and other physiological correlates are akin to those observed in human sleep demonstrate recurring, spontaneous, and reversible periods of inactivity and decreased critical reactivity. There is general acceptance of the designation of such states as sleep in all mammals and many birds. For lizards, snakes, and closely related reptiles, as well as for fish and insects, however, such criteria are less successfully satisfied, and so the unequivocal identification of sleep becomes more difficult. Bullfrogs (Lithobates catesbeianus), for example, seem not to fulfill sensory threshold criteria of sleep during resting states. Tree frogs (genus Hyla), on the other hand, show diminished sensitivity as they move from a state of behavioral activity to one of rest. Yet, the EEGs of the alert rest of the bullfrog and the sleeplike rest of the tree frog are the same.

Problems in defining sleep may arise from the effects of artificial manipulation. For example, some of the EEG patterns commonly used as signs of sleep can be induced in an otherwise waking organism by the administration of certain drugs.

Developmental patterns of sleep and wakefulness

How much sleep does a person need? While the physiological bases of the need for sleep remain conjectural, rendering definitive answers to this question impossible despite contemporary knowledge, much evidence has been gathered on how much sleep people do in fact obtain. Perhaps the most-important conclusion to be drawn from the evidence is that there is great variability between individuals and across life spans in the total amount of sleep time.

Studies suggest that healthy adults between ages 26 and 64 need about 7 to 9 hours of sleep per night. Adults over age 65 need roughly 7 to 8 hours. Increasing numbers of people, however, sleep fewer than 7 or more than 8 hours. According to sleep polls taken in the United States in 2009, the proportion of persons sleeping less than 6 hours per night increased from 12 percent in 1998 to 20 percent in 2009. During that same period the proportion of persons sleeping more than 8 hours decreased from 35 percent to 28 percent. Sleep time also differs between weekdays and weekends. In the United States and other industrialized countries, including the United Kingdom and Australia, adults average less than 7 hours of sleep per night during the workweek. For Americans, that average increases only slightly, by about 30 minutes, on weekends. However, sleep norms inevitably vary with sleep criteria. The most precise and reliable figures on sleep time come from studies in sleep laboratories, where EEG criteria are employed.

The amount and characteristics of sleep vary significantly according to age. The newborn infant may spend an average of about 16 hours of each 24-hour period in sleep, although there is wide variability between individual babies. By about the sixth month of life, many infants are able to sustain longer sleep episodes and are beginning to consolidate sleep at night. That sleep period is typically accompanied by morning and afternoon napping. During the first year of life, sleep time drops sharply, and by two years of age it may range from 9 to 12 hours.

Sleep duration recommendations for toddlers (age 1 to 2) range from 11 to 14 hours and for preschoolers (age 3 to 5) from 10 to 13 hours, which includes time spent napping. Only a small percentage of 4- to 5-year-old children nap; for most, sleep is consolidated into a single nighttime period. A gradual shift to a later bedtime begins in school-age children (age 6 to 13), who have been estimated to need between 9 and 11 hours of sleep. Adolescents between ages 14 and 17 need at least 8.5 hours of sleep per night, while young adults (age 18 to 25) need at least 7 hours. Most individuals in those age groups, however, sleep fewer than 7 hours. Sleep durations outside the recommended ranges (e.g., as few as 7 or 8 hours or as many as 12 hours in some school-age children) may be normal. Youths whose sleep deviates far from the normal range (e.g., in school-age children, less than 7 hours or more than 12 hours) may be affected by a health or sleep-related problem.
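
For readers who like to see such figures organized, the recommendations above can be collected into a simple lookup table. The following Python sketch does so; the age boundaries at the edges and any upper bounds the text does not state (marked "assumed" in the comments) are illustrative placeholders, not clinical guidance.

```python
# Illustrative lookup of the sleep-duration figures summarized above.
# Entries marked "assumed" fill in bounds the text does not state; they
# are illustrative placeholders, not clinical guidance.
RECOMMENDED_SLEEP_HOURS = [
    # (min_age, max_age, low_hours, high_hours)
    (0, 0, 14, 17),     # newborns: ~16-hour average (bounds assumed)
    (1, 2, 11, 14),     # toddlers, including naps
    (3, 5, 10, 13),     # preschoolers, including naps
    (6, 13, 9, 11),     # school-age children
    (14, 17, 8.5, 10),  # adolescents: "at least 8.5" (upper bound assumed)
    (18, 25, 7, 9),     # young adults: "at least 7" (upper bound assumed)
    (26, 64, 7, 9),     # adults
    (65, 120, 7, 8),    # older adults
]

def recommended_range(age):
    """Return (low, high) recommended nightly sleep hours for a given age."""
    for min_age, max_age, low, high in RECOMMENDED_SLEEP_HOURS:
        if min_age <= age <= max_age:
            return (low, high)
    raise ValueError(f"no entry for age {age}")

print(recommended_range(8))   # (9, 11)
print(recommended_range(40))  # (7, 9)
```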

Similar to adults, children and adolescents in some societies tend to show discrepancies between the amounts of sleep obtained on weekday nights versus the weekend or non-school-day nights, typically characterized by marked increases during the latter. Reduced sleep on weekday nights has been attributed to social schedules and late-night activities, combined with an early school start time. Sleep disorders and modern lifestyle habits (e.g., use of electronic media in the bedroom and caffeinated beverages) also have been implicated in influencing the amount and quality of sleep in those age groups.

In older individuals (age 65 and over), sleep duration recommendations are between 7 and 8 hours. Decreases to approximately 6 hours have been observed among the elderly; however, decreases in sleep time in that population may be attributed to the increased incidence of illness and use of medications rather than natural physiological declines in sleep.

It is important to emphasize that the amount of sleep a person obtains does not necessarily reflect the amount of sleep a person needs. There are significant individual differences in the optimal amount of sleep across development, and there is no correct amount of sleep that children, teenagers, or adults should obtain each night. As a rule of thumb, the right amount of sleep has been obtained if one feels well rested upon awakening. Some persons chronically deprive themselves of sleep by consistently obtaining too little sleep. Such people are often, but not always, sleepy. Although it is generally accepted that a person would not sleep more than needed, there are instances in which a person with disordered sleep might attempt to compensate, knowingly or not, by obtaining more sleep. Healthy sleep is likely a combination of both quantity and quality, with only limited means of making up for poor-quality sleep by expanding the time spent in sleep.

Studies of sleep indicate that it tends to be a dynamic process, fluctuating among different regularly occurring EEG patterns that can be classified into several distinct stages, although that classification remains somewhat arbitrary. Developmental changes in the relative proportion of sleep time spent in those stages are as striking as age-related changes in total sleep time. For example, the newborn infant may spend 50 percent of total sleep time in a stage of EEG sleep that is accompanied by intermittent bursts of rapid eye movement (REM), which are indicative of a type of sleep called active sleep in newborns that in some respects bears more resemblance to wakefulness than to other forms of sleep (see below REM sleep). In children and adolescents, REM sleep declines to about 20 to 25 percent of total sleep time. Total sleep time spent in REM sleep for adults is approximately 25 percent and for the elderly is less than 20 percent.

Quiet non-REM (NREM) sleep in the newborn infant is slower to evolve than REM sleep. At 6 months (sometimes as early as 2 months) of age, substages of light and deep NREM sleep are seen. Indeterminate (neither active nor quiet) sleep in newborns occurs at sleep onset as well as at sleep-to-wake and active-to-quiet NREM sleep transitions. NREM sleep in the child may be distinguished from that seen in the adult because of the greater amount of higher-amplitude slow-wave activity in the brain. There is also a slow decline of EEG stage 3 (deep slumber) into old age; in some elderly persons, stage 3 may cease entirely (see below NREM sleep).

Sleep patterning consists of (1) the temporal spacing of sleep and wakefulness within a 24-hour period, driven by the need for sleep (referred to as “homeostatic sleep pressure”) and by circadian rhythm, and (2) the ordering of different sleep stages within a given sleep period, known as “ultradian” cycles. The homeostatic pressure increases with increasing time of wakefulness, typically making people progressively sleepy as the day goes on. For a typical adult, that is balanced by the circadian system, which counteracts homeostatic pressure by supplying support for wakefulness in the early evening. As circadian support for wakefulness subsides, usually late in the evening, the homeostatic system is left unbridled, and sleepiness ensues.
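
This balance can be caricatured numerically. The Python sketch below is a toy illustration rather than a validated model: the curve shapes, the time constants, and the assumed early-evening (18:00) peak of the circadian wake drive were chosen only to reproduce the qualitative pattern just described, with sleep pressure building through the day and circadian support holding sleepiness off until late evening.

```python
import math

# Toy sketch of the two opposing processes described above. All parameters
# are arbitrary illustrations, tuned only so that "sleepy" first becomes
# true late in the evening for a 07:00 wake time.

def homeostatic_pressure(hours_awake):
    # Sleep pressure builds (and saturates) with time spent awake.
    return 1.0 - math.exp(-hours_awake / 10.0)

def circadian_wake_drive(clock_hour):
    # Wake-promoting circadian drive, assumed to peak in the early evening.
    return 0.55 + 0.35 * math.cos(2 * math.pi * (clock_hour - 18.0) / 24.0)

for hours_awake in range(0, 18, 3):          # a day that starts at 07:00
    clock = (7 + hours_awake) % 24
    pressure = homeostatic_pressure(hours_awake)
    drive = circadian_wake_drive(clock)
    print(f"{clock:02d}:00  pressure={pressure:.2f}  drive={drive:.2f}  "
          f"sleepy={pressure > drive}")
```

With these made-up parameters, "sleepy" first turns true at 22:00, mirroring the text's account of circadian support subsiding late in the evening and leaving the homeostatic pressure unopposed.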

There are major developmental changes in the patterning of sleep across the human life cycle. In alternations between sleep and wakefulness, there is a developmental shift from polyphasic sleep to monophasic sleep (i.e., from intermittent to uninterrupted sleep). In infants there may be six or seven periods of sleep per day that alternate with an equivalent number of waking periods. With the decreasing occurrence of nocturnal feedings in infancy and of morning and afternoon naps in childhood, there is an increasing tendency toward the concentration of sleep in one long nocturnal period. The trend toward monophasic sleep probably reflects some blend of the effects of maturing and of pressures from a culture geared to daytime activity and nocturnal rest. In many Western cultures, monophasic sleep may become disrupted, particularly during adolescence and young adulthood. During those stages of life, sleep patterns show common features of irregular sleep-wake schedules, usually with large discrepancies between bedtimes and wake times on school nights versus nonschool nights, which can result in daytime sleepiness and napping. Those irregularities can also affect adults. Symptoms often interfere with the person’s daily schedule, warranting diagnosis with a circadian rhythm sleep disorder known as delayed sleep phase, which is characterized by a preference for later-than-normal bedtimes and wake times.

Among the elderly there may be a partial return to the polyphasic sleep pattern, with more-frequent daytime napping and less-extensive periods of nocturnal sleep. That may be due to reduced circadian influences or poor sleep quality at night or both. For example, sleep disorders such as sleep apnea are more common among older people, and even in healthy older people there is often an alteration of brain structures involved in sleep regulation, resulting in a weakening of sleep oscillations such as spindles and slow waves (see below NREM sleep).

Significant developmental effects also have been observed in the spacing of stages within sleep. Sleep in the infant, for example, is very different from that of the adult. The pattern of sleep cycles matures over the first two to six months of life, and the transition from wake to sleep switches from sleep-onset REM to sleep-onset NREM. The length of the REM-NREM sleep cycle increases across childhood from about 50 to 60 minutes to approximately 90 minutes by adolescence. In the adult, REM sleep rarely occurs at sleep onset. Infants thus spend a greater proportion of sleep time in REM sleep than do children or adults.

In the search for the functional significance of sleep or of particular stages of sleep, the shifts in sleep variables can be linked with variations in waking developmental needs, the total capacities of the individual, and environmental demands. It has been suggested, for instance, that the high frequency of sleep in the newborn infant may reflect a need for stimulation from within the brain to permit orderly maturation of the central nervous system (CNS; see nervous system, human). As these views illustrate, developmental changes in the electrophysiology of sleep are germane not only to sleep but also to the role of CNS development in behavioral adaptation. In addition, different elements of sleep physiology are suspected to facilitate different components of the developing brain and may even exert different effects on the maintenance and plasticity of the adult brain (see neuroplasticity).

Psychophysiological variations in sleep

That there are different kinds of sleep has long been recognized. In everyday discourse there is talk of “good” sleep and “poor” sleep, of “light” sleep and “deep” sleep, yet not until the second half of the 20th century did scientists pay much attention to qualitative variations within sleep. Sleep was formerly conceptualized by scientists as a unitary state of passive recuperation. Revolutionary changes have occurred in scientific thinking about sleep, the most important of which has been increased appreciation of the diverse elements of sleep and their potential functional roles.

This revolution may be traced back to the discovery of sleep characterized by rapid eye movements (REM), first reported by American physiologists Eugene Aserinsky and Nathaniel Kleitman in 1953. REM sleep proved to have characteristics quite at variance with the prevailing model of sleep as recuperative deactivation of the central nervous system. Various central and autonomic nervous system measurements seemed to show that the REM stage of sleep is more nearly like activated wakefulness than it is like other sleep. Hence, REM sleep is sometimes referred to as “paradoxical sleep.” Thus, the earlier assumption that sleep is a unitary and passive state has yielded to the viewpoint that there are two different kinds of sleep: a relatively deactivated NREM (non-rapid eye movement) phase and an activated REM phase. However, findings, notably from brain-imaging studies, suggest that this view is somewhat simplistic and that both phases actually display complex brain activity in different locations of the brain and in different patterns over time.

NREM sleep

By the time a child reaches one year of age, NREM sleep can be classified into different sleep stages. NREM is conventionally subdivided into three different stages on the basis of EEG criteria: stage 1, stage 2, and stage 3 (sometimes referred to as NREM 1, NREM 2, and NREM 3, or simply N1, N2, and N3). Stage 3 is referred to as “slow-wave sleep” and traditionally was subdivided into stage 3 and stage 4, though both are now considered stage 3. The distinction between these stages of NREM sleep is made on the basis of multiple physiological parameters, including the EEG, whose signals are characterized by their frequency (in hertz [Hz], or cycles per second) and amplitude (in microvolts).

In the adult, stage 1 is a state of drowsiness, a transition state into sleep. It is observed at sleep onset or after momentary arousals during the night and is defined as a low-voltage mixed-frequency EEG tracing with a considerable representation of theta-wave activity (4–7 Hz). Stage 2 is a relatively low-voltage EEG tracing characterized by typical intermittent short sequences of waves of 11–15 Hz (“sleep spindles”). Some research suggests that stage 2 represents the genuine first stage of sleep and that the appearance of spindles, resulting from specific neural interactions between the thalamus and the cerebral cortex, more reliably represents the onset of sleep. Stage 2 is also characterized on EEG tracings by the appearance of relatively high-voltage (more than 75-microvolt) low-frequency (0.5–2.0-Hz) biphasic waves. During stage 2 these waves, which are also called K-complexes, are induced by external stimulation (e.g., a sound) or occur spontaneously during sleep. Sleep spindles and spontaneous K-complexes are present in the infant at about six months of age (sometimes earlier). As sleep deepens, slow waves progressively become more abundant. Stage 3 is conventionally defined as the point at which slow waves occupy more than 20 percent of the 30-second window of an EEG tracing. Because of slow-wave predominance, stage 3 is also called slow-wave sleep (SWS). Slow-wave activity peaks in childhood and then decreases with age. Across childhood and adolescence there is progressive movement toward an adult sleep pattern consisting of longer 90-minute sleep cycles, shorter sleep totals, and decreased slow-wave activity.
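
To make those staging rules concrete, the following Python sketch applies them to pre-extracted features of a single 30-second epoch. The feature names and the decision order are hypothetical simplifications of the criteria quoted above; clinical scoring relies on full EEG, eye-movement, and muscle-tone recordings and on far more detailed rules.

```python
from dataclasses import dataclass

# Minimal sketch of the staging criteria described above, applied to
# pre-extracted features of one 30-second EEG epoch. The feature names
# are hypothetical; real scoring uses multiple signals and richer rules.

@dataclass
class Epoch30s:
    slow_wave_fraction: float  # fraction of the epoch occupied by
                               # high-amplitude slow (0.5-2 Hz) waves
    has_spindles: bool         # bursts of 11-15 Hz "sleep spindle" activity
    has_k_complexes: bool      # high-voltage biphasic slow waves
    dominant_theta: bool       # considerable 4-7 Hz activity, low voltage

def score_nrem_stage(e: Epoch30s) -> str:
    # Stage 3 (slow-wave sleep): slow waves occupy > 20% of the epoch.
    if e.slow_wave_fraction > 0.20:
        return "N3"
    # Stage 2: spindles and/or K-complexes on a low-voltage background.
    if e.has_spindles or e.has_k_complexes:
        return "N2"
    # Stage 1: low-voltage mixed frequency with considerable theta.
    if e.dominant_theta:
        return "N1"
    return "wake/indeterminate"

print(score_nrem_stage(Epoch30s(0.35, True, False, False)))  # N3
print(score_nrem_stage(Epoch30s(0.05, True, True, False)))   # N2
```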

Distinctions between sleep stages are somewhat arbitrary, and the true physiological boundary between stages is less clear than is described by these criteria. By analogy, the expression “teenager” is often used to refer to someone between ages 13 and 19, but there is only a subtle difference between a child of 12 years and 11 months and a child of 13 years and 0 months. The terminology serves to categorize different features, but it must be recognized that the boundary between categories is less clear physiologically than the distinction in terminology implies.

The EEG patterns of NREM sleep, particularly during stage 3, are those associated in other circumstances with decreased vigilance. Furthermore, after the transition from wakefulness to NREM sleep, most functions of the autonomic nervous system decrease their rate of activity and their moment-to-moment variability. Thus, NREM sleep is the kind of seemingly restful state that appears capable of supporting the recuperative functions assigned to sleep. There are in fact several lines of evidence suggesting such functions for NREM sleep: (1) increases in such sleep, in both humans and laboratory animals, observed after physical exercise; (2) the concentration of such sleep in the early portion of the sleep period (i.e., immediately after wakeful states of activity) in humans; and (3) the relatively high priority that such sleep has among humans in “recovery” sleep following abnormally extended periods of wakefulness.

However, some experimental evidence shows that such potential functions for NREM sleep are not likely to be purely passive and restorative. Although brain activity is on average decreased during NREM sleep, especially in the thalamus and the frontal cortex, functional brain imaging studies have shown that some regions of the brain, including those involved in memory consolidation (such as the hippocampus), can be spontaneously reactivated during NREM sleep, especially when sleep is preceded by intensive learning. It has also been shown that several areas of the brain are transiently and recurrently activated during NREM sleep, specifically each time that a spindle or slow wave is produced by the brain. In addition to possible recuperative functions of NREM sleep, these activations may serve to reinstate or reinforce neural connections that will later help in optimizing daytime cognitive function (e.g., attention, learning, and memory). In the past these roles were almost exclusively hypothesized to be a function of REM sleep, partly because in REM sleep EEG frequencies are faster and more similar to those of lighter stages of sleep and of wakefulness than to those of NREM sleep. Research suggests that reductions in NREM sleep are a possible early sign of Alzheimer disease, occurring in association with the development of pathological features in the brain that typically precede the emergence of symptoms of cognitive impairment.

REM sleep

Rapid eye movement, or REM, sleep is a state of diffuse bodily activation. Its EEG patterns (tracings of faster frequency and lower amplitude than in NREM stages 2 and 3) are superficially similar to those of drowsiness (stage 1 of NREM sleep). Whereas NREM is divided into three stages, REM is usually referred to as a single phase, despite the fact that a complex set of physiological fluctuations takes place in REM sleep.

REM sleep is named for the rapid movements of the eyes that occur in this stage. These movements, however, are not constant in REM sleep and are thus described as phasic. The other hallmark finding in REM sleep physiology is a reduced or nearly absent muscle tone (except for the diaphragm, one of the key muscles for maintaining breathing). Muscle activity in REM sleep may be nearly absent (tonic REM sleep) or may be characterized by brief bursts of activity (phasic REM sleep).

Most autonomic variables exhibit relatively high rates of activity and variability during REM sleep. For example, there are higher heart and respiration rates and more short-term variability in these rates than in NREM sleep, increased blood pressure, and, in males, full or partial penile erection. In addition, REM sleep is accompanied by a relatively low rate of gross body motility but includes some periodic twitching of the muscles of the face and extremities, relatively high levels of oxygen consumption by the brain, increased cerebral blood flow, and higher brain temperature. This brain activation during REM sleep has been shown to be localized in several areas of the brainstem and thalamus as well as in neural structures usually involved in the regulation of emotion (the limbic structures).

An even more-impressive demonstration of the activation of REM sleep is to be found in the firing rates of individual cerebral neurons, or nerve cells, in experimental animals: during REM sleep such rates exceed those of NREM sleep and often equal or surpass those of wakefulness. However, REM sleep also displays some localized areas of neural deactivation, particularly in the frontal (anterior) and parietal (posterior and lateral) regions of the brain cortex. The reasons for these distributed patterns of activations and deactivations remain hypothetical; some researchers have suggested that these responses may represent neural processes involved in REM sleep generation and in the production of dreams, which are known to be prominent during REM sleep.

For mammals, REM sleep is defined by the concurrence of three events: low-voltage mixed-frequency EEG, intermittent REMs, and suppressed muscle tone. The decrease in muscle tone and a similarly observed suppression of spinal reflexes are indicative of heightened motor inhibition during REM sleep. Animal studies have identified the locus ceruleus (or locus coeruleus), a region in the brainstem, as the probable source of that inhibition. When that structure is surgically destroyed in experimental animals, the animals periodically engage in active, apparently goal-directed behaviour during REM sleep, although they still show the unresponsivity to external stimulation characteristic of the stage. It has been suggested that such behaviour may be the acting out of the hallucinations of a dream.

As mentioned above, an important theoretical distinction is that between REM sleep phenomena that are continuous and those that are intermittent. Tonic (continuous) characteristics of REM sleep include the low-voltage EEG and the suppressed muscle tone. Intermittent events in REM sleep include the rapid eye movements themselves and spikelike electrical activity in those parts of the brain concerned with vision and in other parts of the cerebral cortex. The latter activations, which are known as ponto-geniculo-occipital waves, also occur in humans. Functional brain imaging studies have revealed that in humans those waves are closely associated with rapid eye movements.

REM sleep is the stage of sleep during which dreams prevail. Knowledge of human dreams is based largely on subjective reports recorded upon awakening from sleep. Although dreaming can also occur during NREM sleep, dreaming reports from people waking from REM sleep are more frequent, and the content of their dreams is florid, vivid, and hallucinatory.

Although the functions of dreaming remain largely elusive, the patterns of brain activity during REM sleep provide some clues about the characteristic properties of dreams. For instance, the activation of limbic structures during REM sleep may be linked to the high emotional content of dreams, and the deactivation of frontal areas may account for the “bizarreness” of dreams (distortion of time and space, lack of insight and control, etc.).

Sequences of NREM and REM sleep

As an individual matures into adulthood, an adult sleep pattern is established, characterized by the development of sleep-onset NREM sleep, the emergence of NREM sleep substages, the reduction or elimination of napping, and the decline of slow-wave activity. The usual temporal progression of the two kinds of sleep in adolescent and adult humans is for a period of approximately 70–90 minutes of NREM sleep (the stages being ordered 1–2–3–2) to precede the first period of REM sleep, which may last from approximately 5 to 15 minutes. NREM-REM cycles (ultradian cycles) of roughly equivalent total duration then recur through the night (about four to six times during a normal night’s sleep), with the REM portion lengthening somewhat and the stage 3 contribution to the NREM portion shrinking correspondingly as sleep continues. Overnight sleep often is divided into three time periods: the first third of the night, which consists of the highest percentage of deep NREM sleep; the middle third of the night; and the last third of the night, the majority of which is made up of REM sleep. In the typical adult and adolescent, approximately 25 percent of total accumulated sleep is spent in REM sleep and 75 percent in NREM sleep. Most of the latter is EEG stage 2. The high proportion of stage 2 NREM sleep is attributable to the loss of stage 3 in the NREM portion of the NREM-REM cycles after the first two or three.
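
The arithmetic of that architecture can be sketched with invented numbers. In the toy schedule below, every figure (the cycle lengths and the rates at which REM grows and stage 3 shrinks) is illustrative; only the qualitative shape, REM lengthening across roughly 90-minute cycles while stage 3 fades, comes from the description above.

```python
# Back-of-envelope sketch of the overnight architecture described above:
# ~90-minute NREM-REM cycles, with REM lengthening and the stage 3 (N3)
# share shrinking across the night. All durations are illustrative.

def night_cycles(n_cycles=5):
    cycles = []
    for i in range(n_cycles):
        rem = 8 + 8 * i              # REM grows from ~8 toward ~40 minutes
        nrem = 90 - rem              # cycle total held near 90 minutes
        n3 = max(0, 30 - 15 * i)     # N3 concentrated in the early cycles
        n1_n2 = nrem - n3            # the remainder is lighter NREM
        cycles.append((n1_n2, n3, rem))
    return cycles

total_rem = total_sleep = 0
for i, (light, deep, rem) in enumerate(night_cycles(), start=1):
    print(f"cycle {i}: N1/N2={light} min, N3={deep} min, REM={rem} min")
    total_rem += rem
    total_sleep += light + deep + rem

# Prints roughly 27%, close to the ~25 percent of sleep noted above.
print(f"REM share: {100 * total_rem / total_sleep:.0f}%")
```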

Light and deep sleep

Which of the various NREM stages is light sleep and which is deep sleep? The criteria used to establish sleep depth are the same as those used to distinguish sleep from wakefulness. In terms of motor behaviour, motility decreases (depth increases) from stages 1 through 3. By criteria of sensory responsivity, thresholds generally increase (sleep deepens) from stages 1 through 3. Thus, gradations within NREM sleep do seem fairly consistent, with a continuum extending from the “lightest” stage 1 to the “deepest” stage 3.

Relative to NREM sleep, is REM sleep light or deep? The answer seems to be that by some criteria REM sleep is light and by others it is deep. For example, in terms of muscle tone, which is at its lowest point during REM sleep, it is deep. In terms of its increased rates of intermittent fine body movements, REM sleep would have to be considered light. Arousal thresholds during REM sleep are variable, apparently as a function of the meaningfulness of the stimulus (and of the possibility of its incorporation into an ongoing dream sequence). With a meaningful stimulus (e.g., one that cannot be ignored with impunity), the capacity for responsivity can be demonstrated to be roughly equivalent to that of “light” NREM sleep (stages 1 and 2). With a stimulus having no particular significance to the sleeper, thresholds can be rather high. The discrepancy between those two conditions suggests an active shutting out of irrelevant stimuli during REM sleep. By most physiological criteria related to the autonomic and central nervous systems, REM sleep clearly is more like wakefulness than like NREM sleep. However, drugs that cause arousal in wakefulness, such as amphetamines and antidepressants, suppress REM sleep. In terms of subjective response, recently awakened sleepers often describe REM sleep as having been deep and NREM sleep as having been light. The subjectively felt depth of REM sleep may reflect the immersion of the sleeper in the vivid dream experiences of this stage.

Thus, as was true in defining sleep itself, there are difficulties in achieving unequivocal definitions of sleep depth. Several different criteria may be employed, and they are not always in agreement. REM sleep is particularly difficult to classify along any continuum of sleep depth. The current tendency is to consider it a unique state, sharing properties of both light and deep sleep. The fact that selective deprivation of REM sleep (elaborated below) results in a selective increase in such sleep on recovery nights is consistent with this view of REM sleep as unique.

Autonomic variables

Some autonomic physiological variables have a characteristic pattern relating their activity to cumulative sleep time, without respect to whether it is REM or NREM sleep. Such variables presumably reflect constant or slowly changing features of both kinds of sleep, such as the cumulative effects of immobility and of relaxation of skeletal muscles on metabolic processes. Body temperature, for example, drops during the early hours of sleep, reaching a low point after five or six hours, and then rises toward the morning awakening.

Behavioral variables

Another line of behavioral study is the observation of spontaneously occurring integrated behaviour patterns, such as walking and talking during sleep. In keeping with the idea of a heightened tonic (continuous) motor inhibition during REM sleep but contrary to the idea that such behaviour is an acting out of especially vivid dream experiences or a substitute for them, sleep talking and sleepwalking occur primarily in NREM sleep. Episodes of NREM sleepwalking generally do not seem to be associated with any remembered dreams, nor is NREM sleep talking consistently associated with reported dreams of related content.

Sleep deprivation

One time-honoured approach to determining the function of an organ or process is to deprive an organism of that organ or process. In the case of sleep, the deprivation approach to function has been applied—both experimentally and naturalistically—to sleep as a unitary state (general sleep deprivation) and to particular kinds of sleep (selective sleep deprivation). General sleep deprivation may be either total (e.g., a person has no sleep at all for a period of days) or partial (e.g., over a period of time a person obtains only three or four hours of sleep per night). The method of general deprivation studies is enforced wakefulness. The general idea of selective sleep deprivation studies is to allow the sleeper to have natural sleep until the point of entering the stage to be deprived and then to prevent that stage, either by experimental awakening or by other manipulations such as application of a mildly noxious stimulus (such as acoustic stimulation) or prior administration of a drug known to suppress it. The hope is that total sleep time will not be altered but that increased occurrence of some other stage will substitute for the loss of the one selectively eliminated. The purpose of such studies is manifold; for example, they enable scientists to discern the function of a certain stage of sleep by observing physiology and behaviour that occur in the absence of that stage.

On a three-hour sleep schedule, partial deprivation does not reproduce in miniaturized form the same relative distribution of sleep patterns achieved in a seven- or eight-hour sleep period. Some increase is observed in absolute amounts of REM sleep during the last three-hour sleep period as compared with the first three hours of normal sleep, when there is a large amount of NREM 3 (slow-wave) sleep. In general, when one misses sleep on a given night and then attempts to recover that sleep loss on the subsequent night, stage 3 sleep occurs in greater abundance than usual. In that situation, it appears that the brain’s drive for stage 3 sleep prevails, with less pressure for REM sleep and for the lighter stages of sleep.

In view of several practical considerations, many sleep deprivation studies have used animals rather than humans as experimental subjects. Waking effects routinely observed in such studies have included deteriorated physiological functioning and sometimes actual tissue damage. Long-term sleep deprivation in the rat (6 to 33 days), accomplished by enforced locomotion of both experimental and control animals but timed to coincide with any sleep of the experimental animals, has been shown to result in severe debilitation and death of the experimental but not the control animals. This supports the view that sleep serves a vital physiological function. There is some suggestion that age is related to sensitivity to the effects of deprivation, younger organisms proving more capable of withstanding the stress than mature ones.

Among human subjects, a champion nonsleeper apparently was a 17-year-old student who voluntarily undertook a 264-hour sleep-deprivation experiment. Effects noted during the deprivation period included irritability, blurred vision, slurring of speech, memory lapses, and confusion concerning his identity. No long-term (i.e., postrecovery) effects were observed on either his personality or his intellect. More generally, although brief hallucinations and easily controlled episodes of bizarre behaviour have been observed after 5 to 10 days of continuous sleep deprivation, those symptoms do not occur in most subjects and thus offer little support to the hypothesis that sleep loss induces psychosis. In any event, the symptoms rarely persist beyond the period of sleep that follows the period of deprivation. When inappropriate behaviour does persist, it generally seems to be in persons known to have a tendency toward such behaviour. Generally, upon investigation, injury to the nervous system has not been discovered in persons who have been deprived of sleep for many days. That result must be understood in the context of the limited duration of the studies and should not be interpreted as indicating that sleep loss is either safe or desirable. The short-term effects observed with the student mentioned are typical and are of the sort that, in the absence of the continuous monitoring his vigil received, might well have endangered his health and safety.

Other commonly observed behavioral effects during sleep deprivation include fatigue, inability to concentrate, and visual or tactile illusions and hallucinations. Those effects generally become intensified with increased loss of sleep, but they also wax and wane in a cyclic fashion in line with 24-hour fluctuations in EEG alpha-wave (8 to 12 hertz) phenomena and with body temperature, becoming most acute in the early morning hours. Changes in intellectual performance during moderate sleep loss can to a certain extent be compensated for by increased effort and motivation. In general, tasks that are work-paced (subjects must respond at a particular instant of time not of their own choice) tend to be affected more adversely than tasks that are self-paced. Errors of omission are common with the former kind of task and are thought to be associated with “microsleep”—momentary lapses into sleep. Changes in body chemistry and in workings of the autonomic nervous system sometimes have been noted during deprivation, but it has proved difficult either to establish consistent patterning in such effects or to ascertain whether they should be attributed to sleep loss per se or to the stress or other incidental features of the deprivation manipulation. Some studies, however, have demonstrated that sleep deprivation has neuroendocrine and metabolic consequences, such as increasing the risk for obesity and diabetes. Insufficient sleep is further associated with an inability to lose weight among overweight persons. The length of the first recovery sleep session for the student mentioned above, following his 264 hours of wakefulness, was slightly less than 15 hours. His sleep demonstrated increased amounts of stage 3 NREM and REM sleep. Partial sleep deprivation over several weeks can lead to an accumulation of cognitive deficits that may mimic several days of complete sleep loss.

Studies of selective sleep deprivation have confirmed a need for both stage 3 NREM and REM sleep, because an increasing number of experimental arousals are required each night to suppress both stage 3 and REM sleep on successive nights of deprivation and because both show a clear rebound effect following deprivation. Rebound from stage 3 NREM sleep deprivation occurs only on the night following termination of the deprivation regardless of the length of the deprivation, whereas the duration of the rebound effect following REM sleep deprivation is related to the length of the prior deprivation. Although little is known of the consequences of stage 3 deprivation, reduction of that stage by acoustically disrupting slow waves in experimental conditions has been shown to decrease glucose tolerance and thereby increase the risk for diabetes.

The selective deprivation of REM sleep has unique and somewhat puzzling properties and is associated with vivid dreaming when the person is in other sleep stages. REM sleep deprivation studies once were considered also to be “dream-deprivation” studies. That psychological view of REM sleep deprivation has become less pervasive since the experimental demonstration of the occurrence of dreaming during NREM sleep stages and because, contrary to the Freudian position that the dream is an essential safety valve for the release of emotional tensions, it has become evident that REM sleep deprivation is not psychologically disruptive and may in fact be helpful in treating depression. REM sleep deprivation studies have focused more upon the presumed functions of the REM state than upon those of the vivid dreams that accompany it. Other animal studies have shown heightened levels of sexuality and aggressiveness after a period of deprivation, suggesting a drive-regulative function for REM sleep. Other observations suggest increased sensitivity of the central nervous system (CNS) to auditory stimuli and to electroconvulsive shock following deprivation, as might have been predicted from the theory that REM sleep somehow serves to maintain CNS integrity.

Although there likely is a need for REM sleep, it does not appear to be absolute. Animals have been deprived of REM sleep for as long as two months without showing behavioral or physiological evidence of injury. Persons who take certain antidepressant medications have little or no REM sleep; no apparent negative consequences have been noted in those individuals. Several problems arise, however, in connection with the methods of most REM sleep deprivation studies. Controls for factors such as stress, sleep interruption, and total sleep time are difficult to manage. Thus, it is unclear whether observed effects of REM sleep deprivation are the result of REM sleep loss or the result of such factors as stress and general sleep loss.

Pathological aspects

The pathologies of sleep can be divided into six major categories: insomnia (difficulty initiating or maintaining sleep); sleep-related breathing disorders (such as sleep apnea); hypersomnia of central origin (such as narcolepsy); circadian rhythm disorders (such as jet lag); parasomnias (such as sleepwalking); and sleep-related movement disorders (such as restless legs syndrome [RLS]). Each of these categories contains many different disorders and their subtypes. The clinical criteria for sleep pathologies are contained in the International Classification of Sleep Disorders, which uses a condensed grouping system: dyssomnias; parasomnias; sleep disorders associated with mental, neurological, or other conditions; and proposed sleep disorders. Although many sleep disorders occur in both children and adults, some disorders are unique to childhood.

Hypersomnia of central origin

Epidemic encephalitis lethargica is produced by viral infections of sleep-wakefulness mechanisms in the hypothalamus, a structure at the upper end of the brainstem. The disease often passes through several stages: fever and delirium, hyposomnia (loss of sleep), and hypersomnia (excessive sleep, sometimes bordering on coma). Inversions of 24-hour sleep-wakefulness patterns also are commonly observed, as are disturbances in eye movements. Although the disorder is extraordinarily rare, it has taught neuroscientists about the role of particular brain regions in sleep-wake transitions.

Narcolepsy is thought to involve specific abnormal functioning of subcortical sleep-regulatory centres, in particular a specialized area of the hypothalamus that releases a molecule called hypocretin (also referred to as orexin). Some people who experience attacks of narcolepsy have one or more of the following auxiliary symptoms: cataplexy, a sudden loss of muscle tone often precipitated by an emotional response such as laughter or startle and sometimes so dramatic as to cause the person to fall down; hypnagogic (sleep onset) and hypnopompic (awakening) visual hallucinations of a dreamlike sort; and hypnagogic or hypnopompic sleep paralysis, in which the person is unable to move voluntary muscles (except respiratory muscles) for a period ranging from several seconds to several minutes. Sleep attacks consist of periods of REM at the onset of sleep. That precocious triggering of REM sleep (which occurs in healthy adults generally only after 70–90 minutes of NREM sleep and in persons with narcolepsy within 10–20 minutes) may indicate that the accessory symptoms are dissociated aspects of REM sleep; i.e., the cataplexy and the paralysis represent the active motor inhibition of REM sleep, and the hallucinations represent the dream experience of REM sleep. The onset of narcoleptic symptoms is often evident in mid-adolescence and young adulthood. In children, excessive sleepiness is not necessarily obvious. Instead, sleepiness may manifest as attentional difficulties, behavioral problems, or hyperactivity. Because of that, the presence of other narcoleptic symptoms—such as cataplexy, sleep paralysis, and hypnagogic hallucinations—typically are investigated.

Idiopathic hypersomnia (excessive sleeping without a known cause) may involve either excessive daytime sleepiness and drowsiness or a nocturnal sleep period of greater than normal duration, but it does not include sleep-onset REM periods, as seen in narcolepsy. One reported concomitant of hypersomnia, the failure of the heart rate to decrease during sleep, suggests that hypersomniac sleep may not be as restful per unit of time as is normal sleep. In its primary form, hypersomnia is probably hereditary in origin (as is narcolepsy) and is thought to involve some disruption of the functioning of hypothalamic sleep centres; however, its causal mechanisms remain largely unknown. Although some subtle changes in NREM sleep regulation have been found in patients with narcolepsy, both narcolepsy and idiopathic hypersomnia generally are not characterized by grossly abnormal EEG sleep patterns. Some researchers believe that the abnormality in those disorders involves a failure in “turn on” and “turn off” mechanisms regulating sleep rather than in the sleep process itself. Convergent experimental evidence has demonstrated that narcolepsy is often characterized by a dysfunction of specific neurons located in the lateral and posterior hypothalamus that produce hypocretin. Hypocretin is involved in both appetite and sleep regulation. It is believed that hypocretin acts as a stabilizer for sleep-wake transitions, thereby explaining the sudden sleep attacks and the presence of dissociated aspects of (REM) sleep during wakefulness in narcoleptic patients. Narcoleptic and hypersomniac symptoms can sometimes be managed by excitatory drugs or by drugs that suppress REM sleep.

Several forms of hypersomnia are periodic rather than chronic. One rare disorder of periodically excessive sleep, Kleine-Levin syndrome, is characterized by periods of excessive sleep lasting days to weeks, along with a ravenous appetite, hypersexuality, and psychotic-like behaviour during the few waking hours. The syndrome typically begins during adolescence, appears to occur more frequently in males than in females, and eventually spontaneously disappears during late adolescence or early adulthood.

Insomnia

Insomnia is a disorder that is actually made up of many disorders, all of which have in common two characteristics. First, the person is unable to either initiate or maintain sleep. Second, the problem is not due to a known medical or psychiatric disorder, nor is it a side effect of medication.

It has been demonstrated that, by physiological criteria, self-described poor sleepers generally sleep much better than they imagine. Their sleep, however, does show signs of disturbance: frequent body movement, enhanced levels of autonomic functioning, reduced levels of REM sleep, and, in some, the intrusion of waking rhythms (alpha waves) throughout the various sleep stages. Although insomnia in a particular situation is common and without pathological import, chronic insomnia may be related to psychological disturbance. Insomnia conventionally is treated with drugs, often substances that are potentially addictive and otherwise dangerous when used over long periods. It has been demonstrated that treatments involving cognitive and behavioral programs (relaxation techniques, the temporary restriction of sleep time and its gradual reinstatement, etc.) are more effective in the long-term treatment of insomnia than are pharmacological interventions.

Sleep-related breathing disorders

One of the more-common sleep problems encountered in contemporary society is obstructive sleep apnea. In this disorder, the upper airway (in the region at the back of the throat, behind the tongue) repeatedly impedes the flow of air because of a mechanical obstruction. This can happen dozens of times per hour during sleep. As a consequence, there is impaired gas exchange in the lungs, leading to reductions in blood oxygen levels and unwanted elevations in blood levels of carbon dioxide (a gas that is a waste product of metabolism). In addition, there are frequent disruptions of sleep that can lead to chronic sleep deprivation unless treated. Obstructive sleep apnea usually is associated with obesity, though physical malformations of the chin area (e.g., retrognathia or micrognathia) and enlarged tonsils and adenoids can also cause the disorder. Obstructive sleep apnea can occur in adults, adolescents, and children.

Less-common causes of breathing problems in sleep include central sleep apnea. The term central (as opposed to obstructive) refers to the idea that in this set of disorders the airway mechanics are healthy but the brain is not providing the signal needed to breathe during sleep.

Parasomnias

Among the episodes that are sometimes considered problematic in sleep are somniloquy (sleep talking), somnambulism (sleepwalking), enuresis (bed-wetting), bruxism (teeth grinding), snoring, and nightmares. Sleep talking seems more often to consist of inarticulate mumblings than of extended meaningful utterances. It occurs at least occasionally for many people and at that level cannot be considered pathological. Sleepwalking is common in children and can sometimes persist into adulthood. Enuresis may be a secondary symptom of a variety of organic conditions or, more frequently, a primary disorder in its own right. While mainly a disorder of early childhood, enuresis persists into late childhood or early adulthood for a small number of persons. Teeth grinding is not consistently associated with any particular stage of sleep, nor does it appreciably affect overall sleep patterning; it too seems to be an abnormality in, rather than of, sleep.

A variety of frightening experiences associated with sleep have at one time or another been called nightmares. Because not all such phenomena have proved to be identical in their associations with sleep stages or with other variables, several distinctions need to be made between them. Sleep terrors (pavor nocturnus) typically are disorders of early childhood. When NREM sleep is suddenly interrupted, the child may scream and sit up in apparent terror and be incoherent and inconsolable. After a period of minutes, the child returns to sleep, often without ever having been fully alert or awake. Dream recall generally is absent, and the entire episode may be forgotten in the morning. Anxiety dreams most often seem associated with spontaneous arousals from REM sleep. There is remembrance of a dream whose content is in keeping with the disturbed awakening. While their persistent recurrence probably indicates waking psychological disturbance or stress caused by a difficult situation, anxiety dreams occur occasionally in many otherwise healthy persons. The condition is distinct from panic attacks that occur during sleep.

REM sleep behaviour disorder (RBD) is a disease in which the sleeper acts out the dream content. The main characteristic of the disorder is a lack of the typical muscle paralysis seen during REM sleep. The consequence is that the sleeper is no longer able to refrain from physically acting out the various elements of the dream (such as hitting a baseball or running from someone). The condition is seen mainly in older men and is thought to be a degenerative brain disease. Those with RBD appear to be at increased risk for later developing Parkinson disease.

Sleep-related movement disorders

Restless legs syndrome (RLS) and a related disorder known as periodic limb movement disorder (PLMD) are examples of sleep-related movement disorders. A hallmark of RLS is an uncomfortable sensation in the legs that makes movement irresistible; the movement provides some temporary relief of the sensation. Although the primary complaint associated with RLS occurs during wakefulness, the disorder is classified as a sleep disorder for two fundamental reasons. First, there is a circadian variation to the symptoms, making them much more common at night; the affected person’s ability to fall asleep is often disturbed by the relentless need to move when in bed. The second reason is that during sleep most people with RLS experience subtle periodic movements of their legs, which can sometimes disrupt sleep. The periodic limb movements, however, can occur in a variety of other circumstances, including sleep disorders other than RLS, such as PLMD, or as a side effect of some medications. The movements themselves are considered pathological if they disrupt sleep.

Disorders accentuated during sleep

A variety of medical symptoms may be accentuated by the conditions of sleep. Attacks of angina (spasmodic choking chest pain), for example, apparently can be augmented by the activation of the autonomic nervous system in REM sleep, and the same is true of gastric acid secretions in persons who have duodenal ulcers. NREM sleep, on the other hand, can increase the likelihood of certain kinds of epileptic discharge. In contrast, REM sleep appears to be protective against seizure activity.

Depressed people tend to have sleep complaints. They generally sleep either too much or not enough and have low energy and sleepiness in the daytime no matter how much they sleep. Persons with depression have an earlier first REM period in their night sleep than nondepressed people. The first REM period, occurring 40–60 minutes after sleep onset, is often longer than normal, with more eye-movement activity. That suggests a disruption in the drive-regulation function, affecting such things as sexuality, appetite, or aggressiveness, all of which are reduced in affected persons. REM deprivation by pharmacological agents (tricyclic antidepressants) or by REM awakening techniques appears to reverse that sleep abnormality and to relieve the waking symptoms.

Circadian rhythm disorders

There are two prominent types of sleep-schedule disorders: phase-advanced sleep and phase-delayed sleep. In the former the sleep onset and offset occur earlier than the social norms, and in the latter sleep onset is delayed and waking is also later in the day than is desirable. Phase-delayed sleep is a common circadian problem in individuals, particularly adolescents, who have a tendency to stay up late, sleep in, or take late afternoon naps. Alterations in the sleep-wake cycle may also occur in shift workers or following international travel across time zones. The disorders may also occur chronically without any obvious environmental factor. Different genes involved in this circadian regulation have been uncovered, suggesting a genetic component in certain cases of sleep-schedule disorders. The conditions can be treated by gradual readjustment of the timing of sleep. The readjustment can be facilitated by physical (e.g., light exposure) and pharmacological (e.g., melatonin) means.

Excessive daytime sleepiness is a frequent complaint among adolescents. The most-common cause is an inadequate number of hours spent sleeping, because of social schedules and early morning school start times. In addition, for persons of all ages, exposure to blue light-emitting devices, such as smartphones and tablets, prior to falling asleep can contribute to sleep problems, presumably because blue light affects levels of melatonin, which plays a role in sleep induction. Psychological disorders (e.g., major depression), circadian rhythm disorders, or other types of sleep disorders can also cause excessive daytime sleepiness.

Theories of sleep

Two kinds of approaches dominate theories about the functional purpose of sleep. One begins with the measurable physiology of sleep and attempts to relate those findings to certain functions, known or hypothetical. For example, after the discovery of REM sleep was reported in the 1950s, many hypothesized that the function of REM sleep was to replay and reexperience daytime thinking. That was extended to the theory that REM sleep is important for strengthening memories. Later the slow brain waves of NREM sleep gained popularity among scientists who were attempting to demonstrate that sleep physiology plays a role in memory or other alterations in brain function.

Other sleep theories take behavioral consequences of sleep and attempt to find physiological measures to substantiate sleep as the driver of that behaviour. For example, it is known that with less sleep people are more tired and that tiredness can build up over successive nights of inadequate sleep. Thus, sleep plays a critical role in alertness. With that as a starting point, sleep researchers have identified two major factors that appear to drive this function: the circadian pacemaker, lodged deep in the brain in an area of the hypothalamus called the suprachiasmatic nucleus; and the homeostatic regulator, possibly driven by the buildup of certain molecules, such as adenosine, that are breakdown products of cellular metabolism in the brain (interestingly, caffeine blocks the binding of adenosine to receptors on neurons, thereby inhibiting adenosine’s sleep signal).

To describe sleep’s purpose as preventing sleepiness is the equivalent of saying that food’s purpose is to prevent hunger. It is known that food consists of many molecules and substances that drive myriad essential bodily functions and that hunger and satiation are means for the brain to direct behaviour toward eating or not eating. Perhaps sleepiness acts in the same way: a mechanism to lead animals toward a behaviour that achieves sleep, which in turn provides a host of physiological functions.

A broad theory of sleep is necessarily incomplete until scientists gain a full understanding of the functions that sleep plays in all aspects of physiology. Thus, scientists have been reluctant to assign any single purpose to sleep, and in fact many researchers maintain that it is likely more accurate to describe sleep as serving multiple purposes. For example, sleep may facilitate memory formation, boost alertness and attention, stabilize mood, reduce strain on joints and muscles, enhance the immune system, and signal changes in hormone release.

Neural theories

Among neural theories of sleep, there are certain issues that each must face. Is the sleep-wakefulness alternation to be considered a property of individual neurons, making unnecessary the postulation of specific regulative centres, or is it to be assumed that there are some aggregations of neurons that play a dominant role in sleep induction and maintenance? The Russian physiologist Ivan Petrovich Pavlov adopted the former position, proposing that sleep is the result of irradiating inhibition among cortical and subcortical neurons (nerve cells in the outer brain layer and in the brain layers beneath the cortex). Microelectrode studies, on the other hand, have revealed high rates of discharge during sleep from many neurons in the motor and visual areas of the cortex, and it thus seems that, as compared with wakefulness, sleep must consist of a different organization of cortical activity rather than a general overall decline.

Another issue has been whether there is a waking centre, fluctuations in whose level of functioning are responsible for various degrees of wakefulness and sleep, or whether the induction of sleep requires another centre that is actively antagonistic to the waking centre. Early speculation favoured the passive view of sleep. A cerveau isolé preparation, an animal in which a surgical incision high in the midbrain has separated the cerebral hemispheres from sensory input, demonstrated chronic somnolence. It has been reasoned that a similar cutting off of sensory input, functional rather than structural, must characterize natural states of sleep. Other supporting observations for the stimulus-deficiency theory of sleep included presleep rituals such as turning out the lights, regulation of stimulus input, and the facilitation of sleep induction by muscular relaxation. With the discovery of the ascending reticular activating system (ARAS; a network of nerves in the brainstem), it was found that it is not the sensory nerves themselves that maintain cortical arousal but rather the ARAS, which projects impulses diffusely to the cortex from the brainstem. Presumably, sleep would result from interference with the active functioning of the ARAS. Injuries to the ARAS were in fact found to produce sleep. Sleep thus seemed passive, in the sense that it was the absence of something (ARAS support of sensory impulses) characteristic of wakefulness.

Theory has tended to depart from that belief and to move toward conceiving of sleep as an actively produced state. Several kinds of observation have been primarily responsible for the shift. First, earlier studies showing that sleep can be induced directly by electrical stimulation of certain areas in the hypothalamus have been confirmed and extended to other areas in the brain. Second, the discovery of REM sleep has been even more significant in leading theorists to consider the possibility of actively produced sleep. REM sleep, by its very active nature, defies description as a passive state. REM sleep can be eliminated in experimental animals by the surgical destruction of a group of nerve cells in the pons, the active function of which appears to be necessary for REM sleep. Thus, it is difficult to imagine that the various manifestations of REM sleep reflect merely the deactivation of wakefulness mechanisms. Furthermore, sleep is a dynamic process that fluctuates between different states, viewed broadly as stages of REM and NREM and now known to be much more diverse within a particular stage.

Functional theories

Functional theories stress the recuperative and adaptive value of sleep. Sleep arises most unequivocally in animals that maintain a constant body temperature and that can be active at a wide range of environmental temperatures. In such forms, increased metabolic requirements may find partial compensation in periodic decreases in body temperature and metabolic rate (i.e., during NREM sleep). Thus, the parallel evolution of temperature regulation and NREM sleep has suggested to some authorities that NREM sleep may best be viewed as a regulatory mechanism conserving energy expenditure in species whose metabolic requirements are otherwise high. As a solution to the problem of susceptibility to predation that comes with the torpor of sleep, it has been suggested that the periodic reactivation of the organism during sleep better prepares it for a fight-or-flight response and that the possibility of enhanced processing of significant environmental stimuli during REM sleep may even reduce the need for sudden confrontation with danger.

Other functional theorists agree that NREM sleep may be a state of “bodily repair” while suggesting that REM sleep is one of “brain repair” or restitution, a period, for example, of increased cerebral protein synthesis or of “reprogramming” the brain so that information acquired in wakeful functioning is most efficiently assimilated. In specifying such functions and providing evidence for them, these theories remain vague and incomplete. The function of stage 2 NREM sleep is still unclear, for example. Such sleep is present in only rudimentary form in subprimate species yet consumes approximately half of human sleep time. Comparative, physiological, and experimental evidence is unavailable to suggest why so much human sleep is spent in that stage. In fact, poor sleepers whose laboratory sleep records show high proportions of stage 2 and little or no REM sleep often report feeling that they have not slept at all.

Another theory is that of adaptive inactivity. This theory considers that sleep serves a universal function, one in which an animal’s ecological niche shapes its sleep behaviour. For example, carnivores whose prey is nocturnal tend to be most active at night. Thus, the carnivore sleeps during the day, when hunting is inefficient, and thereby conserves energy for hunting at night. Furthermore, if an animal’s predators are active during the day but not at night, the animal is encouraged toward daytime inactivity and hence daytime sleep. For humans the bulk of activity occurs during the day, leaving nighttime as a period for inactivity. In addition, light and dark cycles, which influence circadian rhythm, serve to encourage nighttime inactivity and sleep.

Different theories regarding the function of sleep are not necessarily mutually exclusive. For instance, it is likely that there was evolutionary pressure for rest, enabling the body to conserve energy; sleep served as the extreme form of rest. It is also possible, given that the brain and body would be asleep for extended periods of time, that a highly evolved set of physiological processes recharged by sleep would be highly advantageous. For humans, with their complex brains, the need for the brain to synthesize and strengthen information learned during waking hours would yield a very efficient system: acquire information during the day, strengthen it during sleep, and use that newly formed memory in future waking experiences. In fact, experiments have pointed to sleep as playing an essential role in the modification of memories, particularly in making them stronger (i.e., more resistant to forgetting).



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1872 2023-08-17 00:40:52

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,351

Re: Miscellany

1876) Sore throat

Gist

Sore throat: Pain or irritation in the throat that can occur with or without swallowing; it often accompanies infections such as a cold or flu.

A sore throat may be a symptom of influenza or of other respiratory infections, a result of irritation by foreign objects or fumes, or a reaction to certain drugs.

Summary

Pharyngitis is an inflammatory illness of the mucous membranes and underlying structures of the throat (pharynx). Inflammation usually involves the nasopharynx, uvula, soft palate, and tonsils. The illness can be caused by bacteria, viruses, mycoplasmas, fungi, and parasites and by recognized diseases of uncertain causes.

Pharyngitis often is associated with infection by Streptococcus bacteria, usually as a complication arising from a common cold. The symptoms of streptococcal pharyngitis (commonly known as strep throat) are generally redness and swelling of the throat, pustular fluid on the tonsils or discharged from the mouth, an extremely sore throat felt during swallowing, swelling of lymph nodes, and a slight fever. Children often experience additional symptoms, including abdominal pain, nausea, headache, and irritability.

Diagnosis of pharyngitis is established by a detailed medical history and by physical examination; the cause of pharyngeal inflammation can be determined by throat culture. Usually only the symptoms can be treated—with throat lozenges to control sore throat and with acetaminophen or aspirin to control fever. If a diagnosis of streptococcal infection is established by culture, appropriate antibiotic therapy, usually with penicillin, is instituted. Within approximately three days the fever leaves; the other symptoms may persist for another two to three days.

Viral pharyngitis can produce raised whitish to yellow lesions in the pharynx that are surrounded by reddened tissue. Symptoms of viral pharyngitis typically include fever, headache, and sore throat; symptoms last 4 to 14 days. Lymphatic tissue in the pharynx may also become involved.

A number of other infectious diseases may cause pharyngitis, including tuberculosis, syphilis, diphtheria, and meningitis.

Details

Sore throat, also known as throat pain, is pain or irritation of the throat. Usually, causes of sore throat include:

* viral infections
* group A streptococcal infection (GAS) bacterial infection
* pharyngitis (inflammation of the throat)
* tonsillitis (inflammation of the tonsils)
* dehydration, which leads to the throat drying up.

The majority of sore throats are caused by a virus, for which antibiotics are not helpful. A strong association between antibiotic misuse and antibiotic resistance has been shown.

For sore throat caused by bacteria (GAS), treatment with antibiotics may help the person get better faster, reduce the risk that the bacterial infection spreads, prevent retropharyngeal abscesses and quinsy, and reduce the risk of other complications such as rheumatic fever and rheumatic heart disease. In most developed countries, post-streptococcal diseases have become far less common. For this reason, awareness and public health initiatives that promote minimizing the use of antibiotics for viral infections have become the focus.

Approximately 35% of childhood sore throats and 5–25% of adult sore throats are caused by a bacterial infection from group A streptococcus. Sore throats that are "non-group A streptococcus" are assumed to be caused by a viral infection. Sore throat is a common reason for people to visit their primary care doctors and the top reason for antibiotic prescriptions by primary care practitioners such as family doctors. In the United States, about 1% of all visits to hospital emergency departments, physician offices and medical clinics, and outpatient clinics are for sore throat (over 7 million visits for adults and 7 million visits for children per year).

Definition

A sore throat is pain felt anywhere in the throat.

Presentation

Symptoms of sore throat include:

* a scratchy sensation
* pain during swallowing
* discomfort while speaking
* a burning sensation
* swelling in the neck

Diagnosis

The most common cause (80%) is acute viral pharyngitis, a viral infection of the throat. Other causes include other bacterial infections (such as group A streptococcus or streptococcal pharyngitis), trauma, and tumors. Gastroesophageal (acid) reflux disease can cause stomach acid to back up into the throat and also cause the throat to become sore. In children, streptococcal pharyngitis is the cause of 35–37% of sore throats.

The symptoms of a viral infection and a bacterial infection may be very similar. Some clinical guidelines suggest that the cause of a sore throat be confirmed prior to prescribing antibiotic therapy and recommend antibiotics only for children who are at high risk of non-suppurative complications. A group A streptococcus infection can be diagnosed by throat culture or a rapid test. In order to perform a throat culture, a sample from the throat (obtained by swabbing) is cultured (grown) on a blood agar plate to confirm the presence of group A streptococcus. Throat cultures can detect infection even in people with a low bacterial count (high sensitivity); however, results usually take about 48 hours.

Clinicians often also make treatment decisions based on the person's signs and symptoms alone. In the US, approximately two-thirds of adults and half of children with sore throat are diagnosed based on symptoms and are not tested for the presence of GAS to confirm a bacterial infection.

Rapid tests to detect GAS (bacteria) give a positive or negative result that is usually based on a colour change on a test strip exposed to a throat swab (sample). Test strips detect a cell wall carbohydrate that is specific to GAS by using an immunologic reaction. Rapid testing can be performed in the doctor's office and usually takes 5–10 minutes for the test strip to indicate the result. Specificity for most rapid tests is approximately 95%; however, sensitivity is about 85%. Although the use of rapid testing has been linked with an overall reduction in antibiotic prescriptions, further research is necessary to understand other outcomes such as safety, and when the person starts to feel better.

Numerous clinical scoring systems (decision tools) have also been developed to support clinical decisions. Scoring systems that have been proposed include the Centor score, the McIsaac score, and FeverPAIN. A clinical scoring system is often used along with a rapid test. The scoring systems use observed signs and symptoms in order to determine the likelihood of a bacterial infection, as the sketch after this paragraph illustrates.
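
To make concrete how such a decision tool combines signs and symptoms, here is a minimal Python sketch based on the widely published McIsaac (age-adjusted Centor) criteria. The specific criteria and point values are assumptions drawn from general knowledge rather than from this text, and the sketch is for illustration only, not medical advice.

    def mcisaac_score(fever_over_38, no_cough, tender_anterior_nodes,
                      tonsillar_exudate, age):
        # One point for each of the four Centor criteria present.
        score = sum([fever_over_38, no_cough, tender_anterior_nodes,
                     tonsillar_exudate])
        # McIsaac age adjustment (assumed): +1 for ages 3-14, -1 for 45+.
        if 3 <= age <= 14:
            score += 1
        elif age >= 45:
            score -= 1
        return score

    # Example: an adult with fever and tonsillar exudate but with a cough.
    print(mcisaac_score(True, False, False, True, age=30))  # prints 2

Higher scores correspond to a higher estimated likelihood of a GAS infection and, in published guidance, to a lower threshold for rapid testing or culture.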

Management

Sore or scratchy throat can temporarily be relieved by gargling a solution of 1/4 to 1/2 teaspoon salt dissolved in an 8-ounce or 230 ml glass of water.

Pain medications such as non-steroidal anti-inflammatory drugs (NSAIDs) and paracetamol (acetaminophen) help in the management of pain. The use of corticosteroids seems to increase slightly the likelihood of resolution and the reduction of pain, but more analysis is necessary to ensure that this minimal benefit outweighs the risks. Antibiotics probably reduce pain, diminish headaches and could prevent some sore throat complications; but, since these effects are small, they must be balanced with the threat of antimicrobial resistance. It is not known whether antibiotics are effective for preventing recurrent sore throat.

There is an old wives' tale that having a hot drink can help with common cold and influenza symptoms, including sore throat, but there is only limited evidence to support this idea. If the sore throat is unrelated to a cold and is caused by, for example, tonsillitis, a cold drink may be helpful.

Other remedies, such as lozenges, can also help people cope with a sore throat.

Without active treatment, symptoms usually last two to seven days.

Statistics

In the United States, there are about 2.4 million emergency department visits with throat-related complaints per year.

Additional Information

A sore throat is pain, scratchiness or irritation of the throat that often worsens when you swallow. The most common cause of a sore throat (pharyngitis) is a viral infection, such as a cold or the flu. A sore throat caused by a virus resolves on its own.

Strep throat (streptococcal infection), a less common type of sore throat caused by bacteria, requires treatment with antibiotics to prevent complications. Other less common causes of sore throat might require more complex treatment.

Symptoms

Symptoms of a sore throat can vary depending on the cause. Signs and symptoms might include:

* Pain or a scratchy sensation in the throat
* Pain that worsens with swallowing or talking
* Difficulty swallowing
* Sore, swollen glands in your neck or jaw
* Swollen, red tonsils
* White patches or pus on your tonsils
* A hoarse or muffled voice

Infections causing a sore throat might result in other signs and symptoms, including:

* Fever
* Cough
* Runny nose
* Sneezing
* Body aches
* Headache
* Nausea or vomiting

When to see a doctor

Take your child to a doctor if your child's sore throat doesn't go away with the first drink in the morning, recommends the American Academy of Pediatrics.

Get immediate care if your child has severe signs and symptoms such as:

* Difficulty breathing
* Difficulty swallowing
* Unusual drooling, which might indicate an inability to swallow

If you're an adult, see your doctor if you have a sore throat and any of the following associated problems, according to the American Academy of Otolaryngology — Head and Neck Surgery:

* A sore throat that is severe or lasts longer than a week
* Difficulty swallowing
* Difficulty breathing
* Difficulty opening your mouth
* Joint pain
* Earache
* Rash
* Fever higher than 101 °F (38.3 °C)
* Blood in your saliva or phlegm
* Frequently recurring sore throats
* A lump in your neck
* Hoarseness lasting more than two weeks
* Swelling in your neck or face.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1873 2023-08-18 00:32:44

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,351

Re: Miscellany

1877) Speed

Gist

Speed is rapidity in moving, going, traveling, proceeding, or performing; swiftness; celerity; relative rapidity in moving, going, etc.; rate of motion or progress.

Summary

The rate at which an object covers distance is known as its speed. Speed can be measured in a variety of ways, such as kilometers per hour (km/h) or meters per second (m/s). One can determine the average speed at which an object such as a car travels over a certain distance by dividing the distance it travels by the time it takes to make the journey. This can be summarized as

Speed = Distance / Time.

According to this equation, if a car travels a distance of 120 kilometers in 2 hours, it has an average speed of

120 km / 2 hours,

or 60 kilometers per hour. This is equivalent to 1,000 meters per minute or 16.7 meters per second.

While the average speed of the car is 60 kilometers per hour, over the course of its trip its speed probably varied considerably. At times, it may have sped up to 100 kilometers per hour, and at others it may have slowed to 15 kilometers per hour or temporarily stopped at a stoplight. The speed of an object at any given moment is known as its instantaneous speed. This is what the speedometer of a car measures. The instantaneous speed represents the distance traveled over a very short period of time (an instant) divided by that very short period of time. The average speed is equal to the total distance traveled divided by the total time of the journey.

Another example can help illustrate how to calculate average speed. In 1996 scientists at NASA sent a small robot to explore the surface of Mars. They called the mission spacecraft Pathfinder and the mobile robot Sojourner. The scientists used radio waves to communicate with Sojourner via Pathfinder. Radio waves travel at about 300,000 kilometers per second. Using the equation for speed, one can calculate how long it would take a message to reach Earth from Pathfinder when Earth was relatively close to Mars, about 190 million kilometers away:

Speed = Distance/Time

t = Distance/Speed

t = 190,000,000 km/300,000 km/s.

t ≈ 633 seconds, or 10 minutes 33 seconds.

It would take about 10 minutes 33 seconds for a message to travel between Pathfinder and Earth. The scientists on Earth could not rely on rapid communication with their robot.
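
As a quick check of the arithmetic above, the following Python sketch (illustrative only) recomputes the one-way signal delay from the distance and speed used in the example.

    # One-way travel time of a radio signal: t = Distance / Speed.
    distance_km = 190_000_000   # Earth-Mars distance used in the example
    speed_km_per_s = 300_000    # approximate speed of radio waves (light)

    t = distance_km / speed_km_per_s       # time in seconds
    minutes, seconds = divmod(round(t), 60)
    print(f"{t:.0f} s = {minutes} min {seconds} s")  # 633 s = 10 min 33 s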

Details

In everyday use and in kinematics, the speed (commonly referred to as v) of an object is the magnitude of the change of its position over time or the magnitude of the change of its position per unit of time; it is thus a scalar quantity. The average speed of an object in an interval of time is the distance travelled by the object divided by the duration of the interval; the instantaneous speed is the limit of the average speed as the duration of the time interval approaches zero. Speed is the magnitude of velocity (a vector), which indicates additionally the direction of motion.

Speed has the dimensions of distance divided by time. The SI unit of speed is the metre per second (m/s), but the most common unit of speed in everyday usage is the kilometre per hour (km/h) or, in the US and the UK, miles per hour (mph). For air and marine travel, the knot is commonly used.

The fastest possible speed at which energy or information can travel, according to special relativity, is the speed of light in vacuum c = 299,792,458 metres per second (approximately 1,079,000,000 km/h or 671,000,000 mph). Matter cannot quite reach the speed of light, as this would require an infinite amount of energy. In relativity physics, the concept of rapidity replaces the classical idea of speed.

Definition:

Historical definition

Italian physicist Galileo Galilei is usually credited with being the first to measure speed by considering the distance covered and the time it takes. Galileo defined speed as the distance covered per unit of time. In equation form, that is

v = d / t,

where v is speed, d is distance, and t is time. A cyclist who covers 30 metres in a time of 2 seconds, for example, has a speed of 15 metres per second. Objects in motion often have variations in speed (a car might travel along a street at 50 km/h, slow to 0 km/h, and then reach 30 km/h).

Instantaneous speed

Speed at some instant, or assumed constant during a very short period of time, is called instantaneous speed. By looking at a speedometer, one can read the instantaneous speed of a car at any instant. A car travelling at 50 km/h generally goes for less than one hour at a constant speed, but if it did go at that speed for a full hour, it would travel 50 km. If the vehicle continued at that speed for half an hour, it would cover half that distance (25 km). If it continued for only one minute, it would cover about 833 m.

In mathematical terms, the instantaneous speed v is defined as the magnitude of the instantaneous velocity, that is, the magnitude of the derivative of the position r with respect to time:

v = |dr/dt|.

If s is the length of the path (also known as the distance) travelled until time t, the speed equals the time derivative of s:

v = ds/dt.

In the special case where the velocity is constant (that is, constant speed in a straight line), this can be simplified to v = s/t. The average speed over a finite time interval is the total distance travelled divided by the time duration.

Average speed

Different from instantaneous speed, average speed is defined as the total distance covered divided by the time interval. For example, if a distance of 80 kilometres is driven in 1 hour, the average speed is 80 kilometres per hour. Likewise, if 320 kilometres are travelled in 4 hours, the average speed is also 80 kilometres per hour. When a distance in kilometres (km) is divided by a time in hours (h), the result is in kilometres per hour (km/h).

Average speed does not describe the speed variations that may have taken place during shorter time intervals (as it is the entire distance covered divided by the total time of travel), and so average speed is often quite different from a value of instantaneous speed. If the average speed and the time of travel are known, the distance travelled can be calculated by rearranging the definition to

Distance = Average speed × Time.

Using this equation for an average speed of 80 kilometres per hour on a 4-hour trip, the distance covered is found to be 320 kilometres.

Expressed in graphical language, the slope of a tangent line at any point of a distance-time graph is the instantaneous speed at that point, while the slope of a chord line of the same graph is the average speed during the time interval covered by the chord. The average speed of an object is thus Vav = s ÷ t, where s is the total distance travelled in time t.
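
The tangent-versus-chord distinction can be checked numerically. The Python sketch below, an illustration using a made-up position function s(t) = 5t², compares the average speed over an interval (the slope of a chord) with an instantaneous speed approximated by the slope over a very short interval.

    def s(t):
        # Hypothetical position function: distance in metres after t seconds.
        return 5 * t**2

    # Average speed over [0, 4] s: slope of the chord.
    avg = (s(4) - s(0)) / (4 - 0)   # 20 m/s

    # Instantaneous speed at t = 4 s: slope over a very short interval,
    # approximating the derivative ds/dt (exactly 40 m/s here).
    dt = 1e-6
    inst = (s(4 + dt) - s(4)) / dt

    print(avg, round(inst, 3))      # 20.0 40.0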

Difference between speed and velocity

Speed denotes only how fast an object is moving, whereas velocity describes both how fast and in which direction the object is moving. If a car is said to travel at 60 km/h, its speed has been specified. However, if the car is said to move at 60 km/h to the north, its velocity has now been specified.

The big difference can be discerned when considering movement around a circle. When something moves in a circular path and returns to its starting point, its average velocity is zero, but its average speed is found by dividing the circumference of the circle by the time taken to move around the circle. This is because the average velocity is calculated by considering only the displacement between the starting and end points, whereas the average speed considers only the total distance travelled.

Tangential speed

Tangential speed is the speed of something moving along a circular path. A point on the outside edge of a merry-go-round or turntable travels a greater distance in one complete rotation than a point nearer the center. Travelling a greater distance in the same time means a greater speed, and so linear speed is greater on the outer edge of a rotating object than it is closer to the axis. This speed along a circular path is known as tangential speed because the direction of motion is tangent to the circumference of the circle. For circular motion, the terms linear speed and tangential speed are used interchangeably, and both use units of m/s, km/h, and others.
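
Since the distance covered in one rotation is the circumference 2πr, the tangential speed for a rotation period T is v = 2πr / T. A small illustrative Python sketch (the merry-go-round numbers are assumed examples):

    import math

    def tangential_speed(radius_m, period_s):
        # Circumference per rotation divided by the rotation period.
        return 2 * math.pi * radius_m / period_s

    # Two points on a merry-go-round completing one rotation every 8 s:
    print(tangential_speed(1.0, 8))  # near the centre: about 0.79 m/s
    print(tangential_speed(3.0, 8))  # outer edge: about 2.36 m/s, three times faster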

Units

Units of speed include:

* metres per second (symbol m/s), the SI derived unit;
* kilometres per hour (symbol km/h);
* miles per hour (symbol mi/h or mph);
* knots (nautical miles per hour, symbol kn or kt);
* feet per second (symbol fps or ft/s);
* Mach number (dimensionless), speed divided by the speed of sound;
* in natural units (dimensionless), speed divided by the speed of light in vacuum (symbol c = 299,792,458 m/s).

Psychology

According to Jean Piaget, the intuition for the notion of speed in humans precedes that of duration, and is based on the notion of outdistancing. Piaget studied this subject inspired by a question asked to him in 1928 by Albert Einstein: "In what order do children acquire the concepts of time and speed?" Children's early concept of speed is based on "overtaking", taking only temporal and spatial orders into consideration, specifically: "A moving object is judged to be more rapid than another when at a given moment the first object is behind and a moment or so later ahead of the other object."

Additional Information:

Sound

The speed of sound is the distance travelled per unit of time by a sound wave as it propagates through an elastic medium. At 20 °C (68 °F), the speed of sound in air is about 343 metres per second (1,125 ft/s; 1,235 km/h; 767 mph; 667 kn), or one kilometre in 2.91 s or one mile in 4.69 s. It depends strongly on temperature as well as the medium through which a sound wave is propagating. At 0 °C (32 °F), the speed of sound in air is about 331 m/s (1,086 ft/s; 1,192 km/h; 740 mph; 643 kn). More simply, the speed of sound is how fast vibrations travel.

The speed of sound in an ideal gas depends only on its temperature and composition. The speed has a weak dependence on frequency and pressure in ordinary air, deviating slightly from ideal behavior. In colloquial speech, speed of sound refers to the speed of sound waves in air. However, the speed of sound varies from substance to substance: typically, sound travels most slowly in gases, faster in liquids, and fastest in solids. For example, while sound travels at 343 m/s in air, it travels at 1,481 m/s in water (almost 4.3 times as fast) and at 5,120 m/s in iron (almost 15 times as fast). In an exceptionally stiff material such as diamond, sound travels at 12,000 metres per second (39,000 ft/s), about 35 times its speed in air and about the fastest it can travel under normal conditions. In theory, the speed of sound is actually the speed of vibrations. Sound waves in solids are composed of compression waves (just as in gases and liquids) and a different type of sound wave called a shear wave, which occurs only in solids. Shear waves in solids usually travel at different speeds than compression waves, as exhibited in seismology. The speed of compression waves in solids is determined by the medium's compressibility, shear modulus, and density. The speed of shear waves is determined only by the solid material's shear modulus and density.
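
The "times as fast" figures above are simply ratios of the quoted speeds; a quick illustrative check in Python:

    # Speeds of sound taken from the text, in m/s.
    air, water, iron, diamond = 343, 1481, 5120, 12000
    for name, v in [("water", water), ("iron", iron), ("diamond", diamond)]:
        print(name, round(v / air, 1))  # water 4.3, iron 14.9, diamond 35.0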

In fluid dynamics, the speed of sound in a fluid medium (gas or liquid) is used as a relative measure for the speed of an object moving through the medium. The ratio of the speed of an object to the speed of sound (in the same medium) is called the object's Mach number. Objects moving at speeds greater than the speed of sound (Mach 1) are said to be traveling at supersonic speeds.

Light

The speed of light in vacuum, commonly denoted c, is a universal physical constant that is exactly equal to 299,792,458 metres per second (approximately 300,000 kilometres per second; 186,000 miles per second; 671 million miles per hour). According to the special theory of relativity, c is the upper limit for the speed at which conventional matter or energy (and thus any signal carrying information) can travel through space.

All forms of electromagnetic radiation, including visible light, travel at the speed of light. For many practical purposes, light and other electromagnetic waves will appear to propagate instantaneously, but for long distances and very sensitive measurements, their finite speed has noticeable effects. Any starlight viewed on Earth is from the distant past, allowing humans to study the history of the universe by viewing distant objects. When communicating with distant space probes, it can take minutes to hours for signals to travel. In computing, the speed of light fixes the ultimate minimum communication delay. The speed of light can be used in time of flight measurements to measure large distances to extremely high precision.

Ole Rømer first demonstrated in 1676 that light does not travel instantaneously by studying the apparent motion of Jupiter's moon Io. Progressively more accurate measurements of its speed came over the following centuries. In a paper published in 1865, James Clerk Maxwell proposed that light was an electromagnetic wave and, therefore, travelled at speed c. In 1905, Albert Einstein postulated that the speed of light c with respect to any inertial frame of reference is a constant and is independent of the motion of the light source. He explored the consequences of that postulate by deriving the theory of relativity and, in doing so, showed that the parameter c had relevance outside of the context of light and electromagnetism.

Massless particles and field perturbations, such as gravitational waves, also travel at speed c in vacuum. Such particles and waves travel at c regardless of the motion of the source or the inertial reference frame of the observer. Particles with nonzero rest mass can be accelerated to approach c but can never reach it, regardless of the frame of reference in which their speed is measured. In the special and general theories of relativity, c interrelates space and time and also appears in the famous equation of mass–energy equivalence, E = mc².

In some cases, objects or waves may appear to travel faster than light (e.g., phase velocities of waves, the appearance of certain high-speed astronomical objects, and particular quantum effects). The expansion of the universe is understood to exceed the speed of light beyond a certain boundary.

The speed at which light propagates through transparent materials, such as glass or air, is less than c; similarly, the speed of electromagnetic waves in wire cables is slower than c. The ratio between c and the speed v at which light travels in a material is called the refractive index n of the material.

Animals

The cheetah is the world's fastest land mammal. With acceleration that would leave most automobiles in the dust, a cheetah can go from zero to 60 miles an hour in only three seconds. Wild cheetahs are thought to be able to reach speeds of nearly 70 miles an hour—although they can only sustain that speed for about 30 seconds. These cats are nimble at high speeds, able to make quick and sudden turns in pursuit of prey.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1874 2023-08-19 00:20:44

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,351

Re: Miscellany

1878) Pixel

Gist

Pixel is any one of the very small dots that together form the picture on a television screen, computer monitor, etc.

Summary

Pixel, in full picture element, is the smallest resolved unit of a video image that has specific luminescence and colour. Its proportions are determined by the number of lines making up the scanning raster (the pattern of dots that form the image) and the resolution along each line. In the most common form of computer graphics, the thousands of tiny pixels that make up an individual image are projected onto a display screen as illuminated dots that from a distance appear as a continuous image. An electron beam creates the grid of pixels by tracing each horizontal line from left to right, one pixel at a time, from the top line to the bottom line. A pixel may also be the smallest element of a light-sensitive device, such as cameras that use charge-coupled devices.

Display resolution is the number of pixels shown on a screen, expressed as the number of pixels across by the number of pixels high. For example, a 4K organic light-emitting diode (OLED) television’s display might measure 3,840 × 2,160. This indicates that the screen is 3,840 pixels wide and 2,160 pixels high and thus contains 8,294,400 pixels in total.

Pixels are the smallest physical units and base components of any image displayed on a screen. The higher a screen’s resolution, the more pixels that screen can display. More pixels allow for visual information on a screen to be seen in greater clarity and detail, making screens with higher display resolutions more desirable to consumers.

A screen’s display resolution is simply measured in terms of the display rectangle’s width and height. For screens on phones, monitors, televisions, and so forth, display resolution is commonly defined the same way.

However, it is important to note that the use of terms such as “High Definition” to refer to display resolution, while common, is technically incorrect; these terms actually refer to video or image formats. Furthermore, providing just the dimensions associated with various display resolutions is inadequate; for example, a 4-inch screen does not offer the same clarity as a 3.5-inch screen if the two screens have the same number of pixels.

Accurately measuring a screen’s display resolution is accomplished not by calculating the total number of its pixels but by finding its pixel density, which is the number of pixels per unit area on the screen. For digital measurement, pixel density is measured in PPI (pixels per inch), sometimes incorrectly referred to as DPI (dots per inch). In analog measurement, the horizontal resolution is measured by the same distance as the screen’s height. A screen that is 10 inches high, for instance, would have its pixel density measured within 10 linear inches of space.
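 
One standard way to compute a screen's pixel density in PPI is to divide the diagonal resolution in pixels (obtained from the width and height via the Pythagorean theorem) by the physical diagonal in inches. The Python sketch below is illustrative; the 27-inch 4K monitor is an assumed example, not taken from the text.

    import math

    def ppi(width_px, height_px, diagonal_in):
        # Diagonal resolution in pixels divided by the physical
        # diagonal in inches gives pixels per inch.
        return math.hypot(width_px, height_px) / diagonal_in

    print(round(ppi(3840, 2160, 27)))  # about 163 PPI for a 27-inch 4K screen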

Sometimes an i or a p adjoins a statement of a screen’s resolution—e.g., 1080i or 1080p. These letters stand for “interlaced scan” and “progressive scan,” respectively, and have to do with how the pixels change to make images. On all monitors, pixels are “painted” line by line. Interlaced displays paint all the odd lines of an image first, then the even ones. By only painting half the image at a time—at a speed so fast people will not notice—interlaced displays use less bandwidth. Progressive displays, which became the universal standard in the 21st century, paint the lines in order.
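
To make the painting order concrete, here is a tiny illustrative Python sketch listing the order in which the lines of a toy 8-line display would be painted under each scheme.

    def paint_order(n_lines, interlaced):
        if interlaced:
            # Odd-numbered lines first, then the even-numbered lines.
            return list(range(1, n_lines + 1, 2)) + list(range(2, n_lines + 1, 2))
        # Progressive scan paints every line in order.
        return list(range(1, n_lines + 1))

    print(paint_order(8, interlaced=True))   # [1, 3, 5, 7, 2, 4, 6, 8]
    print(paint_order(8, interlaced=False))  # [1, 2, 3, 4, 5, 6, 7, 8]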

The first television broadcasts, which occurred between the late 1920s and early 1930s, offered between just 24 and 96 lines of resolution. In the United States, standards were developed in the early 1940s by the National Television System Committee (NTSC) in advance of the television-ownership boom of 1948; standards were then modified in 1953 to include colour programming. NTSC-standard signals, broadcast over VHF and UHF radio, carried 525 lines of resolution, roughly 480 of which contributed to an image. In Europe, televisions used different standards: Sequential Color with Memory (SECAM) and Phase Alternating Line (PAL), each of which sent out 625 lines.

There was little improvement in resolution until the 1990s, as analog signals lacked the additional bandwidth necessary to increase the number of lines. But the mid-to-late 1990s saw the introduction of digital broadcasting, wherein broadcasters could compress their data to free up additional bandwidth. The Advanced Television Systems Committee (ATSC) introduced new “high-definition” television standards, which included 480 progressive and interlaced (480p, 480i), 720 progressive (720p), and 1080 progressive and interlaced (1080p, 1080i). Most major display manufacturers offered “4K” or “Ultra HD” (3,840 × 2,160) screens by 2015. That year, Sharp introduced the world’s first 8K (7,680 × 4,320) television set.

The resolution of personal computer screens developed more gradually, albeit over a shorter period of time. Initially, many personal computers used television receivers as their display devices, thereby shackling them to NTSC standards. Common resolutions included 160 × 200, 320 × 200, and 640 × 200. Home computers such as the Commodore Amiga and the Atari Falcon later introduced 640 × 400i resolution (720 × 480i with the borders disabled). In the late 1980s, the IBM PS/2 VGA (multicolour) presented consumers with 640 × 480, which became the standard until the mid-1990s, when it was replaced by 800 × 600. Around the turn of the century, websites and multimedia products were optimized for the new, bestselling 1,024 × 768. As of 2023, the two most common resolutions for desktop computer monitors are 1,366 × 768 and 1,920 × 1,080. For television sets, the most common resolution is 1,920 × 1,080p.

Details

In digital imaging, a pixel (abbreviated px), pel, or picture element is the smallest addressable element in a raster image, or the smallest addressable element in a dot matrix display device. In most digital display devices, pixels are the smallest element that can be manipulated through software.

Each pixel is a sample of an original or synthetic image; more samples typically provide more accurate representations of the original. The intensity of each pixel is variable. In color imaging systems, a color is typically represented by three or four component intensities such as red, green, and blue, or cyan, magenta, yellow, and black.

In some contexts (such as descriptions of camera sensors), pixel refers to a single scalar element of a multi-component representation (called a photosite in the camera sensor context, although sensel is sometimes used), while in yet other contexts (like MRI) it may refer to a set of component intensities for a spatial position.

Software on early consumer computers was necessarily rendered at a low resolution, with large pixels visible to the naked eye; graphics made under these limitations may be called pixel art, especially in reference to video games. Modern computers and displays, however, can easily render orders of magnitude more pixels than was previously possible, necessitating the use of large measurements like the megapixel (one million pixels).

Etymology

The word pixel is a combination of pix (from "pictures", shortened to "pics") and el (for "element"); similar formations with 'el' include the words voxel and texel. The word pix appeared in Variety magazine headlines in 1932, as an abbreviation for the word pictures, in reference to movies. By 1938, "pix" was being used in reference to still pictures by photojournalists.

The word "pixel" was first published in 1965 by Frederic C. Billingsley of JPL, to describe the picture elements of scanned images from space probes to the Moon and Mars. Billingsley had learned the word from Keith E. McFarland, at the Link Division of General Precision in Palo Alto, who in turn said he did not know where it originated. McFarland said simply it was "in use at the time" (circa 1963).

The concept of a "picture element" dates to the earliest days of television, for example as "Bildpunkt" (the German word for pixel, literally 'picture point') in the 1888 German patent of Paul Nipkow. According to various etymologies, the earliest publication of the term picture element itself was in Wireless World magazine in 1927, though it had been used earlier in various U.S. patents filed as early as 1911.

Some authors explain pixel as picture cell, as early as 1972. In graphics and in image and video processing, pel is often used instead of pixel. For example, IBM used it in their Technical Reference for the original PC.

Pixilation, spelled with a second i, is an unrelated filmmaking technique that dates to the beginnings of cinema, in which live actors are posed frame by frame and photographed to create stop-motion animation. An archaic British word meaning "possession by spirits (pixies)", the term has been used to describe the animation process since the early 1950s; various animators, including Norman McLaren and Grant Munro, are credited with popularizing it.

Technical

A pixel is generally thought of as the smallest single component of a digital image. However, the definition is highly context-sensitive. For example, there can be "printed pixels" in a page, or pixels carried by electronic signals, or represented by digital values, or pixels on a display device, or pixels in a digital camera (photosensor elements). This list is not exhaustive and, depending on context, synonyms include pel, sample, byte, bit, dot, and spot. Pixels can be used as a unit of measure such as: 2400 pixels per inch, 640 pixels per line, or spaced 10 pixels apart.

The measures "dots per inch" (dpi) and "pixels per inch" (ppi) are sometimes used interchangeably, but have distinct meanings, especially for printer devices, where dpi is a measure of the printer's density of dot (e.g. ink droplet) placement. For example, a high-quality photographic image may be printed with 600 ppi on a 1200 dpi inkjet printer. Even higher dpi numbers, such as the 4800 dpi quoted by printer manufacturers since 2002, do not mean much in terms of achievable resolution.

The more pixels used to represent an image, the closer the result can resemble the original. The number of pixels in an image is sometimes called the resolution, though resolution has a more specific definition. Pixel counts can be expressed as a single number, as in a "three-megapixel" digital camera, which has a nominal three million pixels, or as a pair of numbers, as in a "640 by 480 display", which has 640 pixels from side to side and 480 from top to bottom (as in a VGA display) and therefore has a total number of 640 × 480 = 307,200 pixels, or 0.3 megapixels.
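
The pixel-count arithmetic in the paragraph above is easy to reproduce; a one-line illustrative check in Python:

    width, height = 640, 480
    total = width * height
    print(total, f"{total / 1_000_000:.1f} megapixels")  # 307200 0.3 megapixels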

The pixels, or color samples, that form a digitized image (such as a JPEG file used on a web page) may or may not be in one-to-one correspondence with screen pixels, depending on how a computer displays an image. In computing, an image composed of pixels is known as a bitmapped image or a raster image. The word raster originates from television scanning patterns, and has been widely used to describe similar halftone printing and storage techniques.

Sampling patterns

For convenience, pixels are normally arranged in a regular two-dimensional grid. By using this arrangement, many common operations can be implemented by uniformly applying the same operation to each pixel independently. Other arrangements of pixels are possible, with some sampling patterns even changing the shape (or kernel) of each pixel across the image. For this reason, care must be taken when acquiring an image on one device and displaying it on another, or when converting image data from one pixel format to another.

For example:

* LCD screens typically use a staggered grid, where the red, green, and blue components are sampled at slightly different locations. Subpixel rendering is a technology which takes advantage of these differences to improve the rendering of text on LCD screens.
* The vast majority of color digital cameras use a Bayer filter, resulting in a regular grid of pixels where the color of each pixel depends on its position on the grid.
* A clipmap uses a hierarchical sampling pattern, where the size of the support of each pixel depends on its location within the hierarchy.
* Warped grids are used when the underlying geometry is non-planar, such as images of the earth from space.
* The use of non-uniform grids is an active research area, attempting to bypass the traditional Nyquist limit.
* Pixels on computer monitors are normally "square" (that is, have equal horizontal and vertical sampling pitch); pixels in other systems are often "rectangular" (that is, have unequal horizontal and vertical sampling pitch – oblong in shape), as are digital video formats with diverse aspect ratios, such as the anamorphic widescreen formats of the Rec. 601 digital video standard.

Resolution of computer monitors

Computers can use pixels to display an image, often an abstract image that represents a GUI. The resolution of this image is called the display resolution and is determined by the video card of the computer. LCD monitors also use pixels to display an image, and have a native resolution. Each pixel is made up of triads, with the number of these triads determining the native resolution. On some CRT monitors, the beam sweep rate may be fixed, resulting in a fixed native resolution. Most CRT monitors do not have a fixed beam sweep rate, meaning they do not have a native resolution at all; instead, they have a set of resolutions that are equally well supported. To produce the sharpest images possible on an LCD, the user must ensure the display resolution of the computer matches the native resolution of the monitor.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1875 2023-08-20 00:35:41

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,351

Re: Miscellany

1879) Pathologist

Gist

A pathologist is a medical healthcare provider who examines bodies and body tissues. He or she is also responsible for performing lab tests. A pathologist helps other healthcare providers reach diagnoses and is an important member of the treatment team.

Summary

Pathology is the study of the causes and effects of disease or injury. The word pathology also refers to the study of disease in general, incorporating a wide range of biology research fields and medical practices. However, when used in the context of modern medical treatment, the term is often used in a narrower fashion to refer to processes and tests that fall within the contemporary medical field of "general pathology", an area which includes a number of distinct but inter-related medical specialties that diagnose disease, mostly through analysis of tissue and human cell samples. Idiomatically, "a pathology" may also refer to the predicted or actual progression of particular diseases (as in the statement "the many different forms of cancer have diverse pathologies", in which case a more proper choice of word would be "pathophysiologies"), and the affix pathy is sometimes used to indicate a state of disease in cases of both physical ailment (as in cardiomyopathy) and psychological conditions (such as psychopathy). A physician practicing pathology is called a pathologist.

As a field of general inquiry and research, pathology addresses components of disease: cause, mechanisms of development (pathogenesis), structural alterations of cells (morphologic changes), and the consequences of changes (clinical manifestations). In common medical practice, general pathology is mostly concerned with analyzing known clinical abnormalities that are markers or precursors for both infectious and non-infectious disease, and is conducted by experts in one of two major specialties, anatomical pathology and clinical pathology. Further divisions in specialty exist on the basis of the involved sample types (comparing, for example, cytopathology, hematopathology, and histopathology), organs (as in renal pathology), and physiological systems (oral pathology), as well as on the basis of the focus of the examination (as with forensic pathology).

Pathology is a significant field in modern medical diagnosis and medical research.

History

The study of pathology, including the detailed examination of the body through dissection and inquiry into specific maladies, dates back to antiquity. Rudimentary understanding of many conditions was present in most early societies and is attested to in the records of the earliest historical societies, including those of the Middle East, India, and China. By the Hellenic period of ancient Greece, a concerted causal study of disease was underway, with many notable early physicians (such as Hippocrates, for whom the modern Hippocratic Oath is named) having developed methods of diagnosis and prognosis for a number of diseases. The medical practices of the Romans and those of the Byzantines continued from these Greek roots, but, as with many areas of scientific inquiry, growth in the understanding of medicine stagnated somewhat after the Classical Era, though it continued to develop slowly throughout numerous cultures. Notably, many advances were made in the medieval era of Islam, during which numerous texts on complex pathologies were produced, also based on the Greek tradition. Even so, growth in the complex understanding of disease mostly languished until knowledge and experimentation again began to proliferate in the Renaissance, Enlightenment, and Baroque eras, following the resurgence of the empirical method at new centers of scholarship. By the 17th century, the study of rudimentary microscopy was underway and examination of tissues had led British Royal Society member Robert Hooke to coin the word "cell", setting the stage for later germ theory.

Modern pathology began to develop as a distinct field of inquiry during the 19th century through the work of natural philosophers and physicians who studied disease and informally studied what they termed "pathological anatomy" or "morbid anatomy". However, pathology as a formal area of specialty was not fully developed until the late 19th and early 20th centuries, with the advent of detailed study of microbiology. In the 19th century, physicians had begun to understand that disease-causing pathogens, or "germs" (a catch-all for disease-causing, or pathogenic, microbes, such as bacteria, viruses, fungi, amoebae, molds, protists, and prions), existed and were capable of reproduction and multiplication, replacing earlier beliefs in humors or even spiritual agents that had dominated for much of the previous 1,500 years in European medicine. With the new understanding of causative agents, physicians began to compare the characteristics of one germ's symptoms as they developed within an affected individual to another germ's characteristics and symptoms. This approach led to the foundational understanding that diseases are able to replicate themselves, and that they can have many profound and varied effects on the human host. To determine causes of diseases, medical experts used the most common and widely accepted assumptions or symptoms of their times, a general principle of approach that persists into modern medicine.

Modern medicine was particularly advanced by further developments of the microscope to analyze tissues, to which Rudolf Virchow made a significant contribution, leading to a slew of research developments. By the late 1920s to early 1930s pathology was deemed a medical specialty. Combined with developments in the understanding of general physiology, by the beginning of the 20th century the study of pathology had begun to split into a number of distinct fields, resulting in the development of a large number of modern specialties within pathology and related disciplines of diagnostic medicine.

Etymology

The term pathology comes from the Ancient Greek roots pathos, meaning "experience" or "suffering", and -logia, "study of". The Latin term is of early sixteenth-century origin and became increasingly popularized after the 1530s.

Details

If you have ever seen a doctor about an illness, chances are you have benefited from the services of a pathologist. However, you probably never saw the pathologist. So, what is pathology, and what does a pathologist do?

Pathology is the study of the causes, nature, and effects of disease. You may be wondering: what is a pathology doctor called? A pathology doctor is called a pathologist. Both pathology and pathologist come from the Greek word pathos, meaning suffering. To answer the question: what's a pathologist? A pathologist is a medical doctor with additional training in laboratory techniques used to study disease. Pathologists may work in a lab alongside scientists with special medical training. Pathologists study tissues and other materials taken from the body. They analyze these items to diagnose illness, monitor ongoing medical conditions, and help guide treatment.

A pathologist is a vital part of any patient’s care team. And yet, the pathologist may be largely invisible to the patient. That’s because much of a pathologist’s work is conducted in the lab. There, a pathologist draws on medical knowledge and a detective’s passion for mystery to put together the picture of an illness.

What Is Pathology?

A pathologist studies fluids, tissues, or organs taken from the body. Pathologists often work with a surgically removed sample of diseased tissue, called a biopsy. The pathological examination of an entire body is an autopsy.

Pathologists are often involved in the diagnosis of illness. A pathologist may examine a sample of tissue for a virus, bacteria, or other infectious agents. The vast majority of cancer diagnoses are made by, or in conjunction with, a pathologist.

Pathologists may also help guide the course of treatment. For example, a pathologist may analyze blood samples, helping to monitor and track the progression of a bloodborne illness.

Modern pathologists have more than microscopes at their disposal. They may use genetic studies and gene markers to diagnose a hereditary condition.

Much of a pathologist’s work culminates in the form of a pathology report. In such a report, the pathologist details the analysis of samples sent to the lab by a doctor or other professional, meticulously laying out their findings.

How to Become a Pathologist?

Because pathologists often play a behind-the-scenes role, even medical students may find themselves wondering exactly what pathology is and how to go about becoming a pathologist. A pathologist's education begins with a four-year undergraduate degree. The next step is four years of education at a quality medical school, such as the American University of the Caribbean School of Medicine (AUC).

There is no such thing as a pathology degree. Rather, the aspiring pathologist generally must undertake a residency. During residency, future pathologists study and practice pathology under the training of experts in the field.

Pathology is sometimes divided into anatomical pathology and clinical pathology. Anatomical pathology involves the analysis of body organs and tissues. Clinical pathology involves the analysis of body fluids, such as blood and urine.

A doctor may be able to complete a residency in either anatomical pathology or clinical pathology in three years. Combined anatomical and clinical residencies may take four years or more. The final step to becoming a pathologist is passing a board certification exam.

Meet a Pathologist

Constantine "Aki" Kanakis, MD, a 2020 graduate of the American University of the Caribbean School of Medicine (AUC), is a resident physician at Loyola University Medical Center Department of Pathology and Laboratory Medicine. We asked Dr. Kanakis to describe the role of a pathologist.

Q: Why did you decide to go into your specialty?

A: So many reasons. After seeing first-hand how patients go through the process of being managed and treated for a variety of illnesses, nothing I've found comes as close as pathology to the root of the medical philosophy. It helps that I worked for a decade as a medical laboratory scientist in clinical diagnostics. Those years cemented my interest in the field that treats all the patients in a hospital or community.

Nearly 70% of information in a patient's chart comes from laboratory-derived diagnostic data and virtually 99% of cancer diagnoses are signed out by pathologists. With as many subspecialties and fellowships as there are for all of clinical medicine, your career path is as broad as you like. And, when it comes to work-life balance, it can't be beat!

Pathology focuses on the most accurate and expedient final diagnosis of illness in order to make sure that every patient receives the best and most prompt care. And, while we work primarily with our patient-facing colleagues to coordinate a strong interdisciplinary approach to medicine, some of us see patients, too. For example, a pathologist trained in transfusion medicine sees patients who need apheresis therapy, or have transfusion-related complications, or even cellular therapy.

We are an integral part of intraoperative cancer spot diagnoses that determine long-term implications and treatment, invaluable prognostic and diagnostic information comes from our tireless research, and we're the ones to turn to when medicine becomes a mystery.

A long time ago pathologists were called the "doctor's doctor," but I like to think we're an inseparable part of practicing medicine. Who do you think makes and validates COVID tests out of thin air? We do way more than just autopsies (even though those are important for their medical and public health implications).

Q: Any advice to medical students considering the specialty?

A: This is an invisible specialty! Please find exposure to the field, do a rotation, meet your friendly neighborhood pathologists, and engage in something that could change your future. There are plenty of stereotypes about every specialty, and ours is often brushed aside (or kept in the basement). But, I assure you, we would love to work with anyone who has a passion for engaging patient care at this level. Read some of my writing on Lablogatory, do your own in-person elective, and learn more about pathology and laboratory medicine.

Q: What’s the most rewarding part of your job?

A: First, those of us in pathology and laboratory medicine pride ourselves on not managing 1 or 10 patients at a time, but a whole demographic of patients. You rarely practice pathology alone, and when you're part of a thriving department, you care for a whole region or community of patients together. You've got a front-row seat to trends in cancer, infections, and any disease that needs attention. Lab data is public health data.

Second, pathology is full of fulfilling experiences. Every single microscopic slide or laboratory test represents a living, breathing patient waiting to discuss the results of your interpretation with their PCP or specialist, results that have downstream effects on their care plans. If you're a surgical pathologist in a regional hospital, you might be paged to come in for a frozen section intraoperative consultation on a suspicious brain mass and rule out or diagnose cancer then and there. If you're a hematopathologist, you might be sent a critical consultation on a bizarre lymphoma by one of your heme/onc colleagues. Your call and your diagnosis can change the course of an entire treatment plan; that's something to take very seriously, and it can be very rewarding.

Subspecialties Within Pathology

Pathology includes a myriad of subspecialties. In fact, it may seem as if nearly every specialty in medicine has a counterpart subspecialty within pathology. Dermatologists depend on dermatopathologists to diagnose skin disease; neurologists rely on the expertise of neuropathologists; and so on. Here are a few of the more popular subspecialties within pathology:

* Cytopathology, sometimes called cellular pathology, involves the study of changes in cells. Cytopathologists are instrumental in the diagnosis of cancer.
* Hematology is the study of bloodborne disorders and illnesses, such as anemia, leukemia, and hemophilia.
* Forensic pathology is the study of the bodies of people who died suddenly, unexpectedly, or violently.
* Medical microbiology is the study of infectious organisms. Pathologists in this subspecialty may advise doctors and public health officials on how to fight contagious illness.
* Immunology involves the study of the immune system as well as disorders caused by a malfunctioning immune system.
* Molecular genetic pathology involves analyzing a patient’s genes to diagnose chronic conditions.
* Toxicology is the study of poisons and poisoning.

Your Career as a Pathologist

As a specialty, pathology tends to attract critical thinkers and problem solvers. Pathologists tend to be methodical, step-wise thinkers with an eye for recognizing patterns in evidence.

Many doctors spend most of the day seeing patients. Pathologists, on the other hand, generally spend most of their time in the lab. As a result, pathologists tend to enjoy more regular hours and a better work/life balance than doctors in many other specialties.

Many pathologists work in hospital laboratories or in independent labs. Others work in academic institutions or private practice. A typical day for a pathologist might begin with taking in samples for analysis and planning experiments. The middle of the day might involve working with lab equipment to analyze samples and refine results. The afternoon might be spent communicating results to other members of the treatment team.

Several factors are driving the need for more pathologists. The population is both growing and aging, and both trends are increasing the demand for all medical services, including pathology.

Additional Information

Pathology is a medical specialty concerned with determining the causes of disease and the structural and functional changes that occur in abnormal conditions. Early efforts to study pathology were often stymied by religious prohibitions against autopsies, but these gradually relaxed during the late Middle Ages, allowing autopsies to determine the cause of death, which became the basis for pathology. The accumulating anatomical information culminated in the publication of the first systematic textbook of morbid anatomy by the Italian Giovanni Battista Morgagni in 1761, which located diseases within individual organs for the first time. The correlation between clinical symptoms and pathological changes was not made until the first half of the 19th century.

The existing humoral theories of pathology were replaced by a more scientific cellular theory; Rudolf Virchow in 1858 argued that the nature of disease could be understood by means of the microscopic analysis of affected cells. The bacteriologic theory of disease developed late in the 19th century by Louis Pasteur and Robert Koch provided the final clue to understanding many disease processes.

Pathology as a separate specialty was fairly well established by the end of the 19th century. The pathologist does much of his work in the laboratory and reports to and consults with the clinical physician who directly attends to the patient. The types of laboratory specimens examined by the pathologist include surgically removed body parts, blood and other body fluids, urine, feces, exudates, etc. Pathology practice also includes the reconstruction of the last chapter of the physical life of a deceased person through the procedure of autopsy, which provides valuable and otherwise unobtainable information concerning disease processes. The knowledge required for the proper general practice of pathology is too great to be attainable by single individuals, so wherever conditions permit it, subspecialists collaborate. Among the laboratory subspecialties in which pathologists work are neuropathology, pediatric pathology, general surgical pathology, dermatopathology, and forensic pathology.

Microbial cultures for the identification of infectious disease, simpler access to internal organs for biopsy through the use of glass fibre-optic instruments, finer definition of subcellular structures with the electron microscope, and a wide array of chemical stains have greatly expanded the information available to the pathologist in determining the causes of disease. Formal medical education with the attainment of an M.D. degree or its equivalent is required prior to admission to pathology postgraduate programs in many Western countries. The program required for board certification as a pathologist roughly amounts to five years of postgraduate study and training.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

