Math Is Fun Forum



#1676 2023-02-23 20:06:50

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,078

Re: Miscellany

1579) International Development Association

Summary

The International Development Association (IDA) (French: Association internationale de développement) is an international financial institution which offers concessional loans and grants to the world's poorest developing countries. The IDA is a member of the World Bank Group and is headquartered in Washington, D.C., in the United States. It was established in 1960 to complement the existing International Bank for Reconstruction and Development by lending to developing countries with the lowest gross national income, troubled creditworthiness, or the lowest per capita income. Together, the International Development Association and the International Bank for Reconstruction and Development are collectively known as the World Bank, as they follow the same executive leadership and operate with the same staff.

The association shares the World Bank's mission of reducing poverty and aims to provide affordable development financing to countries whose credit risk is so prohibitive that they cannot afford to borrow commercially or from the Bank's other programs. The IDA's stated aim is to assist the poorest nations in growing more quickly, equitably, and sustainably to reduce poverty. The IDA is the single largest provider of funds to economic and human development projects in the world's poorest nations. From 2000 to 2010, it financed projects which recruited and trained 3 million teachers, immunized 310 million children, funded $792 million in loans to 120,000 small and medium enterprises, built or restored 118,000 kilometers of paved roads, built or restored 1,600 bridges, and expanded access to improved water to 113 million people and improved sanitation facilities to 5.8 million people. The IDA has issued a total US$238 billion in loans and grants since its launch in 1960. Thirty-six of the association's borrowing countries have graduated from their eligibility for its concessional lending. However, nine of these countries have relapsed and have not re-graduated.

Details

What is IDA?

The World Bank’s International Development Association (IDA) is one of the largest and most effective platforms for fighting extreme poverty in the world’s lowest income countries. 

* IDA works in 74 countries in Africa, East Asia & Pacific, South Asia, Europe & Central Asia, Latin America & Caribbean, and Middle East & North Africa. 

* IDA aims to reduce poverty by providing financing and policy advice for programs that boost economic growth, build resilience, and improve the lives of poor people around the world. 

* More than half of active IDA countries already receive all, or half, of their IDA resources on grant terms, which carry no repayments at all. Grants are targeted to low-income countries at higher risk of debt distress. 

* Over the past 62 years, IDA has provided about $458 billion for investments in 114 countries. IDA also has a strong track record in supporting countries through multiple crises. 

How is IDA funded?

* IDA partners and representatives from borrower countries come together every three years to replenish IDA funds and review IDA’s policies. The replenishment consists of contributions from IDA donors, the World Bank, and financing raised from the capital markets.   

* Since its founding in 1960, IDA has had 20 replenishment cycles. The current 20th cycle, known as IDA20, was replenished in December 2021. It took place one year earlier than scheduled to meet the unprecedented need brought about by the COVID-19 pandemic in developing countries. 

* The $93 billion IDA20 package was made possible by donor contributions from 52 high- and middle-income countries totaling $23.5 billion, with additional financing raised in the capital markets, repayments, and the World Bank’s own contributions.

What does the IDA20 program include?

* IDA is a multi-issue institution and supports a range of development activities that pave the way toward equality, economic growth, job creation, higher incomes, and better living conditions.

* To help IDA countries address multiple crises and restore their trajectories to the 2030 development agenda, IDA20 will focus on Building Back Better from the Crisis: Towards a Green, Resilient and Inclusive Future. The IDA20 cycle runs from July 1, 2022, to June 30, 2025.

IDA20 will be supported by five special themes:

* Human Capital: Address the current crises and lay the foundations for an inclusive recovery. This theme will continue to help countries manage the pandemic through the deployment of vaccination programs, scaling up safety nets, and building strong and pandemic-ready health systems. 

* Climate: Raise the ambition to build back better and greener; scale up investments in renewable energy, resilience, and mitigation, while tackling issues like nature and biodiversity. 

* Gender: Scale up efforts to close social and economic gaps between women and men, boys and girls. It will address issues like economic inclusion, gender-based violence, childcare, and reinforcing women’s land rights. 

* Fragility, Conflict, and Violence (FCV): Address drivers of FCV, support policy reforms for refugees, and scale up regional initiatives in the Sahel, Lake Chad, and the Horn of Africa. 

* Jobs and Economic Transformation: Enable better jobs for more people through a green, resilient, and inclusive recovery. IDA20 will continue to address macroeconomic instability, support reforms and public investments, and focus on quality infrastructure, renewable energy, and inclusive urban development. 

IDA20 will deepen recovery efforts by focusing on four cross-cutting issues:

* Crisis Preparedness: Strengthen national systems that can be adapted quickly, and invest in shock preparedness to increase country readiness (e.g., shock-responsive safety nets).

* Debt Sustainability and Transparency: The Sustainable Development Finance Policy will continue to be key to supporting countries on debt sustainability, transparency, and management.   

* Governance and Institutions: Strengthen public institutions to create a conducive environment for a sustainable recovery. Reinforce domestic resource mobilization and digital development, and combat illicit financial flows. 

* Technology: Speed up digital transformation with a focus on digital infrastructure, skills, financial services, and businesses. Also address the risks of digital exclusion and support the creation of reliable, cyber-secure data systems. 

As part of the package, IDA20 expects to deliver the following results:

* essential health, nutrition, and population services for up to 430 million people,
* immunizations for up to 200 million children,
* social safety net programs to up to 375 million people,
* new/improved electricity service to up to 50 million people,
* access to clean cooking for 20 million people, and
* improved access to water sources to up to 20 million people.

What are some recent examples of IDA’s work?

Here are some recent examples of how IDA is empowering countries towards a resilient recovery:

* Across Benin, Malawi, and Côte d'Ivoire in Africa, and across South Asia, IDA is working with partners like the World Health Organization to finance vaccine purchase and deployment, and to address hesitancy.

* In West Africa, an Ebola-era disease surveillance project has prepared countries to face the COVID-19 health crisis.

* In Tonga, IDA's emergency funding allowed the government to respond to the volcanic eruption and tsunami of January 2022. The project strengthened Tonga's resilience to natural and climate-related risks by facilitating a significant reform of disaster risk management legislation. 

* In the Sahel, where climate change is compounding the impacts of COVID-19, IDA is setting up monitoring initiatives, strengthening existing early warning systems, and providing targeted responses to support the agro-pastoral sectors.

* In Yemen, IDA provides critical solar-powered facilities, including water, educational services, and health care, to more than 3.2 million people, 51 percent of them female.

* In Bangladesh, IDA helped restart the immunization of children under 12 months after the lockdown due to the COVID-19 outbreak in 2020. It enabled the country to vaccinate 28,585 children in 2020 and 25,864 in 2021 in the camps for the displaced Rohingya people. 

* Across Burkina Faso, Chad, Mali, and Niger, IDA's response helped two million people meet basic food needs through cash transfers and food vouchers. Some 30,000 vulnerable farmers received digital coupons to access seeds and fertilizers, and 73,500 people, of whom 32,500 were women, were provided temporary jobs through land restoration activities.

Additional Information

International Development Association (IDA) is a United Nations specialized agency affiliated with but legally and financially distinct from the International Bank for Reconstruction and Development (World Bank). It was instituted in September 1960 to make loans on more flexible terms than those of the World Bank. IDA members must be members of the bank, and the bank’s officers serve as IDA’s ex officio officers. Headquarters are in Washington, D.C.

Most of the IDA’s resources have come from the subscriptions and supplementary contributions of member countries, chiefly the 26 wealthiest. Although the wealthier members pay their subscriptions in gold or freely convertible currencies, the less developed nations may pay 10 percent in this form and the remainder in their own currencies.

The International Development Association (IDA) is one of the world's largest and most effective platforms for tackling extreme poverty. IDA is part of the World Bank; it helps 74 of the world's poorest countries and is their main source of funding for social services. IDA aims to reduce poverty by providing grants, interest-free loans, and policy advice for programs that promote growth, increase resilience, and improve the lives of the world's poor. Over the past 60 years, IDA has invested approximately $422 billion in 114 countries. Representatives of IDA partners and borrower countries meet every three years to replenish IDA resources and review IDA policies. The replenishment consists of contributions from IDA donors, contributions from the World Bank, and funds raised in the capital markets. Since its inception in 1960, IDA has had 19 regular replenishments. The IDA19 round closed at $82 billion in December 2019, of which $23.5 billion came from IDA donors. Pressed by the COVID-19 crisis, the World Bank allocated nearly half of IDA19's resources to meet financing needs in its first year (July 2020 – July 2021). In February 2021, representatives of IDA donors and borrower countries agreed to advance the IDA20 cycle by one year, shortening the IDA19 cycle to two years. IDA19 now covers July 2020 – June 2022, and IDA20 covers July 2022 – June 2025.

[image: ida.jpg]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1677 2023-02-24 19:25:41

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,078

Re: Miscellany

1580) Isosurface

Gist

An isosurface is a three-dimensional analog of an isoline. It is a surface that represents points of a constant value (e.g. pressure, temperature, velocity, density) within a volume of space; in other words, it is a level set of a continuous function whose domain is 3-space.
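In symbols (a standard level-set formulation, added here for clarity): given a scalar field f on 3-space and an isovalue c, the isosurface is the set

S(c) = { (x, y, z) ∈ ℝ³ : f(x, y, z) = c }.

For example, f(x, y, z) = x² + y² + z² with c = 1 gives the unit sphere.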

Details


The term isosurface is also sometimes used for domains of more than 3 dimensions.

[Figure: Isosurface of vorticity trailed from a propeller blade, plotted with a colormapped slice.]

Applications

Isosurfaces are normally displayed using computer graphics, and are used as data visualization methods in computational fluid dynamics (CFD), allowing engineers to study features of a fluid flow (gas or liquid) around objects, such as aircraft wings. An isosurface may represent an individual shock wave in supersonic flight, or several isosurfaces may be generated showing a sequence of pressure values in the air flowing around a wing. Isosurfaces tend to be a popular form of visualization for volume datasets since they can be rendered by a simple polygonal model, which can be drawn on the screen very quickly.

In medical imaging, isosurfaces may be used to represent regions of a particular density in a three-dimensional CT scan, allowing the visualization of internal organs, bones, or other structures.

Numerous other disciplines that work with three-dimensional data use isosurfaces to visualize and analyze it, including pharmacology, chemistry, geophysics, and meteorology.

Implementation algorithms:

Marching cubes

The marching cubes algorithm was first published in the 1987 SIGGRAPH proceedings by Lorensen and Cline. It creates a surface by intersecting the edges of a data volume grid with the volume contour: where the surface intersects an edge, the algorithm creates a vertex. By using a table of different triangles indexed by the pattern of edge intersections, the algorithm can create a surface. This algorithm has implementations both on the CPU and on the GPU.
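A minimal sketch of extracting such a surface in practice, assuming NumPy and scikit-image are available (the sphere field below is purely illustrative):

import numpy as np
from skimage import measure  # pip install scikit-image

# Sample the scalar field f(x, y, z) = x^2 + y^2 + z^2 - 1 on a regular grid;
# its zero-level isosurface is the unit sphere.
axis = np.linspace(-1.5, 1.5, 64)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
volume = x**2 + y**2 + z**2 - 1.0

# Marching cubes places a vertex wherever a grid edge crosses the level set
# and looks up a triangle pattern for each cell's sign configuration.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.0)
print(verts.shape, faces.shape)  # (N, 3) vertex coordinates, (M, 3) triangle indices

The resulting triangle mesh can be handed directly to any polygon renderer, which is why this approach is fast in practice.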

Asymptotic decider

The asymptotic decider algorithm was developed as an extension to marching cubes in order to resolve the possibility of ambiguity in it.

Marching tetrahedra

The marching tetrahedra algorithm was developed as an extension to marching cubes, in order to solve an ambiguity in that algorithm and to create a higher-quality output surface.

Surface nets

The Surface Nets algorithm places an intersecting vertex in the middle of a volume voxel instead of at the edges, leading to a smoother output surface.

Dual contouring

The dual contouring algorithm was first published in the 2002 SIGGRAPH proceedings by Ju, Losasso, Schaefer, and Warren, and was developed as an extension to both surface nets and marching cubes. It retains a dual vertex within the voxel, but no longer at the center. Dual contouring leverages the position and normal of where the surface crosses the edges of a voxel to interpolate the position of the dual vertex within the voxel. This has the benefit of retaining sharp or smooth surfaces where surface nets often look blocky or incorrectly beveled. Dual contouring often uses octree-based surface generation as an optimization to adapt the number of triangles in the output to the complexity of the surface.
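The vertex placement at the heart of dual contouring is a small least-squares problem (the quadratic error function, or QEF). Here is a hedged sketch in Python; the helper name and the data are illustrative, and real implementations also clamp the solution to the voxel and regularize the system:

import numpy as np

def dual_vertex(crossings, normals):
    # Minimize sum_i (n_i . (v - p_i))^2 over v, where p_i are the points at
    # which the surface crosses the voxel's edges and n_i are the surface
    # normals there. Rows of A are the n_i; b_i = n_i . p_i.
    A = np.asarray(normals, dtype=float)
    b = np.einsum("ij,ij->i", A, np.asarray(crossings, dtype=float))
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v

# Three axis-aligned crossings around a cube corner recover the sharp corner
# exactly, which a vertex fixed at the voxel center (surface nets) cannot do:
p = [(1.0, 0.5, 0.5), (0.5, 1.0, 0.5), (0.5, 0.5, 1.0)]
n = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
print(dual_vertex(p, n))  # -> [1. 1. 1.]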

Manifold dual contouring

Manifold dual contouring includes an analysis of the octree neighborhood to maintain continuity of the manifold surface.

Examples:

Examples of isosurfaces are 'Metaballs' or 'blobby objects' used in 3D visualisation. A more general way to construct an isosurface is to use the function representation.

[image: Part7_SlicesIsosurfaces_03.png]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1678 2023-02-25 01:58:50

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,078

Re: Miscellany

1581) Ventriloquism

Summary

Ventriloquism is the art of “throwing” the voice, i.e., speaking in such a manner that the sound seems to come from a distance or from a source other than the speaker. At the same time, the voice is disguised (partly by its heightened pitch), adding to the effect. The art of ventriloquism was formerly supposed to result from a peculiar use of the stomach during inhalation—hence the name, from Latin venter and loqui, “belly-speaking.” In fact, the words are formed in the normal manner, but the breath is allowed to escape slowly, the tones being muffled by narrowing the glottis and the mouth being opened as little as possible, while the tongue is retracted and only its tip moves. This pressure on the vocal cords diffuses the sound; the greater the pressure, the greater the illusion of distance.

A figure, or dummy, is commonly used by the ventriloquist to assist in the deception. The ventriloquist animates the dummy by moving its mouth while his own lips remain still, thereby completing the illusion that the voice is the dummy’s, not his. When not using a dummy, the ventriloquist employs pantomime to direct the attention of his listeners to the location or object from which the sound presumably emanates.

Ventriloquism is of ancient origin. Traces of the art are found in Egyptian and Hebrew archaeology. Eurycles of Athens was the most celebrated of Greek ventriloquists, who were called, after him, eurycleides, as well as engastrimanteis (“belly prophets”). Many peoples are adepts in ventriloquism—e.g., Zulus, Maoris, and Eskimo. The first known ventriloquist as such was Louis Brabant, valet to the French king Francis I in the 16th century. Henry King, called the King’s Whisperer, had the same function for the English king Charles I in the first half of the 17th century. The technique was perfected in the 18th century. It is also well known in India and China. In Europe and the United States, ventriloquism holds a place in popular entertainment. Notable ventriloquists have included Edgar Bergen in the United States and Robert Lamouret in France.

Details

Ventriloquism, or ventriloquy, is a performance act of stagecraft in which a person (a ventriloquist) creates the illusion that their voice is coming from elsewhere, usually a puppeteered prop known as a "dummy". The act of ventriloquism is ventriloquizing, and the ability to do so is commonly called in English the ability to "throw" one's voice.

History:

Origins

Originally, ventriloquism was a religious practice. The name comes from the Latin for 'to speak from the stomach': venter (belly) and loqui (speak). The Greeks called this gastromancy. The noises produced by the stomach were thought to be the voices of the unliving, who took up residence in the stomach of the ventriloquist. The ventriloquist would then interpret the sounds, as they were thought to be able to speak to the dead, as well as foretell the future. One of the earliest recorded groups of prophets to use this technique was the Pythia, the priestess at the temple of Apollo in Delphi, who acted as the conduit for the Delphic Oracle.

One of the most successful early gastromancers was Eurykles, a prophet at Athens; gastromancers came to be referred to as Euryklides in his honour. Other parts of the world also have a tradition of ventriloquism for ritual or religious purposes; historically there have been adepts of this practice among the Zulu, Inuit, and Māori peoples.

Emergence as entertainment

[Figure: Sadler's Wells Theatre in the early 19th century, at a time when ventriloquist acts were becoming increasingly popular.]

The shift from ventriloquism as a manifestation of spiritual forces toward ventriloquism as entertainment happened in the eighteenth century at travelling funfairs and market towns. An early depiction of a ventriloquist dates to 1754 in England, where Sir John Parnell is depicted in the painting An Election Entertainment by William Hogarth as speaking via his hand. In 1757, the Austrian Baron de Mengen performed with a small doll.

By the late 18th century, ventriloquist performances were an established form of entertainment in England, although most performers "threw their voice" to make it appear that it emanated from far away (known as distant ventriloquism), rather than using the modern method of a puppet (near ventriloquism). A well-known ventriloquist of the period, Joseph Askins, who performed at the Sadler's Wells Theatre in London in the 1790s, advertised his act as "curious ad libitum Dialogues between himself and his invisible familiar, Little Tommy". However, other performers were beginning to incorporate dolls or puppets into their performance, notably the Irishman James Burne, who "carries in his pocket, an ill-shaped doll, with a broad face, which he exhibits ... as giving utterance to his own childish jargon," and Thomas Garbutt.

The entertainment came of age during the era of the music hall in the United Kingdom and vaudeville in the United States. George Sutton began to incorporate a puppet act into his routine at Nottingham in the 1830s, followed by Fred Neiman later in the century, but it is Fred Russell who is regarded as the father of modern ventriloquism. In 1886, he was offered a professional engagement at the Palace Theatre in London and took up his stage career permanently. His act, based on the cheeky-boy dummy "Coster Joe" that would sit in his lap and 'engage in a dialogue' with him, was highly influential for the entertainment format and was adopted by the next generation of performers. A blue plaque has been placed on a former residence of Russell by the British Heritage Society, reading 'Fred Russell the father of ventriloquism lived here'.

[Figure: Ventriloquist Edgar Bergen and his best-known sidekick, Charlie McCarthy, in the film Stage Door Canteen (1943).]

Fred Russell's successful comedy-team format was applied by the next generation of ventriloquists. It was taken forward by the Briton Arthur Prince with his dummy Sailor Jim, who became one of the highest-paid entertainers on the music hall circuit, and by the Americans The Great Lester, Frank Byron Jr., and Edgar Bergen. Bergen popularized the idea of the comedic ventriloquist. Bergen, together with his favourite figure, Charlie McCarthy, hosted a radio program that was broadcast from 1937 to 1956. It was the #1 program on the nights it aired. Bergen continued performing until his death in 1978, and his popularity inspired many other famous ventriloquists who followed him, including Paul Winchell, Jimmy Nelson, David Strassman, Jeff Dunham, Terry Fator, Ronn Lucas, Wayland Flowers, Shari Lewis, Willie Tyler, Jay Johnson, Nina Conti, Paul Zerdin, and Darci Lynne. Another ventriloquist act popular in the United States in the 1950s and 1960s was Señor Wences.

The art of ventriloquism was popularized in South India by Y. K. Padhye and M. M. Roy, who are believed to be the pioneers of this field in India. Y. K. Padhye's son Ramdas Padhye learned the art from him and made it popular amongst the masses through his performances on television. Ramdas Padhye's name is synonymous with puppet characters like Ardhavatrao (also known as Mr. Crazy), Tatya Vinchu, and Bunny the Funny, which features in a television advertisement for Lijjat Papad, an Indian snack. Ramdas Padhye's son Satyajit Padhye is also a ventriloquist.

The popularity of ventriloquism fluctuates. In the UK in 2010, there were only 15 full-time professional ventriloquists, down from around 400 in the 1950s and '60s. A number of modern ventriloquists have developed a following as the public taste for live comedy grows. In 2007, Zillah & Totte won the first season of Sweden's Got Talent and became one of Sweden's most popular family/children entertainers. A feature-length documentary about ventriloquism, I'm No Dummy, was released in 2010. Three ventriloquists have won America's Got Talent: Terry Fator in 2007, Paul Zerdin in 2015 and Darci Lynne in 2017.

Vocal technique

One difficulty ventriloquists face is that all the sounds that they make must be made with lips slightly separated. For the labial sounds f, v, b, p, and m, the only choice is to replace them with others. A widely parodied example of this difficulty is the "gottle o' gear", from the reputed inability of less skilled practitioners to pronounce "bottle of beer". If variations of the sounds th, d, t, and n are spoken quickly, it can be difficult for listeners to notice a difference.

Ventriloquist's dummy

Modern ventriloquists use multiple types of puppets in their presentations, ranging from soft cloth or foam puppets (Verna Finly's work is a pioneering example), flexible latex puppets (such as Steve Axtell's creations) and the traditional and familiar hard-headed knee figure (Tim Selberg's mechanized carvings). The classic dummies used by ventriloquists (the technical name for which is ventriloquial figure) vary in size anywhere from twelve inches tall to human-size and larger, with the height usually 34–42 in (86–107 cm). Traditionally, this type of puppet has been made from papier-mâché or wood. In modern times, other materials are often employed, including fiberglass-reinforced resins, urethanes, filled (rigid) latex, and neoprene.

Great names in the history of dummy making include Jeff Dunham, Frank Marshall (the Chicago creator of Bergen's Charlie McCarthy, Nelson's Danny O'Day, and Winchell's Jerry Mahoney), Theo Mack and Son (Mack carved Charlie McCarthy's head), Revello Petee, Kenneth Spencer, Cecil Gough, and Glen & George McElroy. The McElroy brothers' figures are still considered by many ventriloquists as the apex of complex movement mechanics, with as many as fifteen facial and head movements controlled by interior finger keys and switches. Jeff Dunham referred to his McElroy figure Skinny Duggan as "the Stradivarius of dummies." The Juro Novelty Company also manufactured dummies.

Phobia

The plots of some films and television programs are based on "killer toy" dummies that are alive and horrific. These include "The Dummy", a May 4, 1962 episode of The Twilight Zone; Devil Doll; Dead Silence; Zapatlela; Buffy The Vampire Slayer; Goosebumps; Tales from the Crypt; Gotham (the episode "Nothing's Shocking"); Friday the 13th: The Series; Toy Story 4; and Doctor Who in different episodes. This genre has also been satirized on television in ALF (the episode "I'm Your Puppet"); Seinfeld (the episode "The Chicken Roaster"); and the comic strip Monty.

Some psychological horror films have plots based on psychotic ventriloquists who believe their dummies are alive and use them as surrogates to commit frightening acts including murder. Examples of these include the 1978 film Magic and the 1945 anthology film Dead of Night.

Literary examples of frightening ventriloquist dummies include Gerald Kersh's The Horrible Dummy and the story "The Glass Eye" by John Keir Cross. In music, NRBQ's video for their song "Dummy" (2004) features four ventriloquist dummies modelled after the band members who 'lip-sync' the song while wandering around a dark, abandoned house.

[image: winchx7.jpg]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1679 2023-02-26 01:45:22

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,078

Re: Miscellany

1582) Mirage

Summary

Mirage, in optics, is the deceptive appearance of a distant object or objects caused by the bending of light rays (refraction) in layers of air of varying density.

Under certain conditions, such as over a stretch of pavement or desert air heated by intense sunshine, the air rapidly cools with elevation and therefore increases in density and refractive power. Sunlight reflected downward from the upper portion of an object—for example, the top of a camel in the desert—will be directed through the cool air in the normal way. Although the light would not be seen ordinarily because of the angle, it curves upward after it enters the rarefied hot air near the ground, thus being refracted to the observer’s eye as though it originated below the heated surface. A direct image of the camel is seen also because some of the reflected rays enter the eye in a straight line without being refracted. The double image seems to be that of the camel and its upside-down reflection in water. When the sky is the object of the mirage, the land is mistaken for a lake or sheet of water.

Sometimes, as over a body of water, a cool, dense layer of air underlies a heated layer. An opposite phenomenon will then prevail, in which light rays will reach the eye that were originally directed above the line of sight. Thus, an object ordinarily out of view, like a boat below the horizon, will be apparently lifted into the sky. This phenomenon is called looming.

Details

A mirage is a naturally-occurring optical phenomenon in which light rays bend via refraction to produce a displaced image of distant objects or the sky. The word comes to English via the French (se) mirer, from the Latin mirari, meaning "to look at, to wonder at".

Mirages can be categorized as "inferior" (meaning lower), "superior" (meaning higher) and "Fata Morgana", one kind of superior mirage consisting of a series of unusually elaborate, vertically stacked images, which form one rapidly-changing mirage.

In contrast to a hallucination, a mirage is a real optical phenomenon that can be captured on camera, since light rays are actually refracted to form the false image at the observer's location. What the image appears to represent, however, is determined by the interpretive faculties of the human mind. For example, inferior images on land are very easily mistaken for the reflections from a small body of water.

Inferior mirage

In an inferior mirage, the mirage image appears below the real object. The real object in an inferior mirage is the (blue) sky or any distant (therefore bluish) object in that same direction. The mirage causes the observer to see a bright and bluish patch on the ground.

Light rays coming from a particular distant object all travel through nearly the same layers of air, and all are refracted at about the same angle. Therefore, rays coming from the top of the object will arrive lower than those from the bottom. The image is usually upside-down, enhancing the illusion that the sky image seen in the distance is a specular reflection on a puddle of water or oil acting as a mirror.

While the aerodynamics are highly active, the image of an inferior mirage is stable, unlike the Fata Morgana, which can change within seconds. Since warmer air rises while cooler air (being denser) sinks, the layers will mix, causing turbulence. The image will be distorted accordingly; it may vibrate or be stretched vertically (towering) or compressed vertically (stooping). A combination of vibration and extension is also possible. If several temperature layers are present, several mirages may mix, perhaps causing double images. In any case, mirages are usually not larger than about half a degree high (roughly the angular diameter of the Sun and Moon) and come from objects between dozens of meters and a few kilometers away.

Heat haze


[Figure: A hot-road mirage, in which "fake water" appears on the road, is the most commonly observed instance of an inferior mirage.]

Heat haze, also called heat shimmer, refers to the inferior mirage observed when viewing objects through a mass of heated air. Common instances when heat haze occurs include images of objects viewed across asphalt concrete (also known as tarmac) roads and over masonry rooftops on hot days, above and behind fire (as in burning candles, patio heaters, and campfires), and through exhaust gases from jet engines. When appearing on roads due to the hot asphalt, it is often referred to as a "highway mirage". It also occurs in deserts; in that case, it is referred to as a "desert mirage". Both tarmac and sand can become very hot when exposed to the sun, easily being more than 10 °C (18 °F) higher than the air a meter (3.3 feet) above, enough to make conditions suitable to cause the mirage.

Convection causes the temperature of the air to vary, and the variation between the hot air at the surface of the road and the denser cool air above it causes a gradient in the refractive index of the air. This produces a blurred shimmering effect, which hinders the ability to resolve the image and increases when the image is magnified through a telescope or telephoto lens.

Light from the sky at a shallow angle to the road is refracted by the index gradient, making it appear as if the sky is reflected by the road's surface. This might appear as a pool of liquid (usually water, but possibly others, such as oil) on the road, as some types of liquid also reflect the sky. The illusion moves into the distance as the observer approaches the miraged object, giving the same effect as approaching a rainbow.

Heat haze is not related to the atmospheric phenomenon of haze.

Superior mirage

A superior mirage is one in which the mirage image appears to be located above the real object. A superior mirage occurs when the air below the line of sight is colder than the air above it. This unusual arrangement is called a temperature inversion, since warm air above cold air is the opposite of the normal temperature gradient of the atmosphere during the daytime. Passing through the temperature inversion, the light rays are bent down, and so the image appears above the true object, hence the name superior.

Superior mirages are quite common in polar regions, especially over large sheets of ice that have a uniform low temperature. Superior mirages also occur at more moderate latitudes, although in those cases they are weaker and tend to be less smooth and stable. For example, a distant shoreline may appear to tower and look higher (and, thus, perhaps closer) than it really is. Because of the turbulence, there appear to be dancing spikes and towers. This type of mirage is also called the Fata Morgana or hafgerðingar in the Icelandic language.

A superior mirage can be right-side up or upside-down, depending on the distance of the true object and the temperature gradient. Often the image appears as a distorted mixture of up and down parts.

Since Earth is round, if the downward bending curvature of light rays is about the same as the curvature of Earth, light rays can travel large distances, including from beyond the horizon. This was observed and documented in 1596, when a ship in search of the Northeast passage became stuck in the ice at Novaya Zemlya, above the Arctic Circle. The Sun appeared to rise two weeks earlier than expected; the real Sun had still been below the horizon, but its light rays followed the curvature of Earth. This effect is often called a Novaya Zemlya mirage. For every 111.12 kilometres (69.05 mi) that light rays travel parallel to Earth's surface, the Sun will appear 1° higher on the horizon. The inversion layer must have just the right temperature gradient over the whole distance to make this possible.
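As a quick check of the 111.12-kilometre figure (my arithmetic, not the source's): one degree of arc along Earth's surface is 60 minutes of arc, and the nautical mile was defined from one minute of arc, so

1° of arc = 60 × 1.852 km = 111.12 km.

A mean Earth radius of about 6,371 km gives nearly the same value: 2π × 6,371 km / 360 ≈ 111.2 km.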

In the same way, ships that are so far away that they should not be visible above the geometric horizon may appear on or even above the horizon as superior mirages. This may explain some stories about flying ships or coastal cities in the sky, as described by some polar explorers. These are examples of so-called Arctic mirages, or hillingar in Icelandic.

If the vertical temperature gradient is +12.9 °C (23.2 °F) per 100 meters/330 feet (where the positive sign means the temperature increases at higher altitudes) then horizontal light rays will just follow the curvature of Earth, and the horizon will appear flat. If the gradient is less (as it almost always is) the rays are not bent enough and get lost in space, which is the normal situation of a spherical, convex "horizon".

In some situations, distant objects can be elevated or lowered, stretched or shortened with no mirage involved.

Fata Morgana

A Fata Morgana (the name comes from the Italian translation of Morgan le Fay, the fairy, shapeshifting half-sister of King Arthur) is a very complex superior mirage. It appears with alternations of compressed and stretched areas, erect images, and inverted images. A Fata Morgana is also a fast-changing mirage.

Fata Morgana mirages are most common in polar regions, especially over large sheets of ice with a uniform low temperature, but they can be observed almost anywhere. In polar regions, a Fata Morgana may be observed on cold days; in desert areas and over oceans and lakes, a Fata Morgana may be observed on hot days. For a Fata Morgana, temperature inversion has to be strong enough that light rays' curvatures within the inversion are stronger than the curvature of Earth.

The rays will bend and form arcs. An observer needs to be within an atmospheric duct to be able to see a Fata Morgana. Fata Morgana mirages may be observed from any altitude within Earth's atmosphere, including from mountaintops or airplanes.

Distortions of image and bending of light can produce spectacular effects. In his book Pursuit: The Chase and Sinking of the "Bismarck", Ludovic Kennedy describes an incident that allegedly took place below the Denmark Strait during 1941, following the sinking of the Hood. The Bismarck, while pursued by the British cruisers Norfolk and Suffolk, passed out of sight into a sea mist. Within a matter of seconds, the ship re-appeared steaming toward the British ships at high speed. In alarm, the cruisers separated, anticipating an imminent attack, and observers from both ships watched in astonishment as the German battleship fluttered, grew indistinct, and faded away. Radar watch during these events indicated that the Bismarck had in fact made no change to her course.

Night-time mirages

The conditions for producing a mirage can occur at night as well as during the day. Under some circumstances mirages of astronomical objects and mirages of lights from moving vehicles, aircraft, ships, buildings, etc. can be observed at night.

Mirage of astronomical objects

A mirage of an astronomical object is a naturally occurring optical phenomenon in which light rays are bent to produce distorted or multiple images of an astronomical object. Mirages can be observed for such astronomical objects as the Sun, the Moon, the planets, bright stars, and very bright comets. The most commonly observed are sunset and sunrise mirages.

[image: highway-mirageSheryl-R-Garrison-Southern-Alberta-CN-Jul1-2021-e1625241594866.jpg]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1680 2023-02-26 23:24:12

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,078

Re: Miscellany

1583) Sternum

Summary

Sternum, also called breastbone, in the anatomy of tetrapods (four-limbed vertebrates), is an elongated bone in the centre of the chest that articulates with and provides support for the clavicles (collarbones) of the shoulder girdle and for the ribs. Its origin in evolution is unclear. A sternum appears in certain salamanders; it is present in most other tetrapods but lacking in legless lizards, snakes, and turtles (in which the shell provides needed support). In birds an enlarged keel develops, to which flight muscles are attached; the sternum of the bat is also keeled as an adaptation for flight.

In mammals the sternum is divided into three parts, from anterior to posterior: (1) the manubrium, which articulates with the clavicles and first ribs; (2) the mesosternum, often divided into a series of segments, the sternebrae, to which the remaining true ribs are attached; and (3) the posterior segment, called the xiphisternum. In humans the sternum is elongated and flat; it may be felt from the base of the neck to the pit of the abdomen. The manubrium is roughly trapezoidal, with depressions where the clavicles and the first pair of ribs join. The mesosternum, or body, consists of four sternebrae that fuse during childhood or early adulthood. The mesosternum is narrow and long, with articular facets for ribs along its sides. The xiphisternum is reduced to a small, usually cartilaginous xiphoid (“sword-shaped”) process. The sternum ossifies from several centres. The xiphoid process may ossify and fuse to the body in middle age; the joint between manubrium and mesosternum remains open until old age.

Details

The sternum or breastbone is a long flat bone located in the central part of the chest. It connects to the ribs via cartilage and forms the front of the rib cage, thus helping to protect the heart, lungs, and major blood vessels from injury. Shaped roughly like a necktie, it is one of the largest and longest flat bones of the body. Its three regions are the manubrium, the body, and the xiphoid process. The word sternum originates from the Ancient Greek στέρνον (sternon), meaning 'chest'.

Structure

The sternum is a narrow, flat bone, forming the middle portion of the front of the chest. The top of the sternum supports the clavicles (collarbones) and its edges join with the costal cartilages of the first two pairs of ribs. The inner surface of the sternum is also the attachment of the sternopericardial ligaments. Its top is also connected to the sternocleidomastoid muscle. The sternum consists of three main parts, listed from the top:

* Manubrium
* Body (gladiolus)
* Xiphoid process

In its natural position, the sternum is angled obliquely, downward and forward. It is slightly convex in front and concave behind; broad above, shaped like a "T", becoming narrowed at the point where the manubrium joins the body, after which it again widens a little to below the middle of the body, and then narrows to its lower extremity. In adults the sternum is on average about 1.7 cm longer in the male than in the female.

Manubrium

The manubrium (Latin for 'handle') is the broad upper part of the sternum. It has a quadrangular shape, narrowing from the top, which gives it four borders. The suprasternal notch (jugular notch) is located in the middle at the upper broadest part of the manubrium. This notch can be felt between the two clavicles. On either side of this notch are the right and left clavicular notches.

The manubrium joins with the body of the sternum, the clavicles and the cartilages of the first pair of ribs. The inferior border, oval and rough, is covered with a thin layer of cartilage for articulation with the body. The lateral borders are each marked above by a depression for the first costal cartilage, and below by a small facet, which, with a similar facet on the upper angle of the body, forms a notch for the reception of the costal cartilage of the second rib. Between the depression for the first costal cartilage and the demi-facet for the second is a narrow, curved edge, which slopes from above downward towards the middle. Also, the superior sternopericardial ligament attaches the pericardium to the posterior side of the manubrium.

Body

The body, or gladiolus, is the longest sternal part. It is flat and considered to have only a front and back surface. It is flat on the front, directed upward and forward, and marked by three transverse ridges which cross the bone opposite the third, fourth, and fifth articular depressions. The pectoralis major attaches to it on either side. At the junction of the third and fourth parts of the body is occasionally seen an orifice, the sternal foramen, of varying size and form. The posterior surface, slightly concave, is also marked by three transverse lines, less distinct, however, than those in front; from its lower part, on either side, the transversus thoracis takes origin.

The sternal angle is located at the point where the body joins the manubrium. The sternal angle can be felt at the point where the sternum projects farthest forward. However, in some people the sternal angle is concave or rounded. During physical examinations, the sternal angle is a useful landmark because the second rib attaches here.

Each outer border, at its superior angle, has a small facet, which, with a similar facet on the manubrium, forms a cavity for the cartilage of the second rib; below this are four angular depressions which receive the cartilages of the third, fourth, fifth, and sixth ribs. The inferior angle has a small facet, which, with a corresponding one on the xiphoid process, forms a notch for the cartilage of the seventh rib. These articular depressions are separated by a series of curved interarticular intervals, which diminish in length from above downward, and correspond to the intercostal spaces. Most of the cartilages belonging to the true ribs articulate with the sternum at the lines of junction of its primitive component segments. This is well seen in some other vertebrates, where the parts of the bone remain separated for longer.

The upper border is oval and articulates with the manubrium, at the sternal angle. The lower border is narrow, and articulates with the xiphoid process.

Xiphoid process

Located at the inferior end of the sternum is the pointed xiphoid process. Improperly performed chest compressions during cardiopulmonary resuscitation can cause the xiphoid process to snap off, driving it into the liver which can cause a fatal hemorrhage.

The sternum is composed of highly vascular tissue, covered by a thin layer of compact bone which is thickest in the manubrium between the articular facets for the clavicles. The inferior sternopericardial ligament attaches the pericardium to the posterior xiphoid process.

Joints

The cartilages of the top five ribs join with the sternum at the sternocostal joints. The right and left clavicular notches articulate with the right and left clavicles, respectively. The costal cartilage of the second rib articulates with the sternum at the sternal angle making it easy to locate.

The transversus thoracis muscle is innervated by one of the intercostal nerves and superiorly attaches at the posterior surface of the lower sternum. Its inferior attachment is the internal surface of costal cartilages two through six, and the muscle works to depress the ribs.

Development

The sternum develops from two cartilaginous bars, one on the left and one on the right, connected with the cartilages of the ribs on each side. These two bars fuse together along the middle to form the cartilaginous sternum, which is ossified from six centers: one for the manubrium, four for the body, and one for the xiphoid process.

The ossification centers appear in the intervals between the articular depressions for the costal cartilages, in the following order: in the manubrium and first piece of the body, during the sixth month of fetal life; in the second and third pieces of the body, during the seventh month of fetal life; in its fourth piece, during the first year after birth; and in the xiphoid process, between the fifth and eighteenth years.

The centers make their appearance at the upper parts of the segments, and proceed gradually downward. To these may be added the occasional existence of two small episternal centers, which make their appearance one on either side of the jugular notch; they are probably vestiges of the episternal bone of the monotremata and lizards.

Occasionally some of the segments are formed from more than one center, the number and position of which vary.  Thus, the first piece may have two, three, or even six centers.

When two are present, they are generally situated one above the other, the upper being the larger; the second piece has seldom more than one; the third, fourth, and fifth pieces are often formed from two centers placed laterally, the irregular union of which explains the rare occurrence of the sternal foramen, or of the vertical fissure which occasionally intersects this part of the bone constituting the malformation known as fissura sterni; these conditions are further explained by the manner in which the cartilaginous sternum is formed.

More rarely still the upper end of the sternum may be divided by a fissure. Union of the various centers of the body begins about puberty, and proceeds from below upward; by the age of 25 they are all united.

The xiphoid process may become joined to the body before the age of thirty, but this occurs more frequently after forty; on the other hand, it sometimes remains ununited in old age. In advanced life the manubrium is occasionally joined to the body by bone. When this takes place, however, the bony tissue is generally only superficial, the central portion of the intervening cartilage remaining unossified.

In early life, the sternum's body is divided into four segments, not three, called sternebrae (singular: sternebra).

Variations

In 2.5–13.5% of the population, a foramen known as the sternal foramen may be present at the lower third of the sternal body. In extremely rare cases, multiple foramina may be observed. Fusion of the manubriosternal joint also occurs in around 5% of the population. Small ossicles known as episternal ossicles may also be present posterior to the superior end of the manubrium. Another variant, called the suprasternal tubercle, is formed when the episternal ossicles fuse with the manubrium.

Clinical significance

Because the sternum contains bone marrow, it is sometimes used as a site for bone marrow biopsy. In particular, patients with a high BMI (obese or grossly overweight) may present with excess tissue that makes access to traditional marrow biopsy sites such as the pelvis difficult.

Sternal opening

A somewhat rare congenital disorder of the sternum sometimes referred to as an anatomical variation is a sternal foramen, a single round hole in the sternum that is present from birth and usually is off-centered to the right or left, commonly forming in the 2nd, 3rd, and 4th segments of the breastbone body. Congenital sternal foramina can often be mistaken for bullet holes. They are usually without symptoms but can be problematic if acupuncture in the area is intended.

Fractures

Fractures of the sternum are rather uncommon. They may result from trauma, such as when a driver's chest is forced into the steering column of a car in a car accident. A fracture of the sternum is usually a comminuted fracture. The most common site of sternal fractures is at the sternal angle. Some studies reveal that repeated punches or continual beatings, sometimes called "breastbone punches", to the sternum area have also caused fractured sternums. Those are known to have occurred in contact sports such as hockey and football. Sternal fractures are frequently associated with underlying injuries such as pulmonary contusions, or bruised lung tissue.

Dislocation

A manubriosternal dislocation is rare and usually caused by severe trauma. It may also result from minor trauma where there is a precondition of arthritis.

Sternotomy

The breastbone is sometimes cut open (a median sternotomy) to gain access to the thoracic contents when performing cardiothoracic surgery.

Resection

The sternum can be totally removed (resected) as part of a radical surgery, usually to surgically treat a malignancy, either with or without a mediastinal lymphadenectomy (Current Procedural Terminology codes # 21632 and # 21630, respectively).

Bifid sternum or sternal cleft

A bifid sternum is an extremely rare congenital abnormality caused by a failure of the sternum to fuse. This condition results in a sternal cleft, which can be observed at birth without any symptoms.

Other animals

The sternum, in vertebrate anatomy, is a flat bone that lies in the middle front part of the rib cage. It is endochondral in origin. It probably first evolved in early tetrapods as an extension of the pectoral girdle; it is not found in fish. In amphibians and reptiles it is typically a shield-shaped structure, often composed entirely of cartilage. It is absent in both turtles and snakes. In birds it is a relatively large bone and typically bears an enormous projecting keel to which the flight muscles are attached. Only in mammals does the sternum take on the elongated, segmented form seen in humans.

Arthropods

In arachnids, the sternum is the ventral (lower) portion of the cephalothorax. It consists of a single sclerite situated between the coxae, opposite the carapace.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1681 2023-02-27 16:16:34

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,078

Re: Miscellany

1584) Winter

Summary

Winter is the coldest season of the year, between autumn and spring; the name comes from an old Germanic word that means “time of water” and refers to the rain and snow of winter in middle and high latitudes. In the Northern Hemisphere it is commonly regarded as extending from the winter solstice (year’s shortest day), December 21 or 22, to the vernal equinox (day and night equal in length), March 20 or 21, and in the Southern Hemisphere from June 21 or 22 to September 22 or 23. The low temperatures associated with winter occur only in middle and high latitudes; in equatorial regions, temperatures are almost uniformly high throughout the year. For physical causes of the seasons, see season.

The concept of winter in European languages is associated with the season of dormancy, particularly in relation to crops; some plants die, leaving their seeds, and others merely cease growth until spring. Many animals also become dormant, especially those that hibernate; numerous insects die.

Details

Winter is the coldest season of the year in polar and temperate climates. It occurs after autumn and before spring. The tilt of Earth's axis causes seasons; winter occurs when a hemisphere is oriented away from the Sun. Different cultures define different dates as the start of winter, and some use a definition based on weather.

When it is winter in the Northern Hemisphere, it is summer in the Southern Hemisphere, and vice versa. In many regions, winter brings snow and freezing temperatures. The moment of winter solstice is when the Sun's elevation with respect to the North or South Pole is at its most negative value; that is, the Sun is at its farthest below the horizon as measured from the pole. The day on which this occurs has the shortest day and the longest night, with day length increasing and night length decreasing as the season progresses after the solstice.

The earliest sunset and latest sunrise dates outside the polar regions differ from the date of the winter solstice and depend on latitude. They differ due to the variation in the solar day throughout the year caused by the Earth's elliptical orbit.

Cause

The tilt of the Earth's axis relative to its orbital plane plays a large role in the formation of weather. The Earth is tilted at an angle of 23.44° to the plane of its orbit, causing different latitudes to directly face the Sun as the Earth moves through its orbit. This variation brings about seasons. When it is winter in the Northern Hemisphere, the Southern Hemisphere faces the Sun more directly and thus experiences warmer temperatures than the Northern Hemisphere. Conversely, winter in the Southern Hemisphere occurs when the Northern Hemisphere is tilted more toward the Sun. From the perspective of an observer on the Earth, the winter Sun has a lower maximum altitude in the sky than the summer Sun.

During winter in either hemisphere, the lower altitude of the Sun causes the sunlight to hit the Earth at an oblique angle. Thus a lower amount of solar radiation strikes the Earth per unit of surface area. Furthermore, the light must travel a longer distance through the atmosphere, allowing the atmosphere to dissipate more heat. Compared with these effects, the effect of the changes in the distance of the Earth from the Sun (due to the Earth's elliptical orbit) is negligible.
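A rough sketch of that geometry (my own illustration, not from the source; the function name is invented): at local solar noon, for an observer outside the tropics, the Sun's elevation is 90° minus the difference between the observer's latitude and the solar declination, and the declination swings between ±23.44° over the year.

TILT = 23.44  # Earth's axial tilt, in degrees

def noon_sun_elevation(latitude_deg, declination_deg):
    # Solar elevation above the horizon at local solar noon, in degrees,
    # from simple spherical geometry (valid outside the tropics).
    return 90.0 - abs(latitude_deg - declination_deg)

# Latitude 49 N, as in the Winnipeg/Vancouver comparison below:
print(noon_sun_elevation(49.0, -TILT))  # winter solstice: ~17.6 degrees
print(noon_sun_elevation(49.0, +TILT))  # summer solstice: ~64.4 degrees

# Insolation per unit of horizontal surface scales with sin(elevation):
# sin(17.6°) ≈ 0.30 versus sin(64.4°) ≈ 0.90, roughly a factor of three.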

The manifestation of meteorological winter (freezing temperatures) in the northerly snow-prone latitudes is highly variable depending on elevation, position relative to marine winds, and the amount of precipitation. For instance, within Canada (a country of cold winters), Winnipeg on the Great Plains, a long way from the ocean, has a January high of −11.3 °C (11.7 °F) and a low of −21.4 °C (−6.5 °F).

In comparison, Vancouver on the west coast, with a marine influence from moderating Pacific winds, has a January low of 1.4 °C (34.5 °F), with days well above freezing at 6.9 °C (44.4 °F). Both places are at 49°N latitude, and in the same western half of the continent. A similar but less extreme effect is found in Europe: in spite of their northerly latitude, the British Isles do not have a single non-mountain weather station with a below-freezing mean January temperature.

Additional Information

Winter is one of the four seasons and it is the coldest time of the year. The days are shorter, and the nights are longer. Winter comes after autumn and before spring.

Winter begins at the winter solstice. In the Northern Hemisphere the winter solstice is usually December 21 or December 22. In the Southern Hemisphere the winter solstice is usually June 21 or June 22.

Some animals hibernate during this season. In temperate climates there are no leaves on deciduous trees. People wear warm clothing, and eat food that was grown earlier. Many places have snow in winter, and some people use sleds or skis. Holidays in winter for many countries include Christmas and New Year's Day.

The name comes from an old Germanic word that means "time of water" and refers to the rain and snow of winter in middle and high latitudes.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1682 2023-02-28 16:25:58

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,078

Re: Miscellany

1585) Summer

Summary

Summer is the warmest season of the year, between spring and autumn. In the Northern Hemisphere, it is usually defined as the period between the summer solstice (year’s longest day), June 21 or 22, and the autumnal equinox (day and night equal in length), September 22 or 23; and in the Southern Hemisphere, as the period between December 22 or 23 and March 20 or 21. The temperature contrast between summer and the other seasons exists only in middle and high latitudes; temperatures in the equatorial regions generally vary little from month to month. For physical causes of the seasons, see season.

The concept of summer in European languages is associated with growth and maturity, especially that of cultivated plants, and indeed summer is the season of greatest plant growth in regions with sufficient summer rainfall. Festivals and rites have been used in many cultures to celebrate summer in recognition of its importance in food production.

A period of exceptionally hot weather, often with high humidity, during the summer is called a heat wave. Such an occurrence in the temperate regions of the Northern Hemisphere in the latter part of summer is sometimes called the dog days.

Details

Summer is the hottest of the four temperate seasons, occurring after spring and before autumn. At or centred on the summer solstice, daylight hours are longest and darkness hours are shortest, with day length decreasing as the season progresses after the solstice. The earliest sunrises and latest sunsets also occur near the date of the solstice. The date of the beginning of summer varies according to climate, tradition, and culture. When it is summer in the Northern Hemisphere, it is winter in the Southern Hemisphere, and vice versa.

Timing

From an astronomical view, the equinoxes and solstices would be the middle of the respective seasons, but sometimes astronomical summer is defined as starting at the solstice, the time of maximal insolation, often identified with the 21st day of June or December. By solar reckoning, summer instead starts on May Day and the summer solstice is Midsummer. A variable seasonal lag means that the meteorological centre of the season, which is based on average temperature patterns, occurs several weeks after the time of maximal insolation.

The meteorological convention is to define summer as comprising the months of June, July, and August in the northern hemisphere and the months of December, January, and February in the southern hemisphere. Under meteorological definitions, all seasons are arbitrarily set to start at the beginning of a calendar month and end at the end of a month. This meteorological definition of summer also aligns with the commonly viewed notion of summer as the season with the longest (and warmest) days of the year, in which daylight predominates.
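
The month-based convention above amounts to a simple lookup. Here is a minimal Python sketch of it (the function name and structure are invented for illustration):

    # Map a month number (1-12) to its meteorological season.
    # Dec-Feb is winter in the north; the south is offset by half a year.
    def meteorological_season(month, southern=False):
        seasons = ["winter", "spring", "summer", "autumn"]
        index = (month % 12) // 3       # Dec(12)->0, Jan(1)->0, Mar(3)->1, ...
        if southern:
            index = (index + 2) % 4     # shift by two seasons for the south
        return seasons[index]

    print(meteorological_season(7))                 # summer (July, north)
    print(meteorological_season(7, southern=True))  # winter (July, south)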

The meteorological reckoning of seasons is used in countries including Australia, New Zealand, Austria, Denmark, Russia and Japan. It is also used by many people in the United Kingdom and Canada. In Ireland, the summer months according to the national meteorological service, Met Éireann, are June, July and August. By the Irish calendar, summer begins on 1 May (Beltane) and ends on 31 July (Lughnasadh).

Days continue to lengthen from equinox to solstice and summer days progressively shorten after the solstice, so meteorological summer encompasses the build-up to the longest day and a diminishing thereafter, with summer having many more hours of daylight than spring. Reckoning by hours of daylight alone, summer solstice marks the midpoint, not the beginning, of the seasons. Midsummer takes place over the shortest night of the year, which is the summer solstice, or on a nearby date that varies with tradition.

Where a seasonal lag of half a season or more is common, reckoning based on astronomical markers is shifted half a season. By this method, in North America, summer is the period from the summer solstice (usually 20 or 21 June in the Northern Hemisphere) to the autumn equinox.

Reckoning by cultural festivals, the summer season in the United States is traditionally regarded as beginning on Memorial Day weekend (the last weekend in May) and ending on Labor Day (the first Monday in September), more closely in line with the meteorological definition for the parts of the country that have four-season weather. The similar Canadian tradition starts summer on Victoria Day one week prior (although summer conditions vary widely across Canada's expansive territory) and ends, as in the United States, on Labour Day.

In some Southern Hemisphere countries such as Brazil, Argentina, South Africa, Australia and New Zealand, summer is associated with the Christmas and New Year holidays. Many families take extended holidays for two or three weeks or longer during summer.

In Australia and New Zealand, summer begins on 1 December and ends on 28 February (29 February in leap years).

In Chinese astronomy, summer starts on or around 5 May, with the jiéqì (solar term) known as lìxià, i.e. "establishment of summer", and it ends on or around 6 August.

In southern and southeast Asia, where the monsoon occurs, summer is generally defined as March through June, the warmest time of the year, ending with the onset of the monsoon rains.

Because the temperature lag is shorter in the oceanic temperate southern hemisphere, most countries in this region use the meteorological definition with summer starting on 1 December and ending on the last day of February.

Additional Information

Summer is one of the four seasons. It is the hottest season of the year. In some places, summer is the wettest season (with the most rain), and in other places, it is a dry season. Four seasons are found in areas which are not too hot or too cold. Summer happens to the north and south sides of the Earth at opposite times of the year. In the north part of the world, summer takes place between the months of June and September, and in the south part of the world, it takes place between December and March. This is because when the north part of the Earth points towards the Sun, the south part points away.

Many people in rich countries travel in summer, to seaside resorts, beaches, camps or picnics. In some countries, people celebrate festivals in the summer as well as enjoying cool drinks. A few places, such as high mountains and polar regions, can get snow even in summer.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1683 2023-03-01 02:03:27

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,078

Re: Miscellany

1586) Spring (season)

Summary

Spring, in climatology, is the season of the year between winter and summer during which temperatures gradually rise. It is generally defined in the Northern Hemisphere as extending from the vernal equinox (day and night equal in length), March 20 or 21, to the summer solstice (year’s longest day), June 21 or 22, and in the Southern Hemisphere from September 22 or 23 to December 22 or 23. The spring temperature transition from winter cold to summer heat occurs only in middle and high latitudes; near the Equator, temperatures vary little during the year. Spring is very short in the polar regions. For physical causes of the seasons, see season.

In many cultures spring has been celebrated with rites and festivals revolving around its importance in food production. In European languages, the concept of spring is associated with the sowing of crops. During this time of the year all plants, including cultivated ones, begin growth anew after the dormancy of winter. Animals are greatly affected, too: they come out of their winter dormancy or hibernation and begin their nesting and reproducing activities, and birds migrate poleward in response to the warmer temperatures.

Details

Spring, also known as springtime, is one of the four temperate seasons, succeeding winter and preceding summer. There are various technical definitions of spring, but local usage of the term varies according to local climate, cultures and customs. When it is spring in the Northern Hemisphere, it is autumn in the Southern Hemisphere and vice versa. At the spring (or vernal) equinox, days and nights are approximately twelve hours long, with daytime length increasing and nighttime length decreasing as the season progresses until the summer solstice in June (Northern Hemisphere) and December (Southern Hemisphere).

Spring and "springtime" refer to the season, and also to ideas of rebirth, rejuvenation, renewal, resurrection and regrowth. Subtropical and tropical areas have climates better described in terms of other seasons, e.g. dry or wet, monsoonal or cyclonic. Cultures may have local names for seasons which have little equivalence to the terms originating in Europe.

Meteorological reckoning

Meteorologists generally define four seasons in many climatic areas: spring, summer, autumn (fall), and winter. These are determined by the values of their average temperatures on a monthly basis, with each season lasting three months. The three warmest months are by definition summer, the three coldest months are winter, and the intervening gaps are spring and autumn. Meteorological spring can therefore start on different dates in different regions.

In the US and UK, spring months are March, April, and May.

In Australia and New Zealand, spring begins on 1 September and ends on 30 November.

In Ireland, following the Irish calendar, spring is often defined as February, March, and April.

In Sweden, meteorologists define the beginning of spring as the first occasion on which the average 24-hour temperature exceeds zero degrees Celsius for seven consecutive days; thus the date varies with latitude and elevation (see the sketch below).

In Brazil, spring months are September, October, and November.
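
The Swedish definition above is effectively a small algorithm, so a code sketch may help. The following Python is a minimal illustration (the function name and sample data are invented):

    # Spring begins on the first day of the first run of seven consecutive
    # days whose average 24-hour temperature exceeds 0 degrees Celsius.
    def swedish_spring_start(daily_means):
        run = 0
        for i, temp in enumerate(daily_means):
            run = run + 1 if temp > 0 else 0
            if run == 7:
                return i - 6   # index of the first day of the warm run
        return None            # no qualifying run found

    temps = [-2, 1, 2, 3, -1, 1, 2, 3, 4, 5, 6, 7, 8]
    print(swedish_spring_start(temps))  # 5: the cold day resets the earlier run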

Astronomical and solar reckoning

In the Northern Hemisphere (e.g. Germany, the United States, Canada, and the UK), the astronomical vernal equinox (varying between 19 and 21 March) can be taken to mark the first day of spring, with the summer solstice (around 21 June) marked as the first day of summer. By solar reckoning, spring is held to last from 1 February until the first day of summer on May Day, with the summer solstice marked as Midsummer rather than the beginning of summer, as in astronomical reckoning.

In Persian culture the first day of spring is the first day of the first month (called Farvardin) which begins on 20 or 21 March.

In the traditional Chinese calendar, the "spring" season consists of the days between Lichun (3–5 February), taking Chunfen (20–22 March) as its midpoint, then ending at Lixia (5–7 May). Similarly, according to the Celtic tradition, which is based solely on daylight and the strength of the noon sun, spring begins in early February (near Imbolc or Candlemas) and continues until early May (Beltane).

In India, spring culturally falls in the months of March and April, with an average temperature of approximately 32 °C. Some people in India, especially in Karnataka state, celebrate their new year, Ugadi, in spring.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1684 2023-03-02 01:37:22

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,078

Re: Miscellany

1587) Stainless steel

Summary

Stainless steel is any one of a family of alloy steels usually containing 10 to 30 percent chromium. In conjunction with low carbon content, chromium imparts remarkable resistance to corrosion and heat. Other elements, such as nickel, molybdenum, titanium, aluminum, niobium, copper, nitrogen, sulfur, phosphorus, or selenium, may be added to increase corrosion resistance to specific environments, enhance oxidation resistance, and impart special characteristics.

Most stainless steels are first melted in electric-arc or basic oxygen furnaces and subsequently refined in another steelmaking vessel, mainly to lower the carbon content. In the argon-oxygen decarburization process, a mixture of oxygen and argon gas is injected into the liquid steel. By varying the ratio of oxygen and argon, it is possible to remove carbon to controlled levels by oxidizing it to carbon monoxide without also oxidizing and losing expensive chromium. Thus, cheaper raw materials, such as high-carbon ferrochromium, may be used in the initial melting operation.

There are more than 100 grades of stainless steel. The majority are classified into five major groups in the family of stainless steels: austenitic, ferritic, martensitic, duplex, and precipitation-hardening.

Austenitic steels, which contain 16 to 26 percent chromium and up to 35 percent nickel, usually have the highest corrosion resistance. They are not hardenable by heat treatment and are nonmagnetic. The most common type is the 18/8, or 304, grade, which contains 18 percent chromium and 8 percent nickel. Typical applications include aircraft and the dairy and food-processing industries.

Standard ferritic steels contain 10.5 to 27 percent chromium and are nickel-free; because of their low carbon content (less than 0.2 percent), they are not hardenable by heat treatment and have less critical anticorrosion applications, such as architectural and auto trim.

Martensitic steels typically contain 11.5 to 18 percent chromium and up to 1.2 percent carbon, with nickel sometimes added. They are hardenable by heat treatment, have modest corrosion resistance, and are employed in cutlery, surgical instruments, wrenches, and turbines.

Duplex stainless steels are a combination of austenitic and ferritic stainless steels in equal amounts; they contain 21 to 27 percent chromium, 1.35 to 8 percent nickel, 0.05 to 3 percent copper, and 0.05 to 5 percent molybdenum. Duplex stainless steels are stronger and more resistant to corrosion than austenitic and ferritic stainless steels, which makes them useful in storage-tank construction, chemical processing, and containers for transporting chemicals.

Precipitation-hardening stainless steel is characterized by its strength, which stems from the addition of aluminum, copper, and niobium to the alloy in amounts less than 0.5 percent of the alloy’s total mass. It is comparable to austenitic stainless steel with respect to its corrosion resistance, and it contains 15 to 17.5 percent chromium, 3 to 5 percent nickel, and 3 to 5 percent copper. Precipitation-hardening stainless steel is used in the construction of long shafts.

Details

Stainless steel is an alloy of iron that is resistant to rusting and corrosion. It contains at least 11% chromium and may contain elements such as carbon, other nonmetals and metals to obtain other desired properties. Stainless steel's resistance to corrosion results from the chromium, which forms a passive film that can protect the material and self-heal in the presence of oxygen.

The alloy's properties, such as luster and resistance to corrosion, are useful in many applications. Stainless steel can be rolled into sheets, plates, bars, wire, and tubing. These can be used in cookware, cutlery, surgical instruments, major appliances, vehicles, construction material in large buildings, industrial equipment (e.g., in paper mills, chemical plants, water treatment), and storage tanks and tankers for chemicals and food products.

The biological cleanability of stainless steel is superior to both aluminium and copper, and comparable to glass. Its cleanability, strength, and corrosion resistance have prompted the use of stainless steel in pharmaceutical and food processing plants.

Different types of stainless steel are labeled with an AISI three-digit number. The ISO 15510 standard lists the chemical compositions of stainless steels of the specifications in existing ISO, ASTM, EN, JIS, and GB standards in a useful interchange table.

Properties:

Conductivity

Like steel, stainless steels are relatively poor conductors of electricity, with significantly lower electrical conductivities than copper. In particular, the electrical contact resistance (ECR) of stainless steel arises as a result of the dense protective oxide layer and limits its functionality in applications as electrical connectors. Copper alloys and nickel-coated connectors tend to exhibit lower ECR values and are preferred materials for such applications. Nevertheless, stainless steel connectors are employed in situations where ECR is a less critical design criterion and corrosion resistance is required, for example in high temperatures and oxidizing environments.

Melting point

As with all other alloys, the melting point of stainless steel is expressed in the form of a range of temperatures, and not a singular temperature. This temperature range goes from 1,400 to 1,530 °C (2,550 to 2,790 °F) depending on the specific composition of the alloy in question.

Magnetism

Martensitic, duplex and ferritic stainless steels are magnetic, while austenitic stainless steel is usually non-magnetic. Ferritic steel owes its magnetism to its body-centered cubic crystal structure, in which iron atoms are arranged in cubes (with one iron atom at each corner) and an additional iron atom in the center. This central iron atom is responsible for ferritic steel's magnetic properties. This arrangement also limits the amount of carbon the steel can absorb to around 0.025%. Grades with a low coercive field have been developed for electro-valves used in household appliances and for injection systems in internal combustion engines. Some applications require non-magnetic materials, as in magnetic resonance imaging. Austenitic stainless steels, which are usually non-magnetic, can be made slightly magnetic through work hardening. Sometimes, if austenitic steel is bent or cut, magnetism occurs along the edge of the stainless steel because the crystal structure rearranges itself.

Corrosion

Stainless steel's corrosion resistance comes primarily from its chromium content; the addition of nitrogen also improves resistance to pitting corrosion and increases mechanical strength. Thus, there are numerous grades of stainless steel with varying chromium and molybdenum contents to suit the environment the alloy must endure. Corrosion resistance can be increased further by the following means:

* increasing chromium content to more than 11%
* adding nickel to at least 8%
* adding molybdenum (which also improves resistance to pitting corrosion)

Wear

Galling, sometimes called cold welding, is a form of severe adhesive wear, which can occur when two metal surfaces are in relative motion to each other and under heavy pressure. Austenitic stainless steel fasteners are particularly susceptible to thread galling, though other alloys that self-generate a protective oxide surface film, such as aluminium and titanium, are also susceptible. Under high contact-force sliding, this oxide can be deformed, broken, and removed from parts of the component, exposing the bare reactive metal. When the two surfaces are of the same material, these exposed surfaces can easily fuse. Separation of the two surfaces can result in surface tearing and even complete seizure of metal components or fasteners. Galling can be mitigated by the use of dissimilar materials (bronze against stainless steel) or using different stainless steels (martensitic against austenitic). Additionally, threaded joints may be lubricated to provide a film between the two parts and prevent galling. Nitronic 60, made by selective alloying with manganese, silicon, and nitrogen, has demonstrated a reduced tendency to gall.

Density

The density of stainless steel ranges between 7,500 kg/m^3 and 8,000 kg/m^3, depending on the alloy.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1685 2023-03-03 00:19:37

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,078

Re: Miscellany

1588) Intravenous therapy

Summary

Intravenous (IV) therapy is a way of administering fluids directly into a vein. The procedure enables water, medication, blood, or nutrients to access the body faster through the circulatory system.

IV therapy is the most common invasive procedure that medical professionals use in healthcare. This article discusses its uses, procedures, benefits, risks, and more.

Overview

Healthcare professionals can use an IV to deliver medication, vitamins, blood, or other fluids to those who need them.

Doctors can provide IV therapy through an IV line directly into a vein. This bypasses the gastric system so the body can take on more fluids quickly.

During the procedure, a healthcare professional will insert a cannula into a person’s vein, usually in the crook of their arm. They can then attach a tube with an IV bag containing fluids, which then drip down the tube directly into the vein.

The fluids or nutrition in IV therapies are specific to each person requiring the treatment.

Uses

IV therapy can treat:

* severe dehydration by administering fluids
* health conditions by administering medication
* pain by administering pain relief
* blood loss by blood transfusion
* malnutrition or inability to take food by administering nutrients

Doctors use the technique as a fast-acting way to feed essential fluids into the body’s system.

IV vitamin therapy

IV vitamin therapy can administer a high concentration of minerals and vitamins directly into the bloodstream rather than through the stomach.

A mix known as the Myers’ cocktail includes high doses of vitamins B and C, calcium, and magnesium. A medical professional dilutes the vitamins with sterile water.

They then put the fluid into an IV bag with a tube attached to the cannula.

Other types of IV vitamin therapies include:

* IV magnesium sulfate for acute asthma: A 2018 study found this treatment to be more beneficial than using a nebulizer, a device for inhalation medications through a face mask or mouthpiece, for children with acute asthma.
* IV selenium for acute respiratory distress syndrome: This can provide help for critically ill people who need mechanical ventilation.
* IV vitamin C for cancer: Healthcare professionals may administer high doses of vitamin C to those living with cancer. However, studies have not proven this treatment effective.

A 2020 study concluded that there was insufficient evidence to recommend the use of multivitamin IV therapy outside of medical settings. However, more research is necessary.

Q: Is it a good idea to have IV vitamin therapy at home?

A: Unless recommended by your physician, daily requirements of vitamins can be easily obtained through a well-balanced diet that includes multiple servings of vegetables and fruits. There are very few true reasons to require IV vitamin therapy, the most common being a history of small intestine removal due to illness, cancer, or trauma, which will hamper the absorption of edible nutrients.

Procedure

Below is what happens during a typical IV therapy procedure:

* Before the procedure, a healthcare professional will choose a vein where they insert the cannula. This may be in the forearm, wrist, the back of the hand, or the top of the foot. If a vein is difficult to find, they may use an ultrasound scan to guide the needle.
* Once they have found a vein, the healthcare professional will sanitize the area with a wipe before inserting a fine needle attached to the cannula. They may use adhesive tape to hold the cannula in place.
* Once the cannula is in place, healthcare professionals will use tubing to connect it to the IV.
* During the infusion, a healthcare professional will regularly check the cannula to ensure that the IV is flowing properly and there is no pain or swelling in the area.
* Once the IV therapy infusion is complete, the healthcare professional will disconnect the cannula from the tubing and remove it from the vein.
* They will then apply pressure over the insertion wound to help slow any bleeding. They may dress the area with a cotton bud and adhesive tape.

For procedures that require a regular IV, healthcare professionals will leave the cannula in place.

Benefits

The benefits of using IV therapy in a healthcare setting include:

* Speed: By inserting medications, nutrients, and fluids directly into the vein, healthcare professionals can help a patient recover quickly. This may be particularly useful if the person needs fluids or electrolytes quickly, such as during severe dehydration.
* Efficiency: IV therapy bypasses the gastric system, so the body can absorb more fluids without them having to pass through the digestive system. This makes it easier to provide medication to the target organs quickly.
* Convenience: Once a medical professional inserts a cannula, it can stay there for several days. This means they can provide regular treatment without repeatedly inserting a needle to deliver more fluids.

IV vitamin therapy

There are no studies that support the claims of benefits from IV vitamin therapy. Most studies look at the effects on people in medical facilities with serious conditions.

A 2020 study examining IV multivitamin use in both outpatient and medical settings concluded that there was insufficient evidence to recommend their use outside medical settings. The authors concluded that more research was necessary.

Side effects

Although IV therapy is generally safe and effective, it can cause side effects. These may include:

* damage to blood vessels
* bleeding from the site of insertion
* swelling in the area
* inflammation of the veins if the IV is present for a long time
* bruising at the site of insertion

Risks and complications

According to a 2020 study, complications of IV therapy may include:

* allergic reaction to the adhesive tape that secures the IV in place
* hematoma, or swelling from clotted blood under the skin
* the formation of a blood clot
* cellulitis, or swelling in deep layers of the skin
* skin necrosis, or premature death of skin cells
* the development of an abscess

More serious complications of IV therapy usually occur after the IV has been in place for 3 or more days.

The risk of complications rises if the person administering the IV has not completed full insertion training or does not carry out the procedure regularly. For this reason, a medical setting with trained professionals is the best place to receive IV therapy.

Frequently asked questions

Below are answers to common questions about IV therapy.

Can you do IV therapy at home?

Some services offer IV vitamin therapy at home. However, IV therapy comes with risks and complications, and it is best for a person to only undergo IV therapy with trained medical professionals when they need it.

A person should always consult a doctor or other healthcare professional before booking an at-home medical procedure.

How long does IV therapy stay in your system?

Fluids that enter the body through an IV may take effect more quickly than fluids consumed orally. However, fluids such as water, vitamins, and medication exit the body naturally, and the time this takes varies from person to person.

Summary

IV therapy is a way of administering blood, medication, water, nutrients, and other fluids directly into the bloodstream via the veins. It allows medical professionals to administer fluids to a patient quickly and efficiently.

Although it is common, IV therapy is an invasive procedure that carries some risks. Side effects may include bruising, bleeding, and swelling at the insertion site.

It is best to receive IV therapy in a hospital setting involving trained medical professionals.

Details

Intravenous therapy (abbreviated as IV therapy) is a medical technique that administers fluids, medications and nutrients directly into a person's vein. The intravenous route of administration is commonly used for rehydration or to provide nutrients for those who cannot, or will not—due to reduced mental states or otherwise—consume food or water by mouth. It may also be used to administer medications or other medical therapy such as blood products or electrolytes to correct electrolyte imbalances. Attempts at providing intravenous therapy have been recorded as early as the 1400s, but the practice did not become widespread until the 1900s after the development of techniques for safe, effective use.

The intravenous route is the fastest way to deliver medications and fluid replacement throughout the body as they are introduced directly into the circulatory system and thus quickly distributed. For this reason, the intravenous route of administration is also used for the consumption of some recreational drugs. Many therapies are administered as a "bolus" or one-time dose, but they may also be administered as an extended infusion or drip. The act of administering a therapy intravenously, or placing an intravenous line ("IV line") for later use, is a procedure which should only be performed by a skilled professional. The most basic intravenous access consists of a needle piercing the skin and entering a vein which is connected to a syringe or to external tubing. This is used to administer the desired therapy. In cases where a patient is likely to receive many such interventions in a short period (with consequent risk of trauma to the vein), normal practice is to insert a cannula which leaves one end in the vein, and subsequent therapies can be administered easily through tubing at the other end. In some cases, multiple medications or therapies are administered through the same IV line.

IV lines are classified as "central lines" if they end in a large vein close to the heart, or as "peripheral lines" if their output is to a small vein in the periphery, such as the arm. An IV line can be threaded through a peripheral vein to end near the heart, which is termed a "peripherally inserted central catheter" or PICC line. If a person is likely to need long-term intravenous therapy, a medical port may be implanted to enable easier repeated access to the vein without having to pierce the vein repeatedly. A catheter can also be inserted into a central vein through the chest, which is known as a tunneled line. The specific type of catheter used and site of insertion are affected by the desired substance to be administered and the health of the veins in the desired site of insertion.

Placement of an IV line may cause pain, as it necessarily involves piercing the skin. Infections and inflammation (termed phlebitis) are also both common side effects of an IV line. Phlebitis may be more likely if the same vein is used repeatedly for intravenous access, and can eventually develop into a hard cord which is unsuitable for IV access. The unintentional administration of a therapy outside a vein, termed extravasation or infiltration, may cause other side effects.

Uses:

Medical uses

Intravenous (IV) access is used to administer medications and fluid replacement which must be distributed throughout the body, especially when rapid distribution is desired. Another use of IV administration is the avoidance of first-pass metabolism in the liver. Substances that may be infused intravenously include volume expanders, blood-based products, blood substitutes, medications and nutrition.

Fluid solutions

Fluids may be administered as part of "volume expansion", or fluid replacement, through the intravenous route. Volume expansion consists of the administration of fluid-based solutions or suspensions designed to target specific areas of the body which need more water. There are two main types of volume expander: crystalloids and colloids. Crystalloids are aqueous solutions of mineral salts or other water-soluble molecules. Colloids contain larger insoluble molecules, such as gelatin. Blood itself is considered a colloid.

The most commonly used crystalloid fluid is normal saline, a solution of sodium chloride at 0.9% concentration, which is isotonic with blood. Lactated Ringer's (also known as Ringer's lactate) and the closely related Ringer's acetate are mildly hypotonic solutions often used in those who have significant burns. Colloids preserve a high colloid osmotic pressure in the blood, whereas crystalloids decrease it through hemodilution. Crystalloids are generally much cheaper than colloids.

Buffer solutions which are used to correct acidosis or alkalosis are also administered through intravenous access. Lactated Ringer's solution used as a fluid expander or base solution to which medications are added also has some buffering effect. Another solution administered intravenously as a buffering solution is sodium bicarbonate.

Medication and treatment

Medications may be mixed into the fluids mentioned above, commonly normal saline, or dextrose solutions. Compared with other routes of administration, such as oral medications, the IV route is the fastest way to deliver fluids and medications throughout the body. For this reason, the IV route is commonly preferred in emergency situations or when a fast onset of action is desirable. In extremely high blood pressure (termed a hypertensive emergency), IV antihypertensives may be given to quickly decrease the blood pressure in a controlled manner to prevent organ damage. In atrial fibrillation, IV amiodarone may be administered to attempt to restore normal heart rhythm. IV medications can also be used for chronic health conditions such as cancer, for which chemotherapy drugs are commonly administered intravenously. In some cases, such as with vancomycin, a loading or bolus dose of medicine is given before beginning a dosing regimen to more quickly increase the concentration of medication in the blood.

The bioavailability of an IV medication is by definition 100%, unlike oral administration, where medication may not be fully absorbed or may be metabolized prior to entering the bloodstream. For some medications there is virtually zero oral bioavailability, so certain types of medications can only be given intravenously, as there is insufficient uptake by other routes of administration; severe dehydration is similarly treated via IV therapy when quick recovery is required. The unpredictability of oral bioavailability in different people is also a reason for a medication to be administered IV, as with furosemide. Oral medications also may be less desirable if a person is nauseous or vomiting, or has severe diarrhea, as these may prevent the medicine from being fully absorbed from the gastrointestinal tract. In these cases, a medication may be given IV only until the patient can tolerate an oral form. The switch from IV to oral administration is usually performed as soon as viable, as there are generally cost and time savings over IV administration. Whether a medication can potentially be switched to an oral form is sometimes considered when choosing appropriate antibiotic therapy for use in a hospital setting, as a person is unlikely to be discharged if they still require IV therapy.
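
To make the definition concrete: in standard pharmacokinetics, absolute bioavailability F compares dose-normalized drug exposure, measured as the area under the concentration–time curve (AUC), against the intravenous route, which is why IV bioavailability is 100% by definition:

    F = (AUC_oral ÷ D_oral) ÷ (AUC_IV ÷ D_IV)

where D is the administered dose; F = 1 corresponds to 100% bioavailability, and F < 1 for any route with incomplete absorption.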

Some medications, such as aprepitant, are chemically modified to be better suited for IV administration, forming a prodrug such as fosaprepitant. This can be for pharmacokinetic reasons or to delay the effect of the drug until it can be metabolized into the active form.

Blood products

A blood product (or blood-based product) is any component of blood which is collected from a donor for use in a blood transfusion. Blood transfusions can be used in massive blood loss due to trauma, or can be used to replace blood lost during surgery. Blood transfusions may also be used to treat a severe anaemia or thrombocytopenia caused by a blood disease. Early blood transfusions consisted of whole blood, but modern medical practice commonly uses only components of the blood, such as packed red blood cells, fresh frozen plasma or cryoprecipitate.

Nutrition

Parenteral nutrition is the act of providing required nutrients to a person through an intravenous line. This is used in people who are unable to get nutrients normally, by eating and digesting food. A person receiving parenteral nutrition will be given an intravenous solution which may contain salts, dextrose, amino acids, lipids and vitamins. The exact formulation of a parenteral nutrition used will depend on the specific nutritional needs of the person it is being given to. If a person is only receiving nutrition intravenously, it is called total parenteral nutrition (TPN), whereas if a person is only receiving some of their nutrition intravenously it is called partial parenteral nutrition (or supplemental parenteral nutrition).

Imaging

Medical imaging relies on being able to clearly distinguish internal parts of the body from each other. One way this is accomplished is through the administration of a contrast agent into a vein. The specific imaging technique being employed will determine the characteristics of an appropriate contrast agent to increase visibility of blood vessels or other features. Common contrast agents are administered into a peripheral vein from which they are distributed throughout the circulation to the imaging site.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1686 2023-03-04 01:26:05

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,078

Re: Miscellany

1589) Multicellular organism

Summary

A multicellular organism is an organism composed of many cells, which are to varying degrees integrated and independent. The development of multicellular organisms is accompanied by cellular specialization and division of labour; cells become efficient in one process and are dependent upon other cells for the necessities of life.

Specialization in single-celled organisms exists at the subcellular level; i.e., the basic functions that are divided among the cells, tissues, and organs of the multicellular organism are collected within one cell. Unicellular organisms are sometimes grouped together and classified as the kingdom Protista.

Details

A multicellular organism is an organism that consists of more than one cell, in contrast to a unicellular organism.

All species of animals, land plants and most fungi are multicellular, as are many algae, whereas a few organisms are partially uni- and partially multicellular, like slime molds and social amoebae such as the genus Dictyostelium.

Multicellular organisms arise in various ways, for example by cell division or by aggregation of many single cells. Colonial organisms are the result of many identical individuals joining together to form a colony. However, it can often be hard to separate colonial protists from true multicellular organisms, because the two concepts are not distinct; colonial protists have been dubbed "pluricellular" rather than "multicellular". There are also multinucleate though technically unicellular organisms that are macroscopic, such as the xenophyophorea that can reach 20 cm.

Evolutionary history:

Occurrence

Multicellularity has evolved independently at least 25 times in eukaryotes, and also in some prokaryotes, like cyanobacteria, myxobacteria, actinomycetes, Magnetoglobus multicellularis or Methanosarcina. However, complex multicellular organisms evolved only in six eukaryotic groups: animals, symbiomycotan fungi, brown algae, red algae, green algae, and land plants. It evolved repeatedly for Chloroplastida (green algae and land plants), once for animals, once for brown algae, three times in the fungi (chytrids, ascomycetes, and basidiomycetes) and perhaps several times for slime molds and red algae. The first evidence of multicellular organization, which is when unicellular organisms coordinate behaviors and may be an evolutionary precursor to true multicellularity, is from cyanobacteria-like organisms that lived 3.0–3.5 billion years ago. To reproduce, true multicellular organisms must solve the problem of regenerating a whole organism from germ cells (i.e., sperm and egg cells), an issue that is studied in evolutionary developmental biology. Animals have evolved a considerable diversity of cell types in a multicellular body (100–150 different cell types), compared with 10–20 in plants and fungi.

Loss of multicellularity

Loss of multicellularity occurred in some groups. Fungi are predominantly multicellular, though early diverging lineages are largely unicellular (e.g., Microsporidia) and there have been numerous reversions to unicellularity across fungi (e.g., Saccharomycotina, Cryptococcus, and other yeasts). It may also have occurred in some red algae (e.g., Porphyridium), but it is possible that they are primitively unicellular. Loss of multicellularity is also considered probable in some green algae (e.g., Chlorella vulgaris and some Ulvophyceae). In other groups, generally parasites, a reduction of multicellularity occurred, in number or types of cells (e.g., the myxozoans, multicellular organisms, earlier thought to be unicellular, are probably extremely reduced cnidarians).

Cancer

Multicellular organisms, especially long-living animals, face the challenge of cancer, which occurs when cells fail to regulate their growth within the normal program of development. Changes in tissue morphology can be observed during this process. Cancer in animals (metazoans) has often been described as a loss of multicellularity. There is discussion about whether cancer can exist in other multicellular organisms, or even in protozoa. For example, plant galls have been characterized as tumors, but some authors argue that plants do not develop cancer.

Separation of somatic and germ cells

In some multicellular groups, which are called Weismannists, a separation between a sterile somatic cell line and a germ cell line evolved. However, Weismannist development is relatively rare (e.g., vertebrates, arthropods, Volvox), as a large proportion of species have the capacity for somatic embryogenesis (e.g., land plants, most algae, many invertebrates).

Origin hypotheses

One hypothesis for the origin of multicellularity is that a group of function-specific cells aggregated into a slug-like mass called a grex, which moved as a multicellular unit. This is essentially what slime molds do. Another hypothesis is that a primitive cell underwent nucleus division, thereby becoming a coenocyte. A membrane would then form around each nucleus (and the cellular space and organelles occupied in the space), thereby resulting in a group of connected cells in one organism (this mechanism is observable in Drosophila). A third hypothesis is that as a unicellular organism divided, the daughter cells failed to separate, resulting in a conglomeration of identical cells in one organism, which could later develop specialized tissues. This is what plant and animal embryos do as well as colonial choanoflagellates.

Because the first multicellular organisms were simple, soft organisms lacking bone, shell or other hard body parts, they are not well preserved in the fossil record. One exception may be the demosponge, which may have left a chemical signature in ancient rocks. The earliest fossils of multicellular organisms include the contested Grypania spiralis and the fossils of the black shales of the Palaeoproterozoic Francevillian Group Fossil B Formation in Gabon (Gabonionta). The Doushantuo Formation has yielded 600-million-year-old microfossils with evidence of multicellular traits.

Until recently, phylogenetic reconstruction has been through anatomical (particularly embryological) similarities. This is inexact, as living multicellular organisms such as animals and plants are more than 500 million years removed from their single-cell ancestors. Such a passage of time allows both divergent and convergent evolution time to mimic similarities and accumulate differences between groups of modern and extinct ancestral species. Modern phylogenetics uses sophisticated techniques such as alloenzymes, satellite DNA and other molecular markers to describe traits that are shared between distantly related lineages.

The evolution of multicellularity could have occurred in a number of different ways, some of which are described below:

The symbiotic theory

This theory suggests that the first multicellular organisms occurred from symbiosis (cooperation) of different species of single-cell organisms, each with different roles. Over time these organisms would become so dependent on each other they would not be able to survive independently, eventually leading to the incorporation of their genomes into one multicellular organism. Each respective organism would become a separate lineage of differentiated cells within the newly created species.

This kind of severely co-dependent symbiosis can be seen frequently, such as in the relationship between clown fish and Ritteri sea anemones. In these cases, it is extremely doubtful whether either species would survive very long if the other became extinct. However, the problem with this theory is that it is still not known how each organism's DNA could be incorporated into one single genome to constitute them as a single species. Although such symbiosis is theorized to have occurred (e.g., mitochondria and chloroplasts in animal and plant cells, through endosymbiosis), it has happened only extremely rarely and, even then, the genomes of the endosymbionts have retained an element of distinction, separately replicating their DNA during mitosis of the host species. For instance, the two or three symbiotic organisms forming the composite lichen, although dependent on each other for survival, have to separately reproduce and then re-form to create one individual organism once more.

The cellularization (syncytial) theory

This theory states that a single unicellular organism, with multiple nuclei, could have developed internal membrane partitions around each of its nuclei. Many protists such as the ciliates or slime molds can have several nuclei, lending support to this hypothesis. However, the simple presence of multiple nuclei is not enough to support the theory. Multiple nuclei of ciliates are dissimilar and have clear differentiated functions. The macronucleus serves the organism's needs, whereas the micronucleus is used for sexual reproduction with exchange of genetic material. Slime mold syncytia form from individual amoeboid cells, like syncytial tissues of some multicellular organisms, not the other way round. To be deemed valid, this theory needs a demonstrable example and mechanism of generation of a multicellular organism from a pre-existing syncytium.

The colonial theory

The colonial theory of Haeckel, 1874, proposes that the symbiosis of many organisms of the same species (unlike the symbiotic theory, which suggests the symbiosis of different species) led to a multicellular organism. At least some multicellularity, presumably that which evolved on land, occurs by cells separating and then rejoining (e.g., cellular slime molds), whereas for the majority of multicellular types (those that evolved within aquatic environments), multicellularity occurs as a consequence of cells failing to separate following division. The mechanism of this latter colony formation can be as simple as incomplete cytokinesis, though multicellularity is also typically considered to involve cellular differentiation.

The advantage of the Colonial Theory hypothesis is that it has been seen to occur independently in 16 different protoctistan phyla. For instance, during food shortages the amoeba Dictyostelium groups together in a colony that moves as one to a new location. Some of these amoebae then slightly differentiate from each other. Other examples of colonial organisation in protista are Volvocaceae, such as Eudorina and Volvox, the latter of which consists of up to 500–50,000 cells (depending on the species), only a fraction of which reproduce. For example, in one species 25–35 cells reproduce, 8 asexually and around 15–25 sexually. However, it can often be hard to separate colonial protists from true multicellular organisms, as the two concepts are not distinct; colonial protists have been dubbed "pluricellular" rather than "multicellular".

The synzoospore theory

Some authors suggest that the origin of multicellularity, at least in Metazoa, occurred due to a transition from temporal to spatial cell differentiation, rather than through a gradual evolution of cell differentiation, as affirmed in Haeckel's gastraea theory.

GK-PID

About 800 million years ago, a minor genetic change in a single molecule called guanylate kinase protein-interaction domain (GK-PID) may have allowed organisms to go from being single-celled to being composed of many cells.

The role of viruses

Genes borrowed from viruses and mobile genetic elements (MGEs) have recently been identified as playing a crucial role in the differentiation of multicellular tissues and organs and even in sexual reproduction, in the fusion of egg cell and sperm. Such fused cells are also involved in metazoan membranes, such as those that prevent chemicals from crossing the placenta and those that separate the brain from the body. Two viral components have been identified. The first is syncytin, which came from a virus. The second, identified in 2007, is called EFF-1 and helps form the skin of Caenorhabditis elegans; it is part of a whole family of FF proteins. Felix Rey, of the Pasteur Institute in Paris, has constructed the 3D structure of the EFF-1 protein and shown that it does the work of linking one cell to another, in viral infections. The fact that all known cell fusion molecules are viral in origin suggests that they have been vitally important to the inter-cellular communication systems that enabled multicellularity. Without the ability of cellular fusion, colonies could have formed, but anything even as complex as a sponge would not have been possible.

Oxygen availability hypothesis

This theory suggests that the oxygen available in the atmosphere of early Earth could have been the limiting factor for the emergence of multicellular life. This hypothesis is based on the correlation between the emergence of multicellular life and the increase of oxygen levels during this time. This would have taken place after the Great Oxidation Event but before the most recent rise in oxygen. However, Mills concludes that the amount of oxygen present during the Ediacaran was not necessary for complex life and therefore is unlikely to have been the driving factor for the origin of multicellularity.

Snowball Earth hypothesis

A snowball Earth is a geological event where the entire surface of the Earth is covered in snow and ice. The term can either refer to individual events (of which there were at least two) or to the larger geologic period during which all the known total glaciations occurred.

The most recent snowball Earth took place during the Cryogenian period and consisted of two global glaciation events known as the Sturtian and Marinoan glaciations. Xiao et al. suggest that between the period of time known as the "Boring Billion" and the snowball Earth, simple life could have had time to innovate and evolve, which could later lead to the evolution of multicellularity.

The snowball Earth hypothesis in regards to multicellularity proposes that the Cryogenian period in Earth history could have been the catalyst for the evolution of complex multicellular life. Brocks suggests that the time between the Sturtian glaciation and the more recent Marinoan glaciation allowed planktonic algae to dominate the seas, making way for rapid diversification of life in both plant and animal lineages. Complex life quickly emerged and diversified in what is known as the Cambrian explosion shortly after the Marinoan.

Predation hypothesis

The predation hypothesis suggests that in order to avoid being eaten by predators, simple single-celled organisms evolved multicellularity to make it harder to be consumed as prey. Herron et al. performed laboratory evolution experiments on the single-celled green alga, Chlamydomonas reinhardtii, using paramecium as a predator. They found that in the presence of this predator, C. reinhardtii does indeed evolve simple multicellular features.

Advantages

Multicellularity allows an organism to exceed the size limits normally imposed by diffusion: single cells with increased size have a decreased surface-to-volume ratio and have difficulty absorbing sufficient nutrients and transporting them throughout the cell. Multicellular organisms thus have the competitive advantages of an increase in size without its limitations. They can have longer lifespans as they can continue living when individual cells die. Multicellularity also permits increasing complexity by allowing differentiation of cell types within one organism.
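
To make the geometric point concrete: for a spherical cell of radius r, the surface-to-volume ratio is

    (4πr²) ÷ ((4/3)πr³) = 3/r

so doubling the radius halves the surface area available per unit of volume; this is the diffusion constraint that multicellularity sidesteps.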

Whether all of these can be seen as advantages, however, is debatable: the vast majority of living organisms are single-celled, and even in terms of biomass, single-celled organisms are far more successful than animals, although not plants. Rather than seeing traits such as longer lifespans and greater size as an advantage, many biologists see these only as examples of diversity, with associated tradeoffs.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1687 2023-03-05 01:10:28

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,078

Re: Miscellany

1590) Python (programming language)

Summary

Python is a high-level, general-purpose programming language. Its design philosophy emphasizes code readability with the use of significant indentation.

Python is dynamically typed and garbage-collected. It supports multiple programming paradigms, including structured (particularly procedural), object-oriented and functional programming. It is often described as a "batteries included" language due to its comprehensive standard library.
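
As a minimal, illustrative sketch of two of these traits, significant indentation and dynamic typing:

    # Indentation alone delimits blocks; no braces are needed.
    def describe(value):
        # Dynamic typing: the argument may be of any type,
        # and its type is inspected at run time.
        if isinstance(value, (int, float)):
            return f"{value} is a number"
        return f"{value!r} has type {type(value).__name__}"

    print(describe(42))       # 42 is a number
    print(describe("hello"))  # 'hello' has type str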

Guido van Rossum began working on Python in the late 1980s as a successor to the ABC programming language and first released it in 1991 as Python 0.9.0. Python 2.0 was released in 2000. Python 3.0, released in 2008, was a major revision not completely backward-compatible with earlier versions. Python 2.7.18, released in 2020, was the last release of Python 2.

Python consistently ranks as one of the most popular programming languages.

Details

Python is a high-level, general-purpose, and very popular programming language. Python (currently Python 3) is used in web development and machine learning applications, along with much other cutting-edge software. Python is well suited to beginners as well as to experienced programmers coming from other languages such as C++ and Java.

A well-structured tutorial can take you from the basics of the language to advanced topics (such as web scraping, Django, and deep learning), with worked examples.

Below are some facts about Python Programming Language:

* Python is currently among the most widely used multi-purpose, high-level programming languages.
* Python allows programming in Object-Oriented and Procedural paradigms.
* Python programs are generally shorter than equivalent programs in languages such as Java. Programmers type relatively less, and the language's indentation requirement keeps code readable; see the sketch after this list.
* Python is used by almost all of the tech giants, including Google, Amazon, Facebook, Instagram, Dropbox, and Uber.
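
To make the point about conciseness concrete, here is a hypothetical few-line word-count script; an equivalent Java program would need a class, a main method, and explicit type declarations:

from collections import Counter  # standard-library tally dictionary

text = "the quick brown fox jumps over the lazy dog the fox"
counts = Counter(text.split())   # split on whitespace and count each word
print(counts.most_common(2))     # [('the', 3), ('fox', 2)]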

One of Python's biggest strengths is its huge collection of standard-library and third-party packages, which can be used for the following (a minimal GUI sketch follows the list):

* Machine Learning
* GUI applications (such as Kivy, Tkinter, and PyQt)
* Web frameworks like Django (used by YouTube, Instagram, Dropbox)
* Image processing (like OpenCV, Pillow)
* Web scraping (like Scrapy, BeautifulSoup, Selenium)
* Test frameworks
* Multimedia
* Scientific computing
* Text processing, and many more
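
Of the GUI toolkits listed, Tkinter ships with Python itself. A minimal window, assuming a desktop environment is available, might look like this:

import tkinter as tk  # standard-library GUI toolkit

root = tk.Tk()                    # create the main application window
root.title("Hello from Tkinter")  # set the window title
tk.Label(root, text="Hello, world!").pack(padx=20, pady=20)  # one label widget
root.mainloop()                   # hand control to the event loop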

Additional Information

Python has become one of the most popular programming languages in the world in recent years. It's used in everything from machine learning to building websites and software testing. It can be used by developers and non-developers alike.

Python, one of the most popular programming languages in the world, has been used to create everything from Netflix's recommendation algorithm to the software that controls self-driving cars. Python is a general-purpose language, which means it's designed to be used in a range of applications, including data science, software and web development, automation, and generally getting stuff done.

What is Python?

Python is a computer programming language often used to build websites and software, automate tasks, and conduct data analysis. Python is a general-purpose language, meaning it can be used to create a variety of different programs and isn’t specialized for any specific problems. This versatility, along with its beginner-friendliness, has made it one of the most-used programming languages today. A survey conducted by industry analyst firm RedMonk found that it was the second-most popular programming language among developers in 2021.

What is Python used for?

Python is commonly used for developing websites and software, task automation, data analysis, and data visualization. Since it's relatively easy to learn, Python has been adopted by many non-programmers, such as accountants and scientists, for a variety of everyday tasks like organizing finances.

“Writing programs is a very creative and rewarding activity,” says University of Michigan and Coursera instructor Charles R Severance in his book Python for Everybody. “You can write programs for many reasons, ranging from making your living to solving a difficult data analysis problem to having fun to helping someone else solve a problem.”

What can you do with Python? Some things include:

* Data analysis and machine learning
* Web development
* Automation or scripting
* Software testing and prototyping
* Everyday tasks

Data analysis and machine learning

Python has become a staple in data science, allowing data analysts and other professionals to use the language to conduct complex statistical calculations, create data visualizations, build machine learning algorithms, manipulate and analyze data, and complete other data-related tasks.

Python can be used to build a wide range of data visualizations, like line and bar graphs, pie charts, histograms, and 3D plots. Python also has a number of libraries, like TensorFlow and Keras, that enable coders to write programs for data analysis and machine learning more quickly and efficiently.
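
Serious work relies on the libraries named above, but the flavour of such a calculation can be sketched in plain Python; the data points below are invented, and the code fits a straight line by ordinary least squares:

# Fit y = a*x + b to invented sample data, standard library only.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 8.1, 9.9]  # roughly y = 2x with some noise

mx = sum(xs) / len(xs)  # mean of x
my = sum(ys) / len(ys)  # mean of y
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx
print(f"slope = {a:.2f}, intercept = {b:.2f}")  # close to slope 2, intercept 0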

Web development

Python is often used to develop the back end of a website or application—the parts that a user doesn’t see. Python’s role in web development can include sending data to and from servers, processing data and communicating with databases, URL routing, and ensuring security. Python offers several frameworks for web development. Commonly used ones include Django and Flask.

Some web development jobs that use Python include back end engineers, full stack engineers, Python developers, software engineers, and DevOps engineers.
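
As a sketch of the kind of back-end code this section describes, a minimal route in Flask might look like the following; it assumes Flask has been installed (pip install flask), and the greeting text is invented:

from flask import Flask  # third-party web framework named above

app = Flask(__name__)

@app.route("/")
def index():
    # Handle requests for the site root and return the response body.
    return "Hello from a tiny Flask back end!"

if __name__ == "__main__":
    app.run(debug=True)  # start the development server on localhost:5000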

Automation or scripting

If you find yourself performing a task repeatedly, you could work more efficiently by automating it with Python. Writing code used to build these automated processes is called scripting. In the coding world, automation can be used to check for errors across multiple files, convert files, execute simple math, and remove duplicates in data.

Python can even be used by relative beginners to automate simple tasks on the computer, such as renaming files, finding and downloading online content, or sending emails or texts at desired intervals.
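
For example, the file-renaming task just mentioned can be scripted in a few lines with the standard library; the folder name and filename pattern here are hypothetical:

from pathlib import Path

# Give every .txt file in a (hypothetical) folder a "report_" prefix.
folder = Path("my_documents")
for path in list(folder.glob("*.txt")):  # snapshot the listing before renaming
    if not path.name.startswith("report_"):
        path.rename(path.with_name("report_" + path.name))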

Software testing and prototyping

In software development, Python can aid in tasks like build control, bug tracking, and testing. With Python, software developers can automate testing for new products or features. Some Python tools used for software testing include Green and Requestium.
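
Tools such as Green run tests written with Python's own unittest module, so a test can be sketched with the standard library alone; the add() function is invented purely for the example:

import unittest

def add(a, b):
    # Trivial function under test, invented for this sketch.
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()  # discover and run the tests in this file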

Everyday tasks

Python isn't only for programmers and data scientists. Learning Python can open new possibilities for those in less data-heavy professions, like journalists, small business owners, or social media marketers. Python can also enable non-programmers to simplify certain tasks in their lives. Here are just a few of the tasks you could automate with Python (a small sketch follows the list):

* Keep track of stock market or crypto prices
* Send yourself a text reminder to carry an umbrella anytime it’s raining
* Update your grocery shopping list
* Rename large batches of files
* Convert text files to spreadsheets
* Randomly assign chores to family members
* Fill out online forms automatically
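
As promised, here is a sketch of one of those tasks: converting a plain text file of colon-separated lines into a spreadsheet-friendly CSV. The file names and the item:quantity line format are hypothetical:

import csv

# Convert a (hypothetical) colon-separated shopping list into a CSV file.
with open("groceries.txt") as src, open("groceries.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    writer.writerow(["item", "quantity"])             # header row
    for line in src:
        if line.strip():                              # skip blank lines
            item, quantity = line.strip().split(":")  # e.g. "milk:2"
            writer.writerow([item, quantity])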

Why is Python so popular?

Python is popular for a number of reasons. Here’s a deeper look at what makes it so versatile and easy to use for coders.

* It has a simple syntax that mimics natural language, so it’s easier to read and understand. This makes it quicker to build projects, and faster to improve on them.
* It’s versatile. Python can be used for many different tasks, from web development to machine learning.
* It’s beginner friendly, making it popular for entry-level coders.
* It’s open source, which means it’s free to use and distribute, even for commercial purposes.
* Python’s archive of modules and libraries—bundles of code that third-party users have created to expand Python’s capabilities—is vast and growing.
* Python has a large and active community that contributes to Python’s pool of modules and libraries, and acts as a helpful resource for other programmers. The vast support community means that if coders run into a stumbling block, finding a solution is relatively easy; somebody is bound to have encountered the same problem before.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1688 2023-03-06 01:02:18

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,078

Re: Miscellany

1591) Dishwasher

Summary

Basically, a dishwasher is a robot that cleans and rinses dirty dishes. Humans have to load the dishes, add detergent, set the proper washing cycles and turn it on, but the dishwasher accomplishes a whole series of functions by itself. A dishwasher:

* Adds water
* Heats the water to the appropriate temperature
* Automatically opens the detergent dispenser at the right time
* Shoots the water through spray arms to get the dishes clean
* Drains the dirty water
* Sprays more water on the dishes to rinse them
* Drains itself again
* Heats the air to dry the dishes off, if the user has selected that setting

In addition, dishwashers monitor themselves to make sure everything is running properly. A timer (or a small computer) regulates the length of each cycle. A sensor detects the water and air temperature to prevent the dishwasher from overheating or damaging your dishes. Another sensor can tell if the water level gets too high and activates the draining function to keep the dishwasher from overflowing. Some dishwashers even have sensors that can detect the dirtiness of the water coming off the dishes. When the water is clear enough, the dishwasher knows the dishes are clean.

Although dishwashers are watertight, they don't actually fill with water. Just a small basin at the bottom fills up. There, heating elements heat the water up to as much as 155 degrees Fahrenheit (68 Celsius) while mixing in the detergent. Then a pump propels the water up to the spray arms, where it is forced out and sprayed against the dirty dishes.

Think about a garden hose with no nozzle — if you put your thumb over the end of the hose, decreasing the space for the water to come out, it sprays out more forcefully. The dishwasher's jets work on the same principle. The force of the water also makes the spray arms rotate, just like a lawn sprinkler.

Once the food particles are washed off of the dishes, they are either caught in a filter or chopped up into small pieces and disintegrated, similar to the actions of a garbage disposal. Then the cycle of heating water, spraying it and letting it drip back into the pool below repeats several times.

When the washing and rinsing is finished, the water drains down to the basin again, where the pump propels the water out of the dishwasher. Depending on the type of dishwasher, the drained water might go right into the pipes under your sink or into your garbage disposal.

The final step in a wash cycle is optional — the dry cycle. The heating element at the bottom of the dishwasher heats the air inside to help the dishes dry. Some people just let them dry without heat to save energy.

Dishwashers are not very mechanically complex.

Details

A dishwasher is a machine that is used to clean dishware, cookware, and cutlery automatically. Unlike manual dishwashing, which relies heavily on physical scrubbing to remove soiling, the mechanical dishwasher cleans by spraying hot water, typically between 45 and 75 °C (110 and 170 °F), at the dishes, with lower temperatures of water used for delicate items.

A mix of water and dishwasher detergent is pumped to one or more rotating sprayers, cleaning the dishes with the cleaning mixture. The mixture is recirculated to save water and energy. Often there is a pre-rinse, which may or may not include detergent, and the water is then drained. This is followed by the main wash with fresh water and detergent. Once the wash is finished, the water is drained; more hot water enters the tub by means of an electromechanical solenoid valve, and the rinse cycle(s) begin. After the rinse process finishes, the water is drained again and the dishes are dried using one of several drying methods. Typically a rinse aid, a chemical that reduces the surface tension of the water, is used to prevent the spots that hard water can leave behind.

In addition to domestic units, industrial dishwashers are available for use in commercial establishments such as hotels and restaurants, where many dishes must be cleaned. Washing is conducted with temperatures of 65–71 °C (149–160 °F) and sanitation is achieved by either the use of a booster heater that will provide an 82 °C (180 °F) "final rinse" temperature or through the use of a chemical sanitizer.

History

The first mechanical dishwashing device was registered for a patent in 1850 in the United States by Joel Houghton. This device was made of wood and was cranked by hand while water sprayed onto the dishes. The device was both slow and unreliable. Another patent was granted to L.A. Alexander in 1865 that was similar to the first but featured a hand-cranked rack system. Neither device was practical or widely accepted. Some historians cite as an obstacle to adoption the historical attitude that valued women for the effort put into housework rather than the results—making household chores easier was perceived by some to reduce their value.

The most successful of the hand-powered dishwashers was invented in 1886 by Josephine Cochrane together with mechanic George Butters in Cochrane's tool shed in Shelbyville, Illinois. Cochrane, a wealthy socialite, was inspired by her frustration at the damage her good china suffered when her servants handled it during cleaning, and wanted a machine that would protect it. Their invention was unveiled at the 1893 World's Fair in Chicago under the name of Lavadora, but was changed to Lavaplatos as another machine invented in 1858 already held that name.

Europe's first domestic dishwasher with an electric motor was invented and manufactured by Miele in 1929.

In the United Kingdom, William Howard Livens invented a small, non-electric dishwasher suitable for domestic use in 1924. It was the first dishwasher that incorporated most of the design elements that are featured in the models of today; it included a door for loading, a wire rack to hold the dirty crockery and a rotating sprayer. Drying elements were added to his design in 1940. It was the first machine suitable for domestic use, and it came at a time when permanent plumbing and running water in the home were becoming increasingly common.

Despite this, Livens's design did not become a commercial success, and dishwashers were only successfully sold as domestic utilities in the postwar boom of the 1950s, albeit only to the wealthy. Initially, dishwashers were sold as standalone or portable devices, but with the development of the wall-to-wall countertop and standardized-height cabinets, dishwashers began to be marketed with standardized sizes and shapes, integrated underneath the kitchen countertop as a modular unit with other kitchen appliances.

By the 1970s, dishwashers had become commonplace in domestic residences in North America and Western Europe. By 2012, over 75 percent of homes in the United States and Germany had dishwashers.

In the late 1990s, manufacturers began offering various new energy conservation features in dishwashers. One feature was the use of "soil sensors": a computerized tool in the dishwasher that measured food particles coming off the dishes. When the dishwasher had cleaned the dishes to the point of not releasing more food particles, the soil sensor would report the dishes as clean. The sensor operated together with another innovation, variable washing time. If dishes were especially dirty, the dishwasher would run longer than if the sensor detected them to be clean. In this way, the dishwasher would save energy and water by operating only for as long as needed.
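
That control loop is simple enough to sketch in Python. Nothing here is real firmware: the sensor and actuator hooks and the turbidity threshold are invented stand-ins for whatever a manufacturer actually uses:

TURBIDITY_THRESHOLD = 0.05  # invented cut-off for "rinse water runs clear"
MAX_CYCLES = 5              # safety cap on how far the wash can be extended

def wash_until_clean(read_turbidity, run_wash_cycle):
    # read_turbidity and run_wash_cycle are hypothetical firmware hooks:
    # the first returns a 0-to-1 measure of food particles in the water,
    # the second runs one fixed-length spray-and-drain cycle.
    for cycle in range(1, MAX_CYCLES + 1):
        run_wash_cycle()
        if read_turbidity() <= TURBIDITY_THRESHOLD:
            return cycle    # water runs clear: dishes judged clean
    return MAX_CYCLES       # stop extending and finish the programme

# Demo with a simulated sensor whose readings improve after each cycle.
readings = iter([0.40, 0.18, 0.04])
print("clean after", wash_until_clean(lambda: next(readings), lambda: None), "cycles")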

Design:

Size and capacity

Dishwashers that are installed into standard kitchen cabinets have a standard width and depth of 60 cm (Europe) or 24 in (61 cm) (US), and most dishwashers must be installed into a hole a minimum of 86 cm (Europe) or 34 in (86 cm) (US) tall. Portable dishwashers exist in 45 and 60 cm (Europe) or 18 and 24 in (46 and 61 cm) (US) widths, with casters and attached countertops. There are also dishwashers available in sizes according to the European gastronorm standard. Dishwashers may come in standard or tall tub designs; standard tub dishwashers have a service kickplate beneath the dishwasher door that allows for simpler maintenance and installation, but tall tub dishwashers have approximately 20% more capacity and better sound dampening from having a continuous front door.

The international standard for the capacity of a dishwasher is expressed in standard place settings. Commercial dishwashers are rated in plates per hour, based on standard-sized plates. The same applies to commercial glass washers, which are rated on standard glasses, normally pint glasses.

Layout

Present-day machines feature a drop-down front panel door, allowing access to the interior, which usually contains two or sometimes three pull-out racks; racks can also be referred to as "baskets". In older U.S. models from the 1950s, the entire tub rolled out when the machine latch was opened, and loading as well as removing washable items was from the top, with the user reaching deep into the compartment for some items. Youngstown Kitchens, which manufactured entire kitchen cabinets and sinks, offered a tub-style dishwasher, which was coupled to a conventional kitchen sink as one unit. Most present-day machines allow for placement of dishes, silverware, tall items and cooking utensils in the lower rack, while glassware, cups and saucers are placed in the upper rack. One notable exception was the line of dishwashers produced by the Maytag Corporation from the late sixties until the early nineties. These machines were designed for loading glassware, cups and saucers in the lower rack, while plates, silverware, and tall items were placed into the upper rack. This unique design allowed for a larger capacity and more flexibility in loading of dishes and pots and pans. Today, "dish drawer" models eliminate the inconvenience of the long reach that was necessary with older full-depth models. "Cutlery baskets" are also common. A drawer dishwasher, first introduced by Fisher & Paykel in 1997, is a variant in which the baskets slide out with the door in the same manner as a drawer filing cabinet, with each drawer in a double-drawer model able to operate independently of the other.

The inside of a dishwasher in the North American market is either stainless steel or plastic; most models combine a stainless steel tub with plastic-coated racks. Stainless steel tubs resist hard water and preserve heat to dry dishes more quickly, but they come at a premium price. Dishwashers can cost upwards of $1,500, though countertop dishwashers are available for under $300. Older models used baked enamel tubs, while some used a vinyl coating bonded to a steel tub, which protected the tub from acidic foods and provided some sound attenuation. European-made dishwashers feature a stainless steel interior as standard, even on low-end models. The same is true for a built-in water softener.

Washing elements

European dishwashers almost universally use two or three sprayers fed from the bottom and back wall of the machine, leaving both racks unimpeded; such models also tend to use inline water heaters, removing the need for exposed elements in the base that can melt nearby plastic items. Many North American dishwashers, by contrast, use exposed elements in the base. Some North American machines, primarily those designed by General Electric, use a wash tube, often called a wash tower, to direct water from the bottom of the dishwasher to the top dish rack. Some dishwashers, including many models from Whirlpool and KitchenAid, use a tube attached to the top rack that connects to a water source at the back of the dishwasher and directs water to a second wash spray beneath the upper rack; this allows full use of the bottom rack. Late-model Frigidaire dishwashers shoot a jet of water from the top of the washer down into the upper wash sprayer, again allowing full use of the bottom rack (but requiring that a small funnel on the top rack be kept clear).

Features

Mid range to higher end North American dishwashers often come with hard food disposal units, which behave like miniature garbage (waste) disposal units that eliminate large pieces of food waste from the wash water. One manufacturer that is known for omitting hard food disposals is Bosch, a German brand; however, Bosch does so in order to reduce noise. If the larger items of food waste are removed before placing in the dishwasher, pre-rinsing is not necessary even without integrated waste disposal units.

Many new dishwashers feature microprocessor-controlled, sensor-assisted wash cycles that adjust the wash duration to the number of dirty dishes (sensed by changes in water temperature) or the amount of dirt in the rinse water (sensed chemically or optically). This can save water and energy if the user runs a partial load. In such dishwashers, the electromechanical rotary switch often used to control the washing cycle is replaced by a microprocessor, but most sensors and valves are still required. However, pressure switches (some dishwashers use a pressure switch and flow meter) are not required in most microprocessor-controlled dishwashers, as they use the motor, and sometimes a rotational position sensor, to sense the resistance of the water; when the controller senses no cavitation, it knows it has the optimal amount of water. A bimetal switch or wax motor opens the detergent door during the wash cycle.

Some dishwashers include a child-lockout feature to prevent accidental starting or stopping of the wash cycle by children. A child lock can sometimes be included to prevent young children from opening the door during a wash cycle. This prevents accidents with hot water and strong detergents used during the wash cycle.

Process:

Energy use and water temperatures

In the European Union, the energy consumption of a dishwasher for a standard usage is shown on a European Union energy label. In the United States, the energy consumption of a dishwasher is defined using the energy factor.

Most consumer dishwashers use a 75 °C (167 °F) thermostat in the sanitizing process. During the final rinse cycle, the heating element and wash pump are turned on, and the cycle timer (electronic or electromechanical) is stopped until the thermostat is tripped. At this point, the cycle timer resumes and will generally trigger a drain cycle within a few timer increments.

Most consumer dishwashers use 75 °C (167 °F) rather than 83 °C (181 °F) because of burn risk, energy and water consumption, total cycle time, and possible damage to plastic items placed inside the dishwasher. With advances in detergents, lower water temperatures (50–55 °C / 122–131 °F) are now needed to prevent premature decay of the enzymes that break down grease and other build-ups on the dishes.

In the US, residential dishwashers can be certified to a NSF International testing protocol which confirms the cleaning and sanitation performance of the unit.

Superheated steam dishwashers can kill 99% of bacteria on a plate in just 25 seconds.

Drying

The heat inside the dishwasher dries the contents after the final hot rinse. North American dishwashers tend to use heat-assisted drying via an exposed element, which tends to be less efficient than other methods. European machines and some high-end North American machines use passive methods for drying: a stainless steel interior helps this process, and some models use heat-exchange technology between the inner and outer skin of the machine to cool the walls of the interior and speed up drying. Some dishwashers employ desiccants such as zeolite, which are heated at the beginning of the wash so that they dry out and release steam that warms the plates; during the dry cycle they cool and absorb moisture again, saving significant energy.

Plastic and non-stick items form drops with a smaller surface area and may not dry properly, compared with china and glass, which also store more heat and so better evaporate what little water remains on them. Some dishwashers incorporate a fan to improve drying. Older dishwashers with a visible heating element (at the bottom of the wash cabinet, below the bottom basket) may use the heating element to improve drying; however, this uses more energy.

Most importantly, however, the final rinse adds a small amount of rinse aid to the hot water. This mild surfactant improves drying significantly by reducing the surface tension of the water so that it mostly drips off, greatly improving how well all items, including plastic ones, dry.

Most dishwashers feature a drying sensor, and a dish-washing cycle is considered complete when a drying indicator, usually an illuminated "end" light or, in more modern models, a message on a digital display or an audible sound, shows the operator that the washing and drying cycle is over.

US government agencies often recommend air-drying dishes by disabling or stopping the drying cycle to save energy.

Differences between dishwashers and hand washing:

Dishwasher detergent

Dishwashers are designed to work using specially formulated dishwasher detergent. Over time, many regions have banned phosphates and other phosphorus-based compounds in detergents. They were previously used because they have properties that aid in effective cleaning; the concern was the increase in algal blooms in waterways caused by rising phosphate levels. Seventeen US states have partial or full bans on the use of phosphates in dish detergent, and two US states (Maryland and New York) ban phosphates in commercial dishwashing. Detergent companies claimed it was not cost-effective to make separate batches of detergent for the states with phosphate bans, so most have voluntarily removed phosphates from all dishwasher detergents.

In addition, rinse aids have contained nonylphenol and nonylphenol ethoxylates. These have been banned in the European Union by EU Directive 76/769/EEC.

In some regions, depending on water hardness, a dishwasher might function better with the use of dishwasher salt.

Glassware

Glassware washed by dishwashing machines can develop a white haze on the surface over time. This may be caused by any or all of the below processes, of which only the first is reversible:

Deposition of minerals

Calcium carbonate (limescale) in hard water can deposit and build up on surfaces when water dries. The deposits can be dissolved by vinegar or another acid. Dishwashers often include an ion-exchange device to remove calcium and magnesium ions and replace them with sodium. The resultant sodium salts are water-soluble and do not tend to build up.

Silicate filming, etching, and accelerated crack corrosion

This film starts as an iridescence or "oil-film" effect on glassware, and progresses into a "milky" or "cloudy" appearance (which is not a deposit) that cannot be polished off or removed like limescale. It is formed because the detergent is strongly alkaline (basic) and glass dissolves slowly in alkaline aqueous solution. It becomes less soluble in the presence of silicates in the water (added as anti-metal-corrosion agents in the dishwasher detergent). Since the cloudy appearance is due to nonuniform glass dissolution, it is (somewhat paradoxically) less marked if dissolution is higher, i.e. if a silicate-free detergent is used; also, in certain cases, the etching will primarily be seen in areas that have microscopic surface cracks as a result of the items' manufacturing. Limitation of this undesirable reaction is possible by controlling water hardness, detergent load and temperature. The type of glass is an important factor in determining if this effect is a problem. Some dishwashers can reduce this etching effect by automatically dispensing the correct amount of detergent throughout the wash cycle based on the level of water hardness programmed.

Dissolution of lead

Lead in lead crystal can be converted into a soluble form by the high temperatures and strong alkali detergents of dishwashers, which could endanger the health of subsequent users.

Other materials

Other materials besides glass are also harmed by the strong detergents, strong agitation, and high temperatures of dishwashers, especially on a hot wash cycle when temperatures can reach 75 °C (167 °F). Aluminium, brass, and copper items will discolour, and light aluminium containers will mark other items they knock into. Non-stick pan coatings will deteriorate. Glossy, gold-coloured, and hand-painted items will be dulled or fade. Fragile items and sharp edges will be dulled or damaged from colliding with other items or from thermal stress. Sterling silver and pewter will oxidize and discolour from the heat and from contact with metals lower on the galvanic series, such as stainless steel. Pewter has a low melting point and may warp in some dishwashers. Glued items, such as hollow-handle knives or wooden cutting boards, will melt or soften in a dishwasher, and high temperatures and moisture damage wood. High temperatures damage many plastics, especially in the bottom rack close to an exposed heating element (many newer dishwashers have a concealed heating element away from the bottom rack entirely). Squeezing plastic items into small spaces may cause the plastic to distort. Cast iron cookware is normally seasoned with oil or grease and heat, which causes the oil or grease to be absorbed into the pores of the cookware, giving a smooth, relatively non-stick cooking surface; this seasoning is stripped off by the combination of alkali-based detergent and hot water in a dishwasher.

Knives and other cooking tools made of carbon steel, semi-stainless steels like D2, or specialized, highly hardened cutlery steels like ZDP189 corrode in the extended moisture bath of a dishwasher, compared with the briefer baths of hand washing. Cookware, by contrast, is typically made of austenitic stainless steels, which are more stable.

Items contaminated by chemicals such as wax, cigarette ash, poisons, mineral oils, wet paints, oiled tools, furnace filters, etc. can contaminate a dishwasher, since the surfaces inside small water passages cannot be wiped clean as surfaces are in hand-washing, so contaminants remain to affect future loads. Objects contaminated by solvents may explode in a dishwasher.

Environmental comparison

Dishwashers use less water, and therefore less fuel to heat the water, than hand washing, except for small quantities washed in wash bowls without running water.

Hand-washing techniques vary by individual. According to a peer-reviewed study in 2003, hand washing and drying of an amount of dishes equivalent to a fully loaded automatic dishwasher (no cookware or bakeware) could use between 20 and 300 litres (5.3 and 79.3 US gal) of water and between 0.1 and 8 kWh of energy, while the figures for energy-efficient automatic dishwashers were 15–22 litres (4.0–5.8 US gal) and 1–2 kWh, respectively. The study concluded that fully loaded dishwashers use less energy, water, and detergent than the average European hand-washer. For the automatic dishwasher results, the dishes were not rinsed before being loaded. The study does not address costs associated with the manufacture and disposal of dishwashers, the cost of possible accelerated wear of dishes from the chemical harshness of dishwasher detergent, the comparison for cleaning cookware, or the value of labour saved; hand washers needed between 65 and 106 minutes. Several points of criticism of this study have been raised. For example, kilowatt-hours of electricity were compared against the energy used to heat hot water without taking possible inefficiencies into account. Also, inefficient hand washing was compared against optimal use of a fully loaded dishwasher without manual pre-rinsing, which can take up to 100 litres (26 US gal) of water.

A 2009 study showed that the microwave and the dishwasher were both more effective ways to clean domestic sponges than handwashing.

Adoption:

Commercial use

Large heavy-duty dishwashers are available for use in commercial establishments (e.g. hotels, restaurants) where many dishes must be cleaned.

Unlike a residential dishwasher, a commercial dishwasher does not use a drying cycle (commercial drying is achieved by the heated ware meeting open air once the wash/rinse/sanitation cycles have been completed) and is thus significantly faster than its residential counterpart. Washing is conducted at 65–71 °C (149–160 °F), and sanitation is achieved either by a booster heater that provides the machine an 82 °C (180 °F) "final rinse" temperature or by a chemical sanitizer. This distinction labels the machines as either "high-temp" or "low-temp".

Some commercial dishwashers work similarly to a commercial car wash, with a pulley system that pulls the rack through a small chamber (widely known as "rack conveyor" systems). Single-rack washers require an operator to push the rack into the washer, close the doors, start the cycle, and then open the doors to pull out the cleaned rack, possibly through a second opening into an unloading area.

In the UK, the British Standards Institution set standards for dishwashers. In the US, NSF International (an independent not-for-profit organization) sets the standards for wash and rinse time along with minimum water temperature for chemical or hot-water sanitizing methods. There are many types of commercial dishwashers including under-counter, single tank, conveyor, flight type, and carousel machines.

Commercial dishwashers often have significantly different plumbing and operations than a home unit, in that there are often separate sprayers for washing and rinsing/sanitizing. The wash water is heated with an in-tank electric heat element and mixed with a cleaning solution, and is used repeatedly from one load to the next. The wash tank usually has a large strainer basket to collect food debris, and the strainer may not be emptied until the end of the day's kitchen operations.

Water used for rinsing and sanitizing is generally delivered directly through building water supply, and is not reusable. The used rinse water empties into the wash tank reservoir, which dilutes some of the used wash water and causes a small amount to drain out through an overflow tube. The system may first rinse with pure water only and then sanitize with an additive solution that is left on the dishes as they leave the washer to dry.

Additional soap is periodically added to the main wash water tank, from either large soap concentrate tanks or dissolved from a large solid soap block, to maintain wash water cleaning effectiveness.

Alternative uses

Dishwashers can be used to cook foods at low temperatures (e.g. dishwasher salmon). The foods are generally sealed in canning jars or oven bags since even a dishwasher cycle without soap can deposit residual soap and rinse aid from previous cycles on unsealed foods.

Dishwashers also have been documented to be used to clean potatoes, other root vegetables, garden tools, sneakers or trainers, silk flowers, some sporting goods, plastic hairbrushes, baseball caps, plastic toys, toothbrushes, flip-flops, contact lens cases, a mesh filter from a range hood, refrigerator shelves and bins, toothbrush holders, pet bowls and pet toys. Cleaning vegetables and plastics is controversial since vegetables can be contaminated by soap and rinse aid from previous cycles and the heat of most standard dishwashers can cause BPA or phthalates to leach out of plastic products. The use of a dishwasher to clean greasy tools and parts is not recommended as the grease can clog the dishwasher.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1689 2023-03-06 22:37:07

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,078

Re: Miscellany

1592) Microstate

Summary

A microstate or ministate is a sovereign state having a very small population or very small land area, usually both. However, the meanings of "state" and "very small" are not well-defined in international law.

Details

Some recent attempts to define microstates have focused on identifying qualitative features that are linked to their small size and population, such as the partial delegation of their sovereignty to larger states, for instance for international defense.

Commonly accepted examples of microstates include Andorra, Liechtenstein, Monaco, Nauru, Palau, San Marino and Tuvalu. The smallest political entity recognized as a sovereign state is Vatican City, with fewer than 1,000 residents and an area of only 49 hectares (120 acres). Some microstates – such as Monaco and Vatican City – are city-states consisting of a single municipality.

Definitions:

Quantitative

Most scholars identify microstates by using a quantitative threshold and applying it to either one variable (such as the size of its territory or population) or a composite of different variables. While it is agreed that microstates are the smallest of all states, there is no consensus on what variable (or variables) or what cut-off point should be used to determine which political units should be labelled as "microstates" (as opposed to small "normal" states). According to some scholars the quantitative approach to defining microstates suffers from such problems as "inconsistency, arbitrariness, vagueness and inability to meaningfully isolate qualitatively distinct political units".

Qualitative

Some academics have suggested defining microstates according to the unique features that are linked to their geographic or demographic smallness. Newer approaches have proposed looking at the behaviour or capacity to operate in the international arena in order to determine which states should deserve the microstate label. Yet, it has been argued that such approaches could lead to either confusing microstates with weak states (or failed states) or relying too much on subjective perceptions.

An alternative approach is to define microstates as "modern protected states". According to the definition proposed by Dumienski (2014): "microstates are modern protected states, i.e. sovereign states that have been able to unilaterally depute certain attributes of sovereignty to larger powers in exchange for benign protection of their political and economic viability against their geographic or demographic constraints." Adopting this approach permits limiting the number of microstates and separating them from both small states and autonomies or dependencies. Examples of microstates understood as modern protected states include such states as Liechtenstein, San Marino, Monaco, Niue, Andorra, the Cook Islands or Palau.

The smallest political unit recognized as a sovereign state is the Vatican City, though its precise status is sometimes disputed, e.g., Maurice Mendelson argued in 1972 that "in two respects it may be doubted whether the territorial entity, the Vatican City, meets the traditional criteria of statehood".

St. Kitts and Nevis in the Caribbean Sea, the smallest independent country in the Americas with 261 sq km (101 sq mi).

Politics

Statistical research has shown that microstates are more likely to be democracies than larger states. In 2012, Freedom House classified 86% of the countries with fewer than 500,000 inhabitants as "free". This suggests that countries with small populations often enjoy a high degree of political freedom and civil liberties, one of the hallmarks of democracy. Some scholars have taken the statistical correlation between small size and democracy as a sign that smallness is beneficial to the development of a democratic political system, mentioning social cohesiveness, opportunities for direct communication, and homogeneity of interests as possible explanations.

Case study research, however, has led researchers to believe that the statistical evidence belies the anti-democratic elements of microstate politics. Due to small populations, family and personal relations are often decisive in microstate politics. In some cases, this impedes neutral and formal decision-making and instead leads to undemocratic political activity, such as clientelism, corruption, particularism, and executive dominance. While microstates often have the formal institutions associated with democracy, the inner workings of politics in microstates are in reality often undemocratic.

The high number of democracies amongst microstates could be explained by their colonial history. Most microstates adopted the same political system as their colonial ruler. Because so many microstates were British colonies in the past, microstates often have a majoritarian and parliamentary political system similar to the Westminster system. Some microstates with a history as British colonies have implemented aspects of a consensus political system to adapt to their geographic features or societal make-up. While colonial history often determines what political systems microstates have, they do implement changes to better accommodate their specific characteristics.

Microstates and international relations

Microstates often rely on other countries to survive, as they have small military capacity and a lack of resources. This has led some researchers to believe that microstates are forced to subordinate themselves to larger states, reducing their sovereignty. Research, however, has shown that microstates strategically engage in patron-client relationships with other countries. This allows them to trade privileges to the countries that can advance their interests the most. Examples are microstates that establish a tax haven or sell their support in international committees in exchange for military and economic support.

Historical anomalies and aspirant states

A small number of tiny sovereign political units have been founded on historical anomalies or eccentric interpretations of law. These types of states, often labelled as "microstates," are usually located on small (usually disputed) territorial enclaves, generate limited economic activity founded on tourism and philatelic and numismatic sales, and are tolerated or ignored by the nations from which they claim to have seceded.

The Republic of Indian Stream – now the town of Pittsburg, New Hampshire – was a geographic anomaly left unresolved by the Treaty of Paris that ended the U.S. Revolutionary War, and claimed by both the U.S. and Canada. Between 1832 and 1835, the area's residents refused to acknowledge either claimant.

The Cospaia Republic became independent through a treaty error and survived from 1440 to 1826. Its independence made it important in the introduction of tobacco cultivation to Italy.

Maldives in the Indian Ocean, the smallest independent country in Asia, with an area of 298 sq km (115 sq mi).

Couto Misto was disputed by Spain and Portugal, and operated as a sovereign state until the 1864 Treaty of Lisbon partitioned the territory, with the larger part becoming part of Spain.

Jaxa was a small state that existed during the 17th century on the border between the Tsardom of Russia and Qing China. Despite its location in East Asia, the state's primary language was Polish.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1690 2023-03-07 19:28:33

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,078

Re: Miscellany

1593) City-state

Summary

A city-state is an independent sovereign city which serves as the center of political, economic, and cultural life over its contiguous territory. City-states have existed in many parts of the world since the dawn of history, including the ancient poleis such as Athens and Sparta, Carthage and Rome, the altepeme of pre-Columbian Mexico, and the Italian city-states of the Middle Ages and Renaissance, such as Florence, Venice, Genoa, and Milan.

With the rise of nation states worldwide, only a few modern sovereign city-states exist, with some disagreement as to which qualify; Monaco, Singapore and Vatican City are most commonly accepted as such. Singapore is the clearest example, with full self-governance, its own currency, a robust military and a population of 5.5 million.

Several non-sovereign cities enjoy a high degree of autonomy and are sometimes considered city-states. Hong Kong, Macau, and members of the United Arab Emirates—most notably Dubai and Abu Dhabi—are often cited as such.

Details

A city-state is a political system consisting of an independent city having sovereignty over contiguous territory and serving as a centre and leader of political, economic, and cultural life. The term originated in England in the late 19th century and has been applied especially to the cities of ancient Greece, Phoenicia, and Italy and to the cities of medieval Italy.

The name was initially given to the political form that crystallized during the classical period of Greek civilization. The city-state’s ancient Greek name, polis, was derived from the citadel (acropolis), which marked its administrative centre; and the territory of the polis was usually fairly limited. City-states differed from tribal or national systems in size, exclusiveness, patriotism, and passion for independence. The origin of city-states is disputed. It is probable that earlier tribal systems broke up during a period of economic decline and the splintered groups established themselves between 1000 and 800 BCE as independent nuclei of city-states that covered peninsular Greece, the Aegean islands, and western Asia Minor. As they grew in population and commercial activity, they sent out bands of emigrants who created similar city-states on the coasts of the Mediterranean Sea and the Black Sea, mainly between 750 and 550 BCE.

The thousands of city-states that sprang into existence during these centuries were remarkable for their diversity. Every variety of political experiment from monarchy to communism was practiced, and the fundamental principles of political life were formulated by their philosophers. The vigour and intensity of the citizens’ experience were such that they made unparalleled advances in all fields of human activity, except industry and technology, and laid the basis of Greco-Roman civilization. The particularism of city-states was their glory and their weakness. Incapable of forming any permanent union or federation, they fell victim to the Macedonians, the Carthaginians, and the Roman Empire, under which they lived on as dependent privileged communities (municipia). Rome, which began its republican history as a city-state, pursued policies of foreign expansion and government centralization that led to the annihilation of the city-state as a political form in the ancient world.

The revival of city-states was noticeable by the 11th century, when several Italian towns had reached considerable prosperity. They were mostly in Byzantine territory or had maintained contact with Constantinople (Istanbul) and could thus take full advantage of the revival of eastern trade.

Foremost among them were Venice and Amalfi, the latter reaching the height of its commercial power about the middle of the century; others included Bari, Otranto, and Salerno. Amalfi, for a short time a serious rival of Venice, declined after having submitted to the Normans in 1073. Then Venice received, with the privilege of 1082, exemption from all customs duties within the Byzantine Empire. In the 11th century Pisa, the natural port of Tuscany, began to rise amid struggles with the Arabs, whom it defeated repeatedly; and Genoa, which was to be its rival for centuries, was following suit. Among the inland towns—as yet less conspicuous—Pavia, which had owed much of its early prosperity to its role as capital of the Lombard kingdom, was rapidly outdistanced by Milan; Lucca, on the Via Francigena from Lombardy to Rome and for a long time the residence of the margraves of Tuscany, was the most important Tuscan inland town.

The importance of fortified centres during the Hungarian and Arab incursions contributed to the development of towns. Town walls were rebuilt or repaired, providing security both to citizens and to people from the country; and the latter found further places of refuge in the many fortified castelli with which the countryside began to be covered.

The Norman conquest of southern Italy put an end to the progress of municipal autonomy in that region. Whether it took the form of a conflict with the established authorities or of peaceful transition, the ultimate result of the communal movement in the north was full self-government. Originally the communes were, as a rule, associations of the leading sections of the town population; but they soon became identical with the new city-state. Their first opponents were often, but by no means always, the bishops; in Tuscany, where margravial authority was strong, the Holy Roman emperor Henry IV encouraged rebellion against his rival Matilda by granting extensive privileges to Pisa and to Lucca in 1081; and Matilda’s death made it possible for Florence to achieve independence.

The first organs of the city-state were the general assembly of all its members (parlamento, concio, arengo) and the magistracy of the consuls. At an early date a council began to replace the unwieldy assembly for ordinary political and legislative business; and, with the growing complexity of the constitution, further councils emerged, conditions varying considerably from town to town. During the 12th century, the consular office was usually monopolized by the class that had taken the initiative in the establishment of the commune. This class was usually composed of small feudal or nonfeudal landowners and the wealthier merchants. In Pisa and Genoa the commercial element was predominant, while in parts of Piedmont the commune derived from the associations of the local nobility. Thus the early city-state was predominantly aristocratic. The fortified towers of the leading families, resembling the feudal castles of the countryside, were characteristic of these conditions. In Italy there had in fact never been the same separation between town and countryside as there had been, for instance, in northern France and in Germany; feudal society had penetrated into the towns, while nonnoble citizens were often landowners outside their walls. This link between town and country was to become stronger and more complex in the course of communal history.

From the beginning the conquest of the countryside (contado) became one of the main objectives of city-state policy. The small fortified townships (castelli) and the lesser rural places were now absorbed by the city-states. The divisions and subdivisions of feudal property, partly the result of the Lombard law of inheritance, weakened many feudal houses and thus facilitated the conquest, while the bishops could not prevent the extension of communal control to their lands. The members of the rural nobility were subjected one by one and often forced to become citizens; others did so voluntarily. Only a small number of the more powerful families, such as the house of Este, the Malaspina, the Guidi, and the Aldobrandeschi, succeeded in maintaining their independence—and that not without frequent losses and concessions.

Additional Information

Simply stated, a city-state is an independent country that exists completely within the borders of a single city. Originating in late 19th-century England, the term has also been applied to the superpower cities of the ancient world, such as Rome, Carthage, Athens, and Sparta. Today, Monaco, Singapore, and Vatican City are considered the only true city-states.

Key Takeaways: City State

* A city-state is an independent, self-governing country contained totally within the borders of a single city.

* The ancient empires of Rome, Carthage, Athens, and Sparta are considered early examples of city-states.

* Once numerous, today there are few true city-states. They are small in size and dependent on trade and tourism.

* The only three agreed upon city-states today are Monaco, Singapore, and Vatican City.

City State Definition

The city-state is a usually small, independent country consisting of a single city, the government of which exercises full sovereignty or control over itself and all territories within its borders. Unlike in more traditional multi-jurisdictional countries, where political powers are shared between the national government and various regional governments, the single city of a city-state functions as the center of political, economic, and cultural life.

Historically, the first recognized city-states evolved in the classical period of Greek civilization during the 4th and 5th centuries BCE. The Greek term for a city-state, "polis," derived from the acropolis, the citadel that served as the governmental center of ancient Athens.

Both the popularity and prevalence of the city-state flourished until the tumultuous downfall of Rome in 476 CE, which led to the near annihilation of the form of government. City-states saw a small revival during the 11th century CE, when several Italian examples, such as Naples and Venice, realized considerable economic prosperity.

Characteristics of City-States

The unique characteristic of a city-state that sets it apart from other types of government is its sovereignty, or independence. This means that a city-state has the full right and power to govern itself and its citizens, without any interference from outside governments. For example, the government of the city-state of Monaco, though bordered entirely by France on land, is not subject to French laws or policies.

By having sovereignty, city-states differ from other forms of government establishments such as “autonomous regions” or territories. While autonomous regions are functionally political subdivisions of a central national government, they retain varying degrees of self-governance or autonomy from that central government. Hong Kong and Macau in the People’s Republic of China and Northern Ireland in the United Kingdom are examples of autonomous regions.

Unlike ancient city-states such as Rome and Athens, which grew powerful enough to conquer and annex vast areas of land around them, modern city-states remain small in land area. Lacking the space necessary for agriculture or industry, the economies of the three modern city-states are dependent on trade or tourism. Singapore, for example, has the second-busiest seaport in the world, and Monaco and Vatican City are two of the world’s most popular tourist destinations.

Modern City-States

While several non-sovereign cities, such as Hong Kong and Macau, along with Dubai and Abu Dhabi in the United Arab Emirates, are sometimes considered city-states, they actually function as autonomous regions. Most geographers and political scientists agree that the three modern true city-states are Monaco, Singapore, and Vatican City.

Monaco

Monaco is a city-state located on France's Mediterranean coastline. With a land area of 0.78 square miles and an estimated 38,500 permanent residents, it is the world's second-smallest nation but its most densely populated. A voting member of the UN since 1993, Monaco employs a constitutional monarchy form of government. Though it maintains a small military, Monaco depends on France for defense. Best known for its upscale casino district of Monte-Carlo, deluxe hotels, Grand Prix motor racing, and yacht-lined harbor, Monaco has an economy that depends almost entirely on tourism.

Singapore

Singapore is an island city-state in Southeast Asia. With about 5.3 million people living within its 270 square miles, it is the second most densely populated country in the world after Monaco. Singapore became an independent republic, at once a city and a sovereign country, in 1965 after being expelled from the Federation of Malaysia. Under its constitution, Singapore employs a representative democracy form of government, with its own currency and full, highly trained armed forces. With the fifth-largest per capita GDP in the world and an enviably low unemployment rate, Singapore's economy thrives on exporting a vast variety of consumer products.

Vatican City

Occupying an area of only about 108 acres inside Rome, Italy, the city-state of Vatican City stands as the world’s smallest independent country. Created by the 1929 Lateran Treaty with Italy, Vatican City’s political system is controlled by the Roman Catholic Church, with the Pope serving as the legislative, judicial, and executive head of government. The city’s permanent population of around 1,000 is made up almost entirely of Catholic clergymen. As a neutral country with no military of its own, Vatican City has never been involved in a war. Vatican City’s economy relies on sales of its postage stamps, historical publications, mementos, donations, investments of its reserves, and museum admission fees. 



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1691 2023-03-09 00:13:58

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,078

Re: Miscellany

1594) University

Summary

A university (from Latin universitas 'a whole') is an institution of higher (or tertiary) education and research which awards academic degrees in several academic disciplines. Universities typically offer both undergraduate and postgraduate programs. In the United States, the designation is reserved for colleges that have a graduate school.

The word university is derived from the Latin universitas magistrorum et scholarium, which roughly means "community of teachers and scholars".

The first universities in Europe were established by Catholic Church monks. The University of Bologna (Università di Bologna), Italy, which was founded in 1088, is the first university in the sense of:

* Being a degree-awarding institution of higher learning.
* Having independence from the ecclesiastic schools, although conducted by both clergy and non-clergy.
* Using the word universitas (which was coined at its foundation).
* Issuing secular and non-secular degrees: grammar, rhetoric, logic, theology, canon law, notarial law.

Details

A university is an institution of higher education, usually comprising a college of liberal arts and sciences and graduate and professional schools and having the authority to confer degrees in various fields of study. A university differs from a college in that it is usually larger, has a broader curriculum, and offers graduate and professional degrees in addition to undergraduate degrees. Although universities did not arise in the West until the Middle Ages in Europe, they existed in some parts of Asia and Africa in ancient times.

Early universities

The modern Western university evolved from the medieval schools known as studia generalia; they were generally recognized places of study open to students from all parts of Europe. The earliest studia arose out of efforts to educate clerks and monks beyond the level of the cathedral and monastic schools. The inclusion of scholars from foreign countries constituted the primary difference between the studia and the schools from which they grew.

The earliest Western institution that can be called a university was a famous medical school that arose at Salerno, Italy, in the 9th century and drew students from all over Europe. It remained merely a medical school, however. The first true university in the West was founded at Bologna late in the 11th century. It became a widely respected school of canon and civil law. The first university to arise in northern Europe was the University of Paris, founded between 1150 and 1170. It became noted for its teaching of theology, and it served as a model for other universities in northern Europe such as the University of Oxford in England, which was well established by the end of the 12th century. The Universities of Paris and Oxford were composed of colleges, which were actually endowed residence halls for scholars.

These early universities were corporations of students and masters, and they eventually received their charters from popes, emperors, and kings. The University of Naples, founded by Emperor Frederick II (1224), was the first to be established under imperial authority, while the University of Toulouse, founded by Pope Gregory IX (1229), was the first to be established by papal decree. These universities were free to govern themselves, provided they taught neither atheism nor heresy. Students and masters together elected their own rectors (presidents). As the price of independence, however, universities had to finance themselves. So teachers charged fees, and, to assure themselves of a livelihood, they had to please their students. These early universities had no permanent buildings and little corporate property, and they were subject to the loss of dissatisfied students and masters who could migrate to another city and establish a place of study there. The history of the University of Cambridge began in 1209 when a number of disaffected students moved there from Oxford, and 20 years later Oxford profited by a migration of students from the University of Paris.

From the 13th century on, universities were established in many of the principal cities of Europe. Universities were founded at Montpellier (beginning of the 13th century) and Aix-en-Provence (1409) in France, at Padua (1222), Rome (1303), and Florence (1321) in Italy, at Salamanca (1218) in Spain, at Prague (1348) and Vienna (1365) in central Europe, at Heidelberg (1386), Leipzig (1409), Freiburg (1457), and Tübingen (1477) in what is now Germany, at Louvain (1425) in present-day Belgium, and at Saint Andrews (1411) and Glasgow (1451) in Scotland.

Until the end of the 18th century, most Western universities offered a core curriculum based on the seven liberal arts: grammar, logic, rhetoric, geometry, arithmetic, astronomy, and music. Students then proceeded to study under one of the professional faculties of medicine, law, and theology. Final examinations were grueling, and many students failed.

Impact of the Protestant Reformation and the Counter-Reformation on European universities

The Protestant Reformation of the 16th century and the ensuing Counter-Reformation affected the universities of Europe in different ways. In the German states, new Protestant universities were founded and older schools were taken over by Protestants, while many Roman Catholic universities became staunch defenders of the traditional learning associated with the Catholic church. By the 17th century, both Protestant and Catholic universities had become overly devoted to defending correct religious doctrines and hence remained resistant to the new interest in science that had begun to sweep through Europe. The new learning was discouraged, and thus many universities underwent a period of relative decline. New schools continued to be founded during this time, however, including ones at Edinburgh (1583), Leiden (1575), and Strasbourg (university status, 1621).

The first modern university in Europe was that of Halle, founded by Lutherans in 1694. This school was one of the first to renounce religious orthodoxy of any kind in favour of rational and objective intellectual inquiry, and it was the first where teachers lectured in German (i.e., a vernacular language) rather than in Latin. Halle’s innovations were adopted by the University of Göttingen (founded 1737) a generation later and subsequently by most German and many American universities.

In the later 18th and 19th centuries religion was gradually displaced as the dominant force as European universities became institutions of modern learning and research and were secularized in their curriculum and administration. These trends were typified by the University of Berlin (1809), in which laboratory experimentation replaced conjecture; theological, philosophical, and other traditional doctrines were examined with a new rigour and objectivity; and modern standards of academic freedom were pioneered. The German model of the university as a complex of graduate schools performing advanced research and experimentation proved to have a worldwide influence.

First universities in the Western Hemisphere

The first universities in the Western Hemisphere were established by the Spaniards: the University of Santo Domingo (1538) in what is now the Dominican Republic and the University of Michoacán (1539) in Mexico. The earliest American institutions of higher learning were the four-year colleges of Harvard (1636), William and Mary (1693), Yale (1701), Princeton (1746), and King’s College (1754; now Columbia). Most early American colleges were established by religious denominations, and most eventually evolved into full-fledged universities. One of the oldest universities in Canada is that at Toronto, chartered as King’s College in 1827.

As the frontier of the United States moved westward, hundreds of new colleges were founded. American colleges and universities tended to imitate German models, seeking to combine the Prussian ideal of academic freedom with the native tradition of educational opportunity for the many. The growth of such schools in the United States was greatly spurred by the Morrill Act of 1862, which granted each state tracts of land with which to finance new agricultural and mechanical schools. Many “land-grant colleges” arose from this act, and there developed among these the Massachusetts Institute of Technology (MIT), Cornell University, and the state universities of Illinois, Wisconsin, and Minnesota.

Reorganization, secularization, and modernization from the 19th century

Several European countries in the 19th century reorganized and secularized their universities, notably Italy (1870), Spain (1876), and France (1896). Universities in these and other European countries became mostly state-financed. Women began to be admitted to universities in the second half of the 19th century. Meanwhile, universities’ curricula also continued to evolve. The study of modern languages and literatures was added to, and in many cases supplanted, the traditional study of Latin, Greek, and theology. Such sciences as physics, chemistry, biology, and engineering achieved a recognized place in curricula, and by the early 20th century the newer disciplines of economics, political science, psychology, and sociology were also taught.

In the late 19th and 20th centuries Great Britain and France established universities in many of their colonies in South and Southeast Asia and Africa. Most of the independent countries that emerged from these colonies in the mid-20th century expanded their university systems along the lines of their European or American models, often with major technical and economic assistance from former colonial rulers, industrialized countries, and international agencies such as the World Bank. Universities in Japan, China, and Russia also evolved in response to pressures for modernization. In India some preindependence universities, such as Banaras Hindu University (1916) and Rabindranath Tagore’s Visva-Bharati (1921), were founded as alternatives to Western educational principles. The state universities of Moscow (1755) and St. Petersburg (1819) retained their preeminence in Russia. Tokyo (1877) and Kyōto (1897) universities were the most prestigious ones in Japan, as was Peking University (1898) in China.

Modern universities

Modern universities may be financed by national, state, or provincial governments, or they may depend largely on tuition fees paid by their students. The typical modern residential university may enroll 30,000 or more students and educate both undergraduates and graduate students in the entire range of the arts and humanities, mathematics, the social sciences, the physical, biological, and earth sciences, and various fields of technology. Nonresidential, virtual, and open universities, some of which are modeled after Britain’s Open University (1969), may enroll 200,000 or more students, who pursue both degree-credit and noncredit courses of study. Universities are the main providers of graduate-level training in most professional fields.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1692 2023-03-09 20:32:35

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,078

Re: Miscellany

1595) Open university

Summary

An open university is a university with an open-door academic policy, with minimal or no entry requirements. Open universities may employ specific teaching methods, such as open supported learning or distance education. However, not all open universities focus on distance education, nor do distance-education universities necessarily have open admission policies.

History

A precursor to the open university was the University of London External study system established in 1858; the university was a degree-awarding examination board and welcomed anyone who could meet its entry requirements and pay the requisite fees, including students from anywhere in the world. Participants could continue to earn a living while they studied, could learn in any way they wished, and could sit their examinations without visiting Britain. A similar establishment was the Royal University of Ireland, founded in 1879 as an examining and degree-awarding university based on the University of London model. Examinations were open to external candidates in addition to those who attended lectures at participant colleges; many schools and convents entered their students for both advanced and degree-level examinations. Many of the early graduates were women, as Trinity College Dublin did not admit women until twenty years later.

The University of the Cape of Good Hope, later to become University of South Africa (UNISA), was created in 1873, and had a similar model to the University of London. It had no students, instead setting academic standards and acting as an examination board for associated university colleges. By 1946, these colleges were becoming independent universities, and UNISA began to offer postal tuition. In apartheid South Africa, it offered educational opportunities to all ethnicities, but students had to meet normal matriculation requirements. There was very little student support, and the drop-out rate was high, particularly among black South Africans. By the new millennium, around 400,000 students in 130 countries were taking its courses, and it had become one of the largest distance learning institutions in the world.

In the Soviet Union in the late 1950s, Nikita Khrushchev significantly extended higher education using a system of correspondence courses with part-time education, in which students took part while remaining in the workplace. By 1965, there were 1.7 million students in this part-time/consultation model, 1.6 million full time students, and 0.5 million students taking evening classes. The support given enabled working-class students, at little cost to themselves, to become useful functionaries and members of the Communist party. With the break-up of the Soviet Union in the 1990s, the state no longer had need of the cadre of functionaries and the system collapsed.

The first European open university was the Open University in the United Kingdom, established in 1969. It aimed to widen access to the highest standards of scholarship in higher education and has used a variety of teaching methods, including written, audio, and visual materials, the Internet, disc-based software, and television programmes on DVD. Course-based television broadcasts by the BBC continued until 15 December 2006.

In 1974, Asia's first open university was founded in Islamabad, Pakistan, as the Allama Iqbal Open University (AIOU). Since its inception, it has become Pakistan's largest university by active enrollment, at 1,027,000. AIOU was modeled on the Open University's approach and success. Following this, similar models were implemented in other South Asian countries with the establishment of Indira Gandhi National Open University (1985) and Bangladesh Open University (1992).

The National University of Distance Education (UNED) was established in Spain in 1972. Its distance learning model provided higher education for those who had been excluded from the existing Catholic establishments. It was a national university and had a government-imposed curriculum. The language of instruction was Spanish, and UNED faced hostility from the Basque and Catalan regions. In fact, Catalonia set up its own distance learning centre in 1995, the Open University of Catalonia, with instruction in both languages.

Details

Open University is a British experiment in higher education for adults. It opened in January 1971 with headquarters at the new town of Milton Keynes, Buckinghamshire. There are no academic prerequisites for enrollment in Open University, the aim of which is to extend educational opportunities to all. Courses, centrally organized by a distinguished faculty, are conducted by various means, including television, correspondence, study groups, and residential courses or seminars held at centres scattered throughout Great Britain. The correspondence course, however, is the principal educational technique. Televised lectures and seminars merely supplement it.

Modern distance learning:

Web-based courses

By the beginning of the 21st century, more than half of all two-year and four-year degree-granting institutions of higher education in the United States offered distance education courses, primarily through the Internet. With more than 100,000 different online courses to choose from, about one-quarter of American students took at least one such course each term. Common target populations for distance learning include professionals seeking recertification, workers updating employment skills, individuals with disabilities, and active military personnel.

Although the theoretical trend beginning in the 1990s seemed to be toward a stronger reliance on video, audio, and other multimedia, in practice most successful programs have predominantly utilized electronic texts and simple text-based communications. The reasons for this are partly practical (individual instructors often bear the burden of producing their own multimedia) but also reflect an evolving understanding of the central benefits of distance learning. It is now seen as a way of facilitating communication between teachers and students, as well as between students, by removing the time constraints associated with sharing information in traditional classrooms or during instructors’ office hours. Similarly, self-paced software educational systems, though still used for certain narrow types of training, have limited flexibility in responding and adapting to individual students, who typically demand some interaction with other humans in formal educational settings.

Modern distance learning courses employ Web-based course-management systems that incorporate digital reading materials, podcasts (recorded sessions for electronic listening or viewing at the student’s leisure), e-mail, threaded (linked) discussion forums, chat rooms, and test-taking functionality in virtual (computer-simulated) classrooms. Both proprietary and open-source systems are common. Although most systems are generally asynchronous, allowing students access to most features whenever they wish, synchronous technologies, involving live video, audio, and shared access to electronic documents at scheduled times, are also used. Shared social spaces in the form of blogs, wikis (Web sites that can be modified by all classroom participants), and collaboratively edited documents are also used in educational settings but to a lesser degree than similar spaces available on the Internet for socializing.

Web-based services

Alongside the growth in modern institutional distance learning has come Web-based or facilitated personal educational services, including e-tutoring, e-mentoring, and research assistance. In addition, there are many educational assistance companies that help parents choose and contact local tutors for their children while the companies handle the contracts. The use of distance learning programs and tutoring services has increased particularly among parents who homeschool their children. Many universities have some online tutoring services for remedial help with reading, writing, and basic mathematics, and some even have online mentoring programs to help doctoral candidates through the dissertation process. Finally, many Web-based personal-assistant companies offer a range of services for adults seeking continuing education or professional development.

Open universities

One of the most prominent types of educational institutions that makes use of distance learning is the open university, which is open in the sense that it admits nearly any adult. Since the mid-20th century the open university movement has gained momentum around the world, reflecting a desire for greater access to higher education by various constituencies, including nontraditional students, such as the disabled, military personnel, and prison inmates.

The origin of the movement can be traced to the University of London, which began offering degrees to external students in 1836. This paved the way for the growth of private correspondence colleges that prepared students for the University of London’s examinations and enabled them to study independently for a degree without formally enrolling in the university. In 1946 the University of South Africa, headquartered in Pretoria, began offering correspondence courses, and in 1951 it was reconstituted to provide degree courses for external students only. A proposal in Britain for a “University of the Air” gained support in the early 1960s, which led to the founding of the Open University (chartered in 1969, with its first courses in 1971) in the so-called new town of Milton Keynes. By the end of the 1970s the university had 25,000 students, and it has since grown to annual enrollments in the hundreds of thousands. Open universities have spread across the world and are characterized as “mega-universities” because their enrollments may exceed hundreds of thousands, or even millions, of students in countries such as India, China, and Israel.

As one of the most successful nontraditional institutions with a research component, the Open University is a major contributor to both the administrative and the pedagogical literature in the field of open universities. The university relies heavily on prepared materials and a tutor system. The printed text was originally the principal teaching medium in most Open University courses, but this changed somewhat with the advent of the Internet and computers, which enabled written assignments and materials to be distributed via the Web. For each course, the student is assigned a local tutor, who normally makes contact by telephone, mail, or e-mail to help with queries related to the academic materials. Students may also attend local face-to-face classes run by their tutor, and they may choose to form self-help groups with other students. Tutor-graded assignments and discussion sessions are the core aspects of this educational model. The tutors and interactions between individual students are meant to compensate for the lack of face-to-face lectures in the Open University. To emphasize the tutorial and individualized-learning aspects of its method, the Open University prefers to describe it as “supported open learning” rather than distance learning.

Academic issues and future directions

From the start, correspondence courses acquired a poor academic reputation, especially those provided by for-profit entities. As early as 1926, a study commissioned by the Carnegie Corporation found widespread fraud among correspondence schools in the United States and no adequate standards to protect the public. While the situation later improved with the introduction of accrediting agencies that set standards for the delivery of distance learning programs, there has always been concern about the quality of the learning experience and the verification of student work. Additionally, the introduction of distance learning in traditional institutions raised fears that technology would someday completely eliminate real classrooms and human instructors.

Because many distance learning programs are offered by for-profit institutions, distance learning has become associated with the commercialization of higher education. Critics of this trend point to the potential exploitation of students who do not qualify for admission to traditional colleges and universities, to the temptation in for-profit schools to lower academic standards in order to increase revenue, and to a corporate administrative approach that emphasizes “market models” in educational curricula, designing courses to appeal to a larger demographic in order to generate more institutional revenue. All of these, critics argue, point to a lowering of academic standards.

Distance learning, whether at for-profit universities or at traditional ones, utilizes two basic economic models designed to reduce labour costs. The first model involves the substitution of labour with capital, whereas the second is based on the replacement of faculty with cheaper labour. Proponents of the first model have argued that distance learning offers economies of scale by reducing personnel costs after an initial capital investment for such things as Web servers, electronic texts and multimedia supplements, and Internet programs for interacting with students. However, many institutions that have implemented distance learning programs through traditional faculty and administrative structures have found that ongoing expenses associated with the programs may actually make them more expensive for the institution than traditional courses. The second basic approach, a labour-for-labour model, is to divide the faculty role into the functions of preparation, presentation, and assessment and to assign some of the functions to less-expensive workers. Open universities typically do this by forming committees to design courses and hiring part-time tutors to help struggling students and to grade papers, leaving the actual classroom instruction duties, if any, to the professors. These distance learning models suggest that the largest change in education will come in altered roles for faculty and vastly different student experiences.
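
To make the economies-of-scale reasoning in the first model concrete, here is a minimal break-even sketch in Python. All cost figures are hypothetical and chosen purely to illustrate the arithmetic; they are not drawn from any actual institution.

# Break-even sketch for the labour-for-capital substitution model.
# All figures below are hypothetical illustrations.
capital_investment = 500_000       # one-time cost: servers, electronic texts, software
online_cost_per_student = 150      # ongoing per-student cost of the online course
classroom_cost_per_student = 400   # per-student cost of traditional instruction

# The online course is cheaper overall once
# capital + online_rate * n < classroom_rate * n.
break_even = capital_investment / (classroom_cost_per_student - online_cost_per_student)
print(f"Break-even enrollment: about {break_even:,.0f} students")

Under these made-up figures the program needs about 2,000 enrollments before the initial capital investment pays off, which illustrates why ongoing expenses can make distance programs more expensive than traditional courses at institutions that never reach such scale.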

The emergence of Massive Open Online Courses (MOOCs) in the first and second decades of the 21st century represented a major shift in direction for distance learning. MOOCs are characterized by extremely large enrollments—in the tens of thousands—the use of short videotaped lectures, and peer assessments. The open-online-course format had been used early on by some universities, but it did not become widely popular until the emergence of MOOC providers such as Coursera, edX, Khan Academy, and Udacity. Although the initial purpose of MOOCs was to provide informal learning opportunities, there have been experiments in using this format for degree credit and certifications from universities.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1693 2023-03-10 15:45:14

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,078

Re: Miscellany

1596) Landlocked country

Summary

A landlocked country is a country that does not have territory connected to an ocean or whose coastlines lie on endorheic basins. There are currently 44 landlocked countries and 4 landlocked de facto states. Kazakhstan is the world's largest landlocked country while Ethiopia is the world's most populous landlocked country.

In 1990, there were only 30 landlocked countries in the world. The dissolutions of the Soviet Union and Czechoslovakia, the breakup of Yugoslavia, the independence referendums of South Ossetia (partially recognized), Eritrea, Montenegro, South Sudan, and the Luhansk People's Republic (partially recognized), and the unilateral declaration of independence of Kosovo (partially recognized) created 15 new landlocked countries and 5 partially recognized landlocked states. The formerly landlocked country of Czechoslovakia ceased to exist on 1 January 1993. On 5 October 2022, the Luhansk People's Republic was annexed by Russia and ceased to exist as a de facto landlocked state.

Generally, being landlocked creates some political and economic disadvantages that having access to international waters would avoid. For this reason, nations large and small throughout history have sought to gain access to open waters, even at great expense in wealth, bloodshed, and political capital.

The economic disadvantages of being landlocked can be alleviated or aggravated depending on degree of development, surrounding trade routes and freedom of trade, language barriers, and other considerations. Some landlocked countries in Europe are affluent, such as Andorra, Austria, Liechtenstein, Luxembourg, San Marino, Switzerland, and Vatican City, all of which, excluding Luxembourg (a founding member of NATO), frequently employ neutrality in global political issues. However, 32 of the 44 landlocked countries, including all the landlocked countries in Africa, Asia, and South America, have been classified as Landlocked Developing Countries (LLDCs) by the United Nations. Nine of the twelve countries with the lowest Human Development Index (HDI) scores are landlocked. International initiatives, such as United Nations Sustainable Development Goal 10, aim to substantially reduce the inequalities that result from such disadvantages by 2030.

Landlocked countries by continent

According to the United Nations geoscheme (excluding de facto states), Africa has the most landlocked countries, at 16, followed by Europe (14), Asia (12), and South America (2). However, if Armenia, Artsakh (unrecognized), Azerbaijan, Kazakhstan, and South Ossetia (partially recognized) are counted as parts of Europe, then Europe has the most landlocked countries, at 21 (including all four landlocked de facto states). If these transcontinental or culturally European countries are included in Asia, then both Africa and Europe (including Kosovo and Transnistria) have the most, at 16. Depending on the status of Kazakhstan and the South Caucasian countries, Asia has between 9 and 14 (including Artsakh and South Ossetia). South America has only two landlocked countries.
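
The shifting totals in the preceding paragraph are easier to follow as explicit arithmetic. The short Python sketch below simply re-derives the figures quoted above from the UN geoscheme baseline; it introduces no data beyond what the text states.

# Baseline counts per the UN geoscheme, excluding de facto states.
un_geoscheme = {"Africa": 16, "Europe": 14, "Asia": 12, "South America": 2}

# Scenario A: Armenia, Azerbaijan, and Kazakhstan counted as European, plus
# the four landlocked de facto states (Kosovo, Transnistria, Artsakh, and
# South Ossetia) assigned to Europe.
europe_max = un_geoscheme["Europe"] + 3 + 4   # 21
asia_min = un_geoscheme["Asia"] - 3           # 9

# Scenario B: those three countries stay in Asia, with Artsakh and South
# Ossetia counted there; Europe gains only Kosovo and Transnistria.
asia_max = un_geoscheme["Asia"] + 2           # 14
europe_alt = un_geoscheme["Europe"] + 2       # 16, tying Africa

print(europe_max, asia_min, asia_max, europe_alt)   # 21 9 14 16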

Australia and North America are the only inhabited continents with no landlocked countries. Antarctica is uninhabited and has no countries. Oceania (usually considered a geographical region rather than a continent in English-speaking countries) also has no landlocked countries. Apart from Papua New Guinea, which shares a land border with Indonesia (a transcontinental country), no country in Oceania has a land border.

All landlocked countries besides Bolivia and Paraguay are located in Afro-Eurasia. Though 11 island countries (including Northern Cyprus) share at least one land border with another country, none of them are landlocked.

Details

A landlocked country is an independent sovereign state that does not have direct access to an ocean, such as the Atlantic, or to a sea that is not landlocked, such as the Mediterranean. Countries such as Kazakhstan, in Central Asia, that only have access to a landlocked sea such as the Caspian are considered landlocked. Such inland seas are viewed as large lakes, although this has been a matter of debate.

Characteristics of landlocked countries

There are currently 44 landlocked countries. The largest by area is Kazakhstan, in Central Asia, while the smallest is the Vatican, in Europe, surrounded by the capital of Italy, Rome. The most populous landlocked country is Ethiopia, in Africa, while the least populous one is the Vatican. The latter is one of only three countries, along with Lesotho and San Marino, to be surrounded entirely by another country. Two countries—Liechtenstein and Uzbekistan—are double-landlocked, making them the only ones to be exclusively surrounded by other landlocked countries. The former is located in Europe and is surrounded by Austria and Switzerland, while the latter is in Asia and borders Afghanistan, Kazakhstan, Kyrgyzstan, Tajikistan, and Turkmenistan.

There are also certain territories that are landlocked but are not unanimously recognized as sovereign states. These include South Ossetia, in Georgia; Transdniestria, in Moldova; and Nagorno-Karabakh, surrounded by Azerbaijan.

Economic and security issues of landlocked countries

Being a landlocked country entails certain disadvantages that can curtail national income, such as the lack of seaports, coastal trading points, and a large-scale fishing industry. As a result, 32 of these states are designated as landlocked developing countries (LLDCs) by the United Nations; 17 of them are considered least developed. LLDCs have to cover considerable transport costs for merchandise to be sent to and received from overseas markets, thereby discouraging investment, decreasing their competitive edge, and isolating them from international trade. Seaborne trade must transit through other countries, which often involves dealing with inadequate infrastructure and inconvenient border-crossing procedures. Some LLDCs have nevertheless managed to achieve a positive balance of trade, as exemplified by Zambia with its mining industry.

The obstacles posed by neighbours are exemplified by Burundi’s trade in the early 1990s. Limitations in Tanzania’s transportation infrastructure, strained diplomatic relations with Kenya, and a civil war in Burundi itself led the country to consider the port of Durban in South Africa to export its merchandise. Similarly, Uganda’s exports, heavily reliant on cargo passing through the Kenyan port of Mombasa, were significantly affected by political tension between the two countries in the 2000s. Neighbours can also be essential trade partners for landlocked countries, as illustrated by Bhutan, whose products are bought primarily by India, and by Mongolia, which exports mainly to China.

The economic issues experienced by LLDCs seem like a distant reality for European landlocked countries, such as Luxembourg, Austria, or Switzerland, whose economies are more diversified, less dependent on seaborne trade, and surrounded by more economically developed markets. Their relatively short distance from the sea also contributes to their wealth. Moreover, as noted by British economist Paul Collier, such countries can benefit from cross-border spillover, whereby a landlocked country draws on the resources of wealthy neighbours, as exemplified by microstates such as Liechtenstein and Andorra, and by larger states like Switzerland, which borders high-income countries such as Germany, France, and Italy.

Several countries became landlocked after wars or as a result of independence movements. Examples include Serbia, after Montenegro’s independence; South Sudan, upon seceding from Sudan; and Ethiopia, after Eritrea became independent in 1993. Certain landlocked countries have attempted to regain access to the sea. Bolivia, for instance, has struggled with Chile to recover some 250 miles (400 km) of coastline lost during the War of the Pacific (1879–83). While Chile does allow Bolivia to use the ports of Antofagasta and Arica to export raw materials and commodities, greater opportunities to trade are sought by the landlocked country. Other nations have sought access to the sea by means of joint infrastructure projects with their neighbours. This is exemplified by the Trans-Afghan Railway project, which would span Uzbekistan and Afghanistan before connecting to Pakistan’s domestic rail network, ultimately reaching ports such as Karachi and Gwadar.

Perhaps surprisingly, some landlocked countries maintain navies as a matter of national pride as well as security. These are typically “brown water” forces such as those employed by Paraguay on the Paraguay and Paraná rivers, by Serbia on the Danube, and by Rwanda on Lake Kivu. These “brown water” navies are largely composed of small shallow-draft patrol boats crewed by a handful of sailors or marines. Azerbaijan and Kazakhstan, however, operate ships as large as frigates on the Caspian Sea. Arguably the most conspicuous landlocked navy belongs to Bolivia. Every March, the country marks the Day of the Sea as a national holiday to commemorate the loss of its Pacific coast, and the Bolivian navy is maintained with the belief that, one day, Bolivia will regain access to the sea. Bolivia’s 5,000 sailors and naval infantry train on Lake Titicaca in an effort to preserve maritime readiness.

Economic diversification, enhanced infrastructure, access to reliable electricity, enhanced licensing processes, and trade facilitation measures are among the initiatives suggested by the United Nations in order to improve the status and wealth of LLDCs. More specifically, Collier suggests increasing spillovers by promoting regional trade, developing transport corridors, attracting aid from donors, fostering greater transparency to attract foreign investment, encouraging migration and remittances, and harnessing emerging technologies.

[Image: map of the Landlocked Developing Countries (LLDCs)]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1694 2023-03-11 15:29:52

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,078

Re: Miscellany

1597) International Finance Corporation

Summary

The International Finance Corporation (IFC) is an international financial institution that offers investment, advisory, and asset-management services to encourage private-sector development in less developed countries. The IFC is a member of the World Bank Group and is headquartered in Washington, D.C. in the United States.

It was established in 1956, as the private-sector arm of the World Bank Group, to advance economic development by investing in for-profit and commercial projects for poverty reduction and promoting development. The IFC's stated aim is to create opportunities for people to escape poverty and achieve better living standards by mobilizing financial resources for private enterprise, promoting accessible and competitive markets, supporting businesses and other private-sector entities, and creating jobs and delivering necessary services to those who are poverty stricken or otherwise vulnerable.

Since 2009, the IFC has focused on a set of development goals that its projects are expected to target. Its goals are to increase sustainable agriculture opportunities, improve healthcare and education, increase access to financing for microfinance and business clients, advance infrastructure, help small businesses grow revenues, and invest in climate health.

The IFC is owned and governed by its member countries but has its own executive leadership and staff that conduct its normal business operations. It is a corporation whose shareholders are member governments that provide paid-in capital and have the right to vote on its matters. Originally, it was more financially integrated with the World Bank Group, but later, the IFC was established separately and eventually became authorized to operate as a financially autonomous entity and make independent investment decisions.

It offers an array of debt and equity financing services and helps companies manage their risk exposures, while refraining from participating in a management capacity. The corporation also offers advice to companies on making decisions, evaluating their impact on the environment and society, and being responsible. It advises governments on building infrastructure and partnerships to further support private sector development.

The corporation is assessed by an independent evaluator each year. In 2011, its evaluation report recognized that its investments performed well and reduced poverty, but recommended that the corporation define poverty and expected outcomes more explicitly to better understand its effectiveness and approach poverty reduction more strategically. The corporation's total investments in 2011 amounted to $18.66 billion. It committed $820 million to advisory services for 642 projects in 2011, and held $24.5 billion worth of liquid assets. The IFC is in good financial standing and received the highest ratings from two independent credit rating agencies in 2018.

The IFC comes under frequent criticism from NGOs for being unable to track its money because of its use of financial intermediaries. For example, a 2015 report by Oxfam International and other NGOs, "The Suffering of Others," found that the IFC was not performing adequate due diligence or managing risk in many of its investments in third-party lenders.

Other criticism focuses on the IFC working excessively with large companies or wealthy individuals who are already able to finance their investments without help from public institutions, arguing that such investments lack an adequate positive development impact. An example often cited by NGOs and critical journalists is the IFC granting financing to a Saudi prince for a five-star hotel in Ghana.

Details

International Finance Corporation (IFC) is a United Nations (UN) specialized agency affiliated with but legally separate from the International Bank for Reconstruction and Development (World Bank). Founded in 1956 to stimulate the economic development of its members by providing capital for private enterprises, the IFC has targeted its aid toward less-developed countries and has been their largest multilateral source of private-sector equity financing and loans. The IFC is headed by a president, who also serves as president of the World Bank; governors and executive directors of the World Bank also serve at the IFC, though it has its own operational and legal staff. Headquartered in Washington, D.C., its original membership of 31 had grown to about 175 by the beginning of the 21st century.

In financing private enterprises, the IFC makes loans without government guarantee of repayment. Unlike most other organizations of its kind, the IFC cannot stipulate how the proceeds of its loans will be spent. The IFC seeks to diversify its investments, having funded projects in the fields of tourism development, animal feeds, iron and steel, fertilizers, and textiles. Its primary activities include providing direct project financing and technical advice and assistance, mobilizing resources by acting as a catalyst for private investment, and underwriting investment funds.

The IFC operates on a weighted-voting system based on members’ subscription shares, with the United States exercising about 25 percent of the total votes—quadruple that of Japan, the second largest shareholder. After the end of the Cold War, demand for IFC loans increased among countries in eastern Europe and among the former republics of the Soviet Union. In the late 1990s the IFC began considering institutional and procedural reforms, including public disclosure, and devoted more attention to the environmental and social impact of its aid.
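
As an aside, share-weighted voting of the kind described above is straightforward to model. In the Python sketch below, the subscription figures are placeholders chosen only so that the United States holds about 25 percent of the vote and roughly four times Japan's share, as stated above; the actual IFC share register is more complex and is not reproduced here.

# Share-weighted voting: each member's vote share is proportional to its
# capital subscription. The subscription figures are placeholders.
subscriptions = {"United States": 2_500, "Japan": 625, "All other members": 6_875}

total = sum(subscriptions.values())
for member, capital in subscriptions.items():
    print(f"{member}: {capital / total:.1%} of total votes")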

Between 1956 and the beginning of the 21st century, the IFC provided more than $25 billion to fund projects in nearly 125 countries and arranged for nearly $18 billion in additional financing. In 2000 alone the IFC invested more than $4 billion for 250 projects in nearly 80 countries.

Additional Information

IFC is the largest global development institution focused on the private sector in developing countries.

IFC, a member of the World Bank Group, advances economic development and improves the lives of people by encouraging the growth of the private sector in developing countries. We apply our financial resources, technical expertise, global experience, and innovative thinking to help our partners overcome financial, operational, and other challenges.

We achieve this by investing in impactful projects, mobilizing other investors, and sharing expertise. In doing so, we create jobs and raise living standards, especially for the poor and vulnerable. Our work supports the World Bank Group’s twin goals of ending extreme poverty and boosting shared prosperity.

The International Finance Corporation (IFC), headquartered in the United States of America, is an international organization with a strong global presence and a focus on development, primarily in the private sector. Established in the 1950s, IFC works in over 100 developing countries through the private sector, with a special focus on infrastructure, manufacturing, agribusiness, services, and financial markets. The climate investment portfolio of IFC has reached US$13 billion, with a track record in wind and solar projects globally. Its experience in leveraging, mobilizing, and intermediating climate funds and programmes for green growth has allowed it to help unlock private climate investment using blended finance. In addition to investments in climate projects, IFC also provides technical assistance or advisory services to private and public sector clients to promote sound environmental, social, governance, and industry standards; catalyse investment in clean energy and resource efficiency; and support sustainable supply chains and community investment. With its experience in investing, mobilizing, and intermediating climate finance to promote private sector projects at scale for both mitigation and adaptation in developing countries, IFC sought accreditation to the GCF to contribute its experience and capacity to support the mandate of the GCF to promote a paradigm shift towards low-emission and climate-resilient development.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1695 2023-03-12 20:07:38

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,078

Re: Miscellany

1598) Hall

Summary

A hall is a meeting place, entry, or passageway, ranging in size from a large reception room in a public building to a corridor or vestibule of a house. For the feudal society of medieval Europe, the hall was the centre of all secular activities. Originally it was used by large groups of people for cooking and sleeping, as well as for the activities it still shelters when it is used as courtroom, banquet room, or place of entertainment.

Beginning as a rectangular barnlike structure, the hall probably evolved from the prehistoric wood-framed dwellings of northern Europe. Early examples had much in common with contemporary churches, employing a rhythmic structural system of three or more bays. The larger halls were divided by two rows of posts or stone columns into a nave and aisles. The rough stones of the fireplace were set near the centre of an earth floor strewn with a layer of rushes to provide insulation. Smoke found its way out through the open roof framing at the gable ends or by means of a louver, near the centre of the ridgepole, protected by a wooden turret. The doors were opposite the end of the building reserved for the lord and his family. Eventually this area was distinguished by a low platform or dais, and a partial ceiling was constructed between it and the end wall behind it to form a canopy overhead. Dating from the 12th century, the remains of the Bishop’s Palace at Hereford and the timber roof at Leicester Castle are probably the oldest surviving fragments of gabled feudal halls.

As a defense against marauders, halls were typically placed to take advantage of terrain and were often protected by moats or palisades. In Norman castles and English border fortresses the hall was part of the principal stone tower built over a vaulted storage room with wooden beams supporting rooms above. Until the 14th century the medieval town house consisted of an undivided all-purpose living room, or hall, over a street-level shop area. In the country the hall began to evolve into the manor house in the 13th century as smaller rooms were added at the ends of the great central space. A low structure was built against the end wall for cooking and storage of supplies.

A centre door leading to the kitchen was flanked by the hatches or doors to the pantry and buttery. As the outside doors were placed opposite each other in the long walls at this end, a passageway was formed which was provided with porches and wooden screens to protect the rest of the hall from drafts. Behind the dais a two-story structure was annexed with a solar or private room over a storage basement accessible from it. The solar room was entered from an outside ladder or stair and communicated with the hall by means of a window or peepholes. Later, more secure conditions and the desire for privacy and for more easily heated rooms led to the development of living quarters on the lower floor, with entrances directly into the hall. As the end structures were extended, they were linked with scattered service buildings and the gatehouse to form courts on one or both of the long sides of the hall.

From the 14th century, halls were built with uninterrupted interiors spanned by great timber roofs. The aisled type was retained in monastic hospitals where it was convenient to continue to place beds in the side bays. At Westminster Hall the Norman interior supports were removed and a hammer-beam roof installed. A series of halls in northwestern England retained only the pair of columns nearest the doors to support a great wooden arch and light wooden screen walls blocking the aisles. A large freestanding screen like that at Rufford Old Hall provided further protection from drafts. The typical 15th- or 16th-century hall was entered through doors in a screen structure that terminated in the ornamented parapet of a musicians’ gallery installed over the low passageway ceiling. The large fireplace and its chimney were built into a side wall. The dais was extended at one or both ends to provide a large bay which, from the exterior, appeared to balance the porch. It had full-length mullioned windows supplementing the traditional openings high in the side or end walls.

With the development of the separate dining room and the decline of the old social order at the end of the Middle Ages began the descent of the hall in domestic architecture to its present status of entrance and passageway. However, towns, guilds, colleges, and other organizations built halls rivaling those of the barons. The names of many public buildings reflect the fact that a ceremonial reception room is their major feature.

Details

In architecture, a hall is a relatively large space enclosed by a roof and walls. In the Iron Age and early Middle Ages in northern Europe, a mead hall was where a lord and his retainers ate and also slept. Later in the Middle Ages, the great hall was the largest room in castles and large houses, and where the servants usually slept. As more complex house plans developed, the hall remained a large room for dancing and large feasts, often still with servants sleeping there. It was usually immediately inside the main door. In modern British houses, an entrance hall next to the front door remains an indispensable feature, even if it is essentially merely a corridor.

Today, the (entrance) hall of a house is the space next to the front door or vestibule, leading directly or indirectly to the rooms. Where the hall inside the front door of a house is elongated, it may be called a passage, corridor (from the Spanish corredor, used in El Escorial and 100 years later in Castle Howard), or hallway.

History

In warmer climates, the houses of the wealthy were often built around a courtyard, but in northern areas manors were built around a great hall. The hall was home to the hearth and was where all the residents of the house would eat, work, and sleep. One common example of this form is the longhouse. Only particularly messy tasks would be done in separate rooms on the periphery of the hall. Still today the term hall is often used to designate a country house such as a hall house, or specifically a Wealden hall house, and manor houses.

In later medieval Europe, the main room of a castle or manor house was the great hall. In a medieval building, the hall was where the fire was kept. As heating technology improved and a desire for privacy grew, tasks moved from the hall to other rooms. First, the master of the house withdrew to private bedrooms and eating areas. Over time servants and children also moved to their own areas, while work projects were also given their own chambers leaving the hall for special functions. With time, its functions as dormitory, kitchen, parlour, and so on were divided into separate rooms or, in the case of the kitchen, a separate building.

Until the early modern era, the majority of the population lived in houses with a single room. In the 17th century, even the lower classes began to have a second room, with the main chamber being the hall and the secondary room the parlor. The hall and parlor house was found in England and was a fundamental historical floor plan in parts of the United States from 1620 to 1860.

In Europe, as the wealthy embraced multiple rooms, the common form was initially the enfilade, with rooms directly connecting to each other. In 1597, John Thorpe became the first recorded architect to replace multiple connected rooms with rooms along a corridor, each accessed by a separate door.

Other uses

Collegiate halls

Many institutions and buildings at colleges and universities are formally titled "_______ Hall", typically being named after the person who endowed it, for example, King's Hall, Cambridge. Others, such as Lady Margaret Hall, Oxford, commemorate respected people. Between these in age, Nassau Hall at Princeton University began as the single building of the then college. In medieval origin, these were the halls in which the members of the university lived together during term time. In many cases, some aspect of this community remains.

Some of these institutions are titled "Hall" instead of "College" because at the time of their foundation they were not recognised as colleges (in some cases because their foundation predated the existence of colleges) and did not have the appropriate Royal Charter. Examples at the University of Oxford are:

* St Edmund Hall
* Hart Hall (now Hertford College)
* Lady Margaret Hall
* The (currently six) Permanent private halls.

In colleges of the universities of Oxford and Cambridge, the term "Hall" is also used for the dining hall for students, with High Table at one end for fellows. Typically, at "Formal Hall", gowns are worn for dinner during the evening, whereas for "informal Hall" they are not. The medieval collegiate dining hall, with a dais for the high table at the upper end and a screen passage at the lower end, is a modified or assimilated form of the Great hall.

Meeting hall

A hall is also a building consisting largely of a principal room, that is rented out for meetings and social affairs. It may be privately or government-owned, such as a function hall owned by one company used for weddings and cotillions (organized and run by the same company on a contractual basis) or a community hall available for rent to anyone, such as a British village hall.

Religious halls

In religious architecture, as in Islamic architecture, the prayer hall is a large room dedicated to the practice of worship. (example: the prayer hall of the Great Mosque of Kairouan in Tunisia). A hall church is a church with a nave and side aisles of approximately equal height. Many churches have an associated church hall used for meetings and other events.

Public buildings

Following a line of similar development, in office buildings and larger buildings (theatres, cinemas etc.), the entrance hall is generally known as the foyer (the French for fireplace). The atrium, a name sometimes used in public buildings for the entrance hall, was the central courtyard of a Roman house.

Types

In architecture, the term "double-loaded" describes corridors that connect to rooms on both sides. Conversely, a single-loaded corridor has rooms on only one side (and possibly windows on the other). A blind corridor does not lead anywhere.

* Billiard hall
* City hall, town hall or village hall
* Concert hall
* Concourse (at a large transportation station)
* Convention center (exhibition hall)
* Dance hall
* Dining hall
* Firehall
* Great room or great hall
* Moot hall
* Prayer hall, such as the sanctuary of a synagogue
* Reading room
* Residence hall
* Trades hall (also called union hall, labour hall, etc.)
* Waiting room (in large transportation stations).



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1696 2023-03-13 21:06:20

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,078

Re: Miscellany

1599) Port

A port is a maritime facility comprising one or more wharves or loading areas, where ships load and discharge cargo and passengers. Although usually situated on a sea coast or estuary, ports can also be found far inland, such as Hamburg, Manchester and Duluth; these access the sea via rivers or canals. Because of their roles as ports of entry for immigrants as well as soldiers in wartime, many port cities have experienced dramatic multi-ethnic and multicultural changes throughout their histories.

Ports are extremely important to the global economy; 70% of global merchandise trade by value passes through a port. For this reason, ports are also often densely populated settlements that provide the labor for processing and handling goods and related services for the ports. Today by far the greatest growth in port development is in Asia, the continent with some of the world's largest and busiest ports, such as Singapore and the Chinese ports of Shanghai and Ningbo-Zhoushan. As of 2020, the busiest passenger port in Europe is the Port of Helsinki in Finland. Nevertheless, countless smaller ports do exist that may only serve their local tourism or fishing industries.

Ports can have a wide environmental impact on local ecologies and waterways, most importantly water quality, which can be caused by dredging, spills and other pollution. Ports are heavily affected by changing environmental factors caused by climate change as most port infrastructure is extremely vulnerable to sea level rise and coastal flooding. Internationally, global ports are beginning to identify ways to improve coastal management practices and integrate climate change adaptation practices into their construction.

Historical ports

Wherever ancient civilisations engaged in maritime trade, they tended to develop sea ports. One of the world's oldest known artificial harbors is at Wadi al-Jarf on the Red Sea. Along with the finding of harbor structures, ancient anchors have also been found.

Other ancient ports include Guangzhou during Qin Dynasty China and Canopus, the principal Egyptian port for Greek trade before the foundation of Alexandria. In ancient Greece, Athens' port of Piraeus was the base for the Athenian fleet which played a crucial role in the Battle of Salamis against the Persians in 480 BCE. In ancient India from 3700 BCE, Lothal was a prominent city of the Indus valley civilisation, located in the Bhal region of the modern state of Gujarāt. Ostia Antica was the port of ancient Rome with Portus established by Claudius and enlarged by Trajan to supplement the nearby port of Ostia. In Japan, during the Edo period, the island of Dejima was the only port open for trade with Europe and received only a single Dutch ship per year, whereas Osaka was the largest domestic port and the main trade hub for rice.

Post-classical Swahili kingdoms are known to have had trade port islands and trade routes with the Islamic world and Asia. They were described by Greek historians as "metropolises". Famous African trade ports such as Mombasa, Zanzibar, Mogadishu and Kilwa were known to Chinese sailors such as Zheng He and medieval Islamic historians such as the Berber Islamic voyager Abu Abdullah ibn Battuta.

Many of these ancient sites no longer exist or function as modern ports. Even in more recent times, ports sometimes fall out of use. Rye, East Sussex, was an important English port in the Middle Ages, but the coastline changed and it is now 2 miles (3.2 km) from the sea, while the ports of Ravenspurn and Dunwich have been lost to coastal erosion.

Modern ports

Whereas early ports tended to be just simple harbours, modern ports tend to be multimodal distribution hubs, with transport links using sea, river, canal, road, rail and air routes. Successful ports are located to optimize access to an active hinterland, such as the London Gateway. Ideally, a port will grant easy navigation to ships, and will give shelter from wind and waves. Ports are often on estuaries, where the water may be shallow and may need regular dredging. Deep water ports such as Milford Haven are less common, but can handle larger ships with a greater draft, such as super tankers, Post-Panamax vessels and large container ships. Other businesses such as regional distribution centres, warehouses and freight-forwarders, canneries and other processing facilities find it advantageous to be located within a port or nearby. Modern ports will have specialised cargo-handling equipment, such as gantry cranes, reach stackers and forklift trucks.

Ports usually have specialised functions: some tend to cater mainly for passenger ferries and cruise ships; some specialise in container traffic or general cargo; and some ports play an important military role for their nation's navy. Some third world countries and small islands such as Ascension and St Helena still have limited port facilities, so that ships must anchor off while their cargo and passengers are taken ashore by barge or launch (respectively).

In modern times, ports survive or decline, depending on current economic trends. In the UK, both the ports of Liverpool and Southampton were once significant in the transatlantic passenger liner business. Once airliner traffic decimated that trade, both ports diversified to container cargo and cruise ships. Up until the 1950s the Port of London was a major international port on the River Thames, but changes in shipping and the use of containers and larger ships have led to its decline. Thamesport, a small semi-automated container port (with links to the Port of Felixstowe, the UK's largest container port) thrived for some years, but has been hit hard by competition from the emergent London Gateway port and logistics hub.

In mainland Europe, it is normal for ports to be publicly owned, so that, for instance, the ports of Rotterdam and Amsterdam are owned partly by the state and partly by the cities themselves.

Even though modern ships tend to have bow-thrusters and stern-thrusters, many port authorities still require vessels to use pilots and tugboats for manoeuvering large ships in tight quarters. For instance, ships approaching the Belgian port of Antwerp, an inland port on the River Scheldt, are obliged to use Dutch pilots when navigating on that part of the estuary that belongs to the Netherlands.

Ports with international traffic have customs facilities.

Types

The terms "port" and "seaport" are used for different types of facilities handling ocean-going vessels, and river port is used for river traffic, such as barges and other shallow-draft vessels.

Seaport

A seaport is a port located on the shore of a sea or ocean. It is further categorized as a "cruise port" or a "cargo port". Cruise ports are further divided into "home ports" and "ports of call". Cargo ports are further categorized as "bulk" or "break-bulk ports" or as "container ports".

Cargo port

Cargo ports are quite different from cruise ports, because each handles very different cargo, which has to be loaded and unloaded by a variety of mechanical means.

Bulk cargo ports may handle one particular type of cargo or numerous cargoes, such as grains, liquid fuels, liquid chemicals, wood, automobiles, etc. Such ports are known as the "bulk" or "break bulk ports".

Ports that handle containerized cargo are known as container ports.

Most cargo ports handle all sorts of cargo, but some ports are very specific as to what cargo they handle. Additionally, individual cargo ports may be divided into different operating terminals which handle the different types of cargoes, and may be operated by different companies, also known as terminal operators, or stevedores.

Cruise home port

A cruise home port is the port where cruise ship passengers board (embark) to start their cruise and disembark at the end of it. It is also where the ship's supplies are loaded, which includes everything from fresh water and fuel to fruits, vegetables, champagne, and any other supplies needed for the voyage. Cruise home ports are very busy places on the day a cruise ship is in port, because departing passengers and their baggage leave the ship and oncoming passengers board, in addition to all the supplies being loaded. Cruise home ports tend to have large passenger terminals to handle the large number of passengers passing through the port. The busiest cruise home port in the world is the Port of Miami, Florida.

Smart port

A smart port uses technologies, including the Internet of Things (IoT) and artificial intelligence (AI), to be more efficient at handling goods. Smart ports usually deploy cloud-based software as part of a broader move toward automation, helping to generate the operating flow that keeps the port working smoothly. At present, most of the world's ports have adopted technology to some degree, even if few lead in full automation. Thanks to government initiatives and the rapid growth of maritime trade, however, the number of smart ports has gradually increased; a report by the business intelligence provider Visiongain assessed that smart-port market spending would reach $1.5 bn in 2019.
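
As a rough illustration of the kind of rule such software automates, here is a minimal sketch in Python. The berth names, baseline, and tolerance are hypothetical values invented for this example, not any real port platform's API; it flags berths whose average container dwell time has drifted above a baseline:

    # Minimal sketch of a smart-port telemetry check (illustrative only).
    # Berth ids, the baseline, and the tolerance are hypothetical values.
    from statistics import mean

    # Hours each container spent at a berth, keyed by berth id.
    dwell_hours = {
        "berth-1": [18.0, 22.5, 19.8, 40.2],
        "berth-2": [12.1, 11.4, 13.0, 12.7],
    }

    BASELINE_HOURS = 20.0  # assumed target dwell time
    TOLERANCE = 1.25       # flag berths running 25% over baseline

    def flag_congested(berths):
        """Return ids of berths whose mean dwell time exceeds the tolerance band."""
        return [bid for bid, hours in berths.items()
                if mean(hours) > BASELINE_HOURS * TOLERANCE]

    print(flag_congested(dwell_hours))  # -> ['berth-1']

A production system would draw these numbers from live sensor feeds rather than a literal dictionary, but the decision logic a smart port automates is of this general shape.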

Port of call

A port of call is an intermediate stop for a ship on its sailing itinerary. At these ports, cargo ships may take on supplies or fuel, as well as unload and load cargo, while cruise liners embark or disembark passengers.

Fishing port

A fishing port is a port or harbor for landing and distributing fish. It may be a recreational facility, but it is usually commercial. A fishing port is the only port that depends on an ocean product, and depletion of fish may cause a fishing port to be uneconomical.

Inland port

An inland port is a port on a navigable lake, river (fluvial port), or canal with access to a sea or ocean, which therefore allows a ship to sail from the ocean inland to the port to load or unload its cargo. An example of this is the St. Lawrence Seaway which allows ships to travel from the Atlantic Ocean several thousand kilometers inland to Great Lakes ports like Toronto, Duluth-Superior, and Chicago. The term "inland port" is also used for dry ports.

Warm-water port

A warm-water port (also known as an ice-free port) is one where the water does not freeze in winter. The term is used mainly for countries with cold winters where parts of the coastline freeze over every year. Because they are available year-round, warm-water ports can be of great geopolitical or economic interest. Settlements such as Dalian in China, Murmansk, Novorossiysk, Petropavlovsk-Kamchatsky and Vostochny Port in Russia, Odesa in Ukraine, Kushiro in Japan and Valdez at the terminus of the Alaska Pipeline owe their very existence to being ice-free ports. Since the 20th century, icebreakers have made ports on the Baltic Sea and in similar areas available year-round, but earlier access problems were among the reasons Russia expanded its territory toward the Black Sea.

Dry port

A dry port is an inland intermodal terminal directly connected by road or rail to a seaport and operating as a centre for the transshipment of sea cargo to inland destinations.

Environmental issues
   
Ports and their operation are often both a cause of environmental issues, such as sediment contamination and spills from ships, and susceptible to larger environmental issues, such as human-caused climate change and its effects.

Dredging

Every year 100 million cubic metres of marine sediment are dredged to improve waterways around ports. Dredging, in its practice, disturbs local ecosystems, brings sediments into the water column, and can stir up pollutants captured in the sediments.

Invasive species

Invasive species are often spread in ships' ballast water and by species attached to ships' hulls. It is estimated that over 7,000 invasive species are transported around the world in ballast water on a daily basis. Invasive species can have direct or indirect interactions with native sea life. A direct interaction, such as predation, occurs when a native species with no natural predator suddenly becomes the prey of an invasive species. Indirect interactions include diseases or other health conditions brought by invasive species.

Air pollution

Ports are also a source of increased air pollution both because of the ships and land transportation at the port. Transportation corridors around ports have higher exhaust and emissions and this can have related health effects on the local communities.

Water quality

Water quality around ports is often lower because of both direct and indirect pollution from the shipping, and other challenges caused by the port's community, such as trash washing into the ocean.

Spills, pollution and contamination

Sewage from ships, and leaks of oil and chemicals from shipping vessels can contaminate local water, and cause other effects like nutrient pollution in the water.

Climate change and sea level rise

Ports and their infrastructure are very vulnerable to climate change and sea level rise, because many of them are in low-lying areas designed for status quo water levels. Variable weather, coastal erosion, and sea level rise all put pressure on existing infrastructure, resulting in subsidence, coastal flooding and other direct pressures on the port.

Reducing impact

There are several initiatives to decrease negative environmental impacts of ports. The World Port Sustainability Program points to all of the Sustainable Development Goals as potential ways of addressing port sustainability. These include SIMPYC, the World Ports Climate Initiative, the African Green Port Initiative, EcoPorts and Green Marine.

World's major ports

Africa

The port of Tangier Med is the largest port on the Mediterranean and in Africa by capacity and went into service in July 2007.

The busiest port in Africa is Port Said in Egypt.

Asia

The port of Shanghai is the largest port in the world in both cargo tonnage and activity. It regained its position as the world's busiest port by cargo tonnage and the world's busiest container port in 2009 and 2010, respectively. It is followed by the ports of Singapore, Hong Kong and Kaohsiung, Taiwan, all of which are in East and Southeast Asia.

The port of Singapore is the world's second-busiest port in terms of total shipping tonnage; it also transships a third of the world's shipping containers and half of the world's annual supply of crude oil, and is the world's busiest transshipment port.

Europe

Europe's busiest container port and biggest port by cargo tonnage by far is the Port of Rotterdam, in the Netherlands. It is followed by the Belgian Port of Antwerp or the German Port of Hamburg, depending on which metric is used. In turn, the Spanish Port of Valencia is the busiest port in the Mediterranean basin.

North America

The largest ports include the Port of New York and New Jersey, Los Angeles, and South Louisiana in the U.S., Manzanillo in Mexico, and Vancouver in Canada. Panama also has the Panama Canal, which connects the Pacific and Atlantic Oceans and is a key conduit for international trade.

Oceania

The largest port in Australia is the Port of Melbourne.

South America

According to ECLAC's "Maritime and Logistics Profile of Latin America and the Caribbean", the largest ports in South America are the Port of Santos in Brazil, Cartagena in Colombia, Callao in Peru, Guayaquil in Ecuador, and the Port of Buenos Aires in Argentina.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1697 2023-03-14 20:47:23

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,078

Re: Miscellany

1600) Jurassic

Summary

The Jurassic is a geologic period and stratigraphic system that spanned from the end of the Triassic Period 201.4 million years ago (Mya) to the beginning of the Cretaceous Period, approximately 145 Mya. The Jurassic constitutes the middle period of the Mesozoic Era and is named after the Jura Mountains, where limestone strata from the period were first identified.

The start of the Jurassic was marked by the major Triassic–Jurassic extinction event, associated with the eruption of the Central Atlantic Magmatic Province. The Toarcian Stage began around 183 million years ago and is marked by the Toarcian Oceanic Anoxic Event, a global episode of oceanic anoxia, ocean acidification, and elevated temperatures associated with extinctions, likely caused by the eruption of the Karoo-Ferrar large igneous provinces. The end of the Jurassic, however, has no clear boundary with the Cretaceous and is the only boundary between geological periods to remain formally undefined.

By the beginning of the Jurassic, the supercontinent Pangaea had begun rifting into two landmasses: Laurasia to the north and Gondwana to the south. The climate of the Jurassic was warmer than the present, and there were no ice caps. Forests grew close to the poles, with large arid expanses in the lower latitudes.

On land, the fauna transitioned from the Triassic fauna, dominated jointly by dinosauromorph and pseudosuchian archosaurs, to one dominated by dinosaurs alone. The first birds appeared during the Jurassic, evolving from a branch of theropod dinosaurs. Other major events include the appearance of the earliest lizards and the evolution of therian mammals. Crocodylomorphs made the transition from a terrestrial to an aquatic life. The oceans were inhabited by marine reptiles such as ichthyosaurs and plesiosaurs, while pterosaurs were the dominant flying vertebrates. Sharks, rays and crabs also made their first appearance during the period.

Geology

The Jurassic Period is divided into three epochs: Early, Middle, and Late. Similarly, in stratigraphy, the Jurassic is divided into the Lower Jurassic, Middle Jurassic, and Upper Jurassic series. Geologists divide the rocks of the Jurassic into a stratigraphic set of units called stages, each formed during corresponding time intervals called ages.

Stages can be defined globally or regionally. For global stratigraphic correlation, the International Commission on Stratigraphy (ICS) ratifies global stages based on a Global Boundary Stratotype Section and Point (GSSP) from a single formation (a stratotype) identifying the lower boundary of the stage.

Details

The Jurassic Period is the second of the three periods of the Mesozoic Era. Extending from 201.3 million to 145 million years ago, it immediately followed the Triassic Period (251.9 million to 201.3 million years ago) and was succeeded by the Cretaceous Period (145 million to 66 million years ago). The Morrison Formation of the United States and the Solnhofen Limestone of Germany, both famous for their exceptionally well-preserved fossils, are geologic features that were formed during Jurassic times.

The Jurassic was a time of significant global change in continental configurations, oceanographic patterns, and biological systems. During this period the supercontinent Pangea split apart, allowing for the eventual development of what are now the central Atlantic Ocean and the Gulf of Mexico. Heightened plate tectonic movement led to significant volcanic activity, mountain-building events, and attachment of islands onto continents. Shallow seaways covered many continents, and marine and marginal marine sediments were deposited, preserving a diverse set of fossils. Rock strata laid down during the Jurassic Period have yielded gold, coal, petroleum, and other natural resources.

During the Early Jurassic, animals and plants living both on land and in the seas recovered from one of the largest mass extinctions in Earth history. Many groups of vertebrate and invertebrate organisms important in the modern world made their first appearance during the Jurassic. Life was especially diverse in the oceans—thriving reef ecosystems, shallow-water invertebrate communities, and large swimming predators, including reptiles and squidlike animals. On land, dinosaurs and flying pterosaurs dominated the ecosystems, and birds made their first appearance. Early mammals also were present, though they were still fairly insignificant. Insect populations were diverse, and plants were dominated by the gymnosperms, or “naked-seed” plants.

The Jurassic Period was named early in the 19th century by the French geologist and mineralogist Alexandre Brongniart for the Jura Mountains between France and Switzerland. Much of geologists' initial work in trying to correlate rocks and develop a relative geologic time scale was conducted on Jurassic strata in western Europe.

Additional Information

Dinosaurs, birds, and rodents. Crumbling landmasses and inland seas. Sea monsters, sharks, and blood-red plankton. Forests of ferns, cycads, and conifers. Warm, moist, tropical breezes. This was the Jurassic, which took place 199 to 145 million years ago.

A Shifting Climate and Developing Oceans

At the start of the period, the breakup of the supercontinent Pangaea continued and accelerated. Laurasia, the northern half, broke up into North America and Eurasia. Gondwana, the southern half, began to break up by the mid-Jurassic. The eastern portion—Antarctica, Madagascar, India, and Australia—split from the western half, Africa and South America. New oceans flooded the spaces in between. Mountains rose on the seafloor, pushing sea levels higher and onto the continents.

All this water gave the previously hot and dry climate a humid and drippy subtropical feel. Dry deserts slowly took on a greener hue. Palm tree-like cycads were abundant, as were conifers such as araucaria and pines. Ginkgoes carpeted the mid- to high northern latitudes, and podocarps, a type of conifer, were particularly successful south of the Equator. Tree ferns were also present.

The oceans, especially the newly formed shallow interior seas, teemed with diverse and abundant life. At the top of the food chain were the long-necked and paddle-finned plesiosaurs, giant marine crocodiles, sharks, and rays. Fishlike ichthyosaurs, squidlike cephalopods, and coil-shelled ammonites were abundant. Coral reefs grew in the warm waters, and sponges, snails, and mollusks flourished. Microscopic, free-floating plankton proliferated and may have turned parts of the ocean red.

Huge Dinosaurs

On land, dinosaurs were making their mark in a big way—literally. The plant-eating sauropod Brachiosaurus stood up to 52 feet (16 meters) tall, stretched some 85 feet (26 meters) long, and weighed more than 80 tons. Diplodocus, another sauropod, was 90 feet (27 meters) long. These dinosaurs' sheer size may have deterred attack from Allosaurus, a bulky, meat-eating dinosaur that walked on two powerful legs. But Allosaurus and other fleet-footed carnivores, such as the coelurosaurs, must have had occasional success. Other prey included the heavily armored stegosaurs.

The earliest known bird, Archaeopteryx, took to the skies in the late Jurassic, having most likely evolved from an early coelurosaurian dinosaur. Archaeopteryx had to compete for airspace with pterosaurs, flying reptiles that had been buzzing the skies since the late Triassic. Meanwhile, insects such as leafhoppers and beetles were abundant, and many of Earth's earliest mammals scurried around dinosaur feet, unaware that their kind would come to dominate Earth once the dinosaurs were wiped out at the end of the Cretaceous.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1698 2023-03-15 17:01:45

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,078

Re: Miscellany

1601) Lamination

Summary

Lamination, in technology, is the process of building up successive layers of a substance, such as wood or textiles, and bonding them with resin to form a finished product. Laminated board, for example, consists of thin layers of wood bonded together; similarly, laminated fabric consists of two or more layers of cloth joined together with an adhesive, or a layer of fabric bonded to a plastic sheet.

Lamination is the combination of two or more films or sheets of paper, normally held together with adhesives, to make a composite. Laminations can be used in several applications, often for functional purposes, for making a sheet of paper thicker and stronger.

Details

Lamination is the technique/process of manufacturing a material in multiple layers, so that the composite material achieves improved strength, stability, sound insulation, appearance, or other properties from the use of the differing materials, such as plastic. A laminate is a permanently assembled object created using heat, pressure, welding, or adhesives. Various coating machines, machine presses and calendering equipment are used.

Materials

There are different lamination processes, depending primarily on the type or types of materials to be laminated. The materials used in laminates can be identical or different, depending on the process and the object to be laminated.

Textile

Laminated fabrics are widely used in different fields of human activity, including medical and military applications. Woven fabrics (organic- and inorganic-based) are usually laminated with various polymers to give them useful properties such as chemical resistance, protection from dust and grease, windproofness, photoluminescence (glowing and other light effects, e.g. in high-visibility clothing), tear strength, stiffness, and thickness. Coated fabrics may be considered a subtype of laminated fabrics. Nonwoven fabrics (e.g. fiberglass) are also often laminated. According to a 2002 source, the nonwoven-fabric industry was the biggest single consumer of polymer binding resins.

Materials used in production of coated and laminated fabrics are generally subjected to heat treatment.

Thermoplastics and thermosetting plastics are both used in the textile laminating and coating industry. In 2002, the primary materials used included polyvinyl acetate, acrylics, polyvinyl chloride (PVC), polyurethanes, and natural and synthetic rubbers. Copolymers and terpolymers were also in use.

Thin plastic films were in wide use as well, with materials varying from polyethylene and PVC to Kapton depending on the application. In the automotive industry, for example, PVC/acrylonitrile-butadiene-styrene (ABS) blends were often laminated onto polyurethane foam for interiors to give soft-touch properties. Specialty films such as polytetrafluoroethylene (PTFE) and polyurethane were used in protective clothing.

Glass

An example of a type of laminate using different materials would be the application of a layer of plastic film—the "laminate"—on either side of a sheet of glass—the laminated subject. Vehicle windshields are commonly made as composites created by laminating a tough plastic film between two layers of glass. This is to prevent shards of glass detaching from the windshield in case it breaks.

Wood

Plywood is a common example of a laminate using the same material in each layer combined with epoxy. Glued and laminated dimensional timber is used in the construction industry to make beams (glued laminated timber, or Glulam), in sizes larger and stronger than those that can be obtained from single pieces of wood. Another reason to laminate wooden strips into beams is quality control, as with this method each and every strip can be inspected before it becomes part of a highly stressed component.

Building material

Examples of laminate materials include melamine adhesive countertop surfacing and plywood. Decorative laminates and some modern millwork components are produced from decorative papers with a layer of overlay on top of the decorative paper, set before pressing them with thermoprocessing into high-pressure decorative laminates (HPDL). A new type of HPDL is produced using real wood veneer or multilaminar veneer as the top surface. High-pressure laminates consist of laminates "molded and cured at pressures not lower than 1,000 lb per sq in. (70 kg per sq cm) and more commonly in the range of 1,200 to 2,000 lb per sq in. (84 to 140 kg per sq cm)". Meanwhile, low-pressure laminate is defined as "a plastic laminate molded and cured at pressures in general of 400 pounds per square inch (approximately 27 atmospheres or 2.8 × {10}^6 pascals)".
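
As a quick check on the quoted equivalences, assuming the standard conversion factors 1 psi ≈ 6,895 Pa and 1 atm ≈ 101,325 Pa:

\[
400\ \text{psi} \times 6895\ \tfrac{\text{Pa}}{\text{psi}} \approx 2.76 \times 10^{6}\ \text{Pa},
\qquad
\frac{2.76 \times 10^{6}\ \text{Pa}}{1.013 \times 10^{5}\ \tfrac{\text{Pa}}{\text{atm}}} \approx 27\ \text{atm},
\]

which matches the figures given in the definition above.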

Paper

Corrugated fiberboard boxes are examples of laminated structures, where an inner core provides rigidity and strength, and the outer layers provide a smooth surface. A starch-based adhesive is usually used.

Laminating paper products, such as photographs, can prevent them from becoming creased, faded, water damaged, wrinkled, stained, smudged, abraded, or marked by grease or fingerprints. Photo identification cards and credit cards are almost always laminated with plastic film. Boxes and other containers may be laminated using heat seal layers, extrusion coatings, pressure sensitive adhesives, UV coating, etc.

Lamination is also used in sculpture using wood or resin. An example of an artist who used lamination in his work is the American Floyd Shaman.

Laminates can be used to add properties to a surface, usually printed paper, that would not have them otherwise, such as with the use of lamination paper. Sheets of vinyl impregnated with ferro-magnetic material can allow portable printed images to bond to magnets, such as for a custom bulletin board or a visual presentation. Specially surfaced plastic sheets can be laminated over a printed image to allow them to be safely written upon, such as with dry erase markers or chalk. Multiple translucent printed images may be laminated in layers to achieve certain visual effects or to hold holographic images. Printing businesses that do commercial lamination keep a variety of laminates on hand, as the process for bonding different types is generally similar when working with thin materials.

Metal

Electrical equipment such as transformers and motors usually uses laminated electrical steel to form the cores of its coils, which produce magnetic fields. The thin laminations reduce power loss due to eddy currents. Fibre metal laminate is an example of thin metal sheets laminated with glass-fibre-reinforced, epoxy-glued layers.
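
The benefit of thin laminations can be made quantitative. A standard thin-sheet approximation (valid at low frequencies, where skin effect is negligible) gives the classical eddy-current loss per unit mass as

\[
P = \frac{\pi^{2} f^{2} B_{p}^{2} t^{2}}{6 k \rho D},
\]

where f is the frequency, B_p the peak flux density, t the sheet thickness, ρ the electrical resistivity, D the material density, and k a dimensionless geometric factor (k = 1 for a thin sheet). Because the loss scales with t², halving the lamination thickness cuts the eddy-current loss to roughly a quarter, which is why cores are built from many thin insulated sheets rather than a solid block.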

Microelectronics

Lamination is widely used in the production of electronic components such as photovoltaic (PV) solar cells.

Photo laminators

Three types of laminators are used most often in digital imaging:

* Pouch laminators
* Heated roll laminators
* Cold roll laminators

Film types

Laminate plastic film is generally categorized into these five categories:

* Standard thermal laminating films
* Low-temperature thermal laminating films
* Heat set (or heat-assisted) laminating films
* Pressure-sensitive films
* Liquid laminate



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1699 2023-03-16 20:32:43

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,078

Re: Miscellany

1602) Glass wool

Summary

Glass wool is a thermal insulation material consisting of intertwined and flexible glass fibers, which cause it to "package" air, resulting in a low density that can be varied through compression and binder content (these trapped air cells are the actual insulator).

Glass Wool and Related Products

Glass wool is a kind of fibrous material made from melted glass raw materials or cullet. It comes in two types: loose wool and superfine wool. The fiber of the loose wool is 50 ~ 150 mm in length and 12 × {10}^{-3} mm in diameter. By contrast, the fiber of the superfine wool is much thinner, normally under 4 × {10}^{-3} mm in diameter, hence the name superfine glass wool.

The loose wool can be used to make asphalt-bonded glass blanket and glass wool board. The superfine glass wool can be used to make common superfine glass blanket, glass wool board, alkali-free superfine glass blanket, and high-silica superfine glass blanket, and it is also used for heat preservation in building envelopes and pipelines.

Details

Glass wool is an insulating material made from glass fiber arranged using a binder into a texture similar to wool. The process traps many small pockets of air between the glass, and these small air pockets result in high thermal insulation properties. Glass wool is produced in rolls or in slabs, with different thermal and mechanical properties. It may also be produced as a material that can be sprayed or applied in place, on the surface to be insulated. The modern method for producing glass wool was invented by Games Slayter while he was working at the Owens-Illinois Glass Co. (Toledo, Ohio). He first applied for a patent for a new process to make glass wool in 1933.

Principles of function

Gases possess poor thermal conduction properties compared to liquids and solids, and thus make good insulation material if they can be trapped so that much of the heat flowing through the material is forced to flow through the gas. In order to further augment the effectiveness of a gas (such as air), it may be broken up into small cells which cannot effectively transfer heat by natural convection. Natural convection involves a larger bulk flow of gas driven by buoyancy and temperature differences; it does not work well in small gas cells, where there is little density difference to drive it and where the high surface-area-to-volume ratio retards bulk gas flow by means of viscous drag.

In order to accomplish the formation of small gas cells in man-made thermal insulation, glass and polymer materials can be used to trap air in a foam-like structure. The same principle used in glass wool is used in other man-made insulators such as rock wool, Styrofoam, wet suit neoprene foam fabrics, and fabrics such as Gore-Tex and polar fleece. The air-trapping property is also the insulation principle used in nature in down feathers and in insulating hair such as natural wool.
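
The payoff of trapping air can be estimated with Fourier's law of heat conduction; the conductivity figures below are typical handbook values quoted only for illustration. For steady one-dimensional conduction through a layer of thickness L with a temperature difference ΔT across it, the heat flux is

\[
q = \frac{k \, \Delta T}{L},
\]

where k is the thermal conductivity. Still air has k ≈ 0.026 W/(m·K), while solid glass is around 1 W/(m·K), so a layer that is mostly immobilized air conducts on the order of forty times less heat than the same thickness of solid glass. In a real glass wool batt, fiber conduction, radiation, and residual air movement raise the effective conductivity to roughly 0.03 to 0.04 W/(m·K).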

Manufacturing process

Natural sand and recycled glass are mixed and heated to 1,450 °C to produce glass. The fiberglass is usually produced by a method similar to making cotton candy: molten glass is forced through a fine mesh by centrifugal force, cooling on contact with the air. Cohesion and mechanical strength are obtained by the presence of a binder that "cements" the fibers together; a drop of binder is placed at each fiber intersection. The fiber mat is then heated to around 200 °C to polymerize the resin and is calendered to give it strength and stability. Finally, the wool mat is cut and packed in rolls or panels, palletized, and stored for use.

Uses

Glass wool is a thermal insulation material consisting of intertwined and flexible glass fibers, which cause it to "package" air, resulting in a low density that can be varied through compression and binder content (as noted above, these air cells are the actual insulator). Glass wool can be a loose-fill material, blown into attics, or, together with an active binder, sprayed on the underside of structures. It is also produced in sheets and panels used to insulate flat surfaces such as cavity walls, ceiling tiles, curtain walls, and ducting. It is also used to insulate piping and for soundproofing.

Fiberglass batts and blankets

Batts are precut, whereas blankets are available in continuous rolls. Compressing the material reduces its effectiveness. Cutting it to accommodate electrical boxes and other obstructions allows air a free path across the wall cavity. Batts can be installed in two layers across an unfinished attic floor, perpendicular to each other, for increased effectiveness at preventing heat bridging. Blankets can cover joists and studs as well as the space between them. Batts can be challenging and unpleasant to hang under floors between joists; straps, or cloth or wire mesh stapled across the joists, can hold them up.

Gaps between batts (bypasses) can become sites of air infiltration or condensation (both of which reduce the effectiveness of the insulation) and require strict attention during installation. By the same token, careful weatherization and installation of vapour barriers are required to ensure that the batts perform optimally. Air infiltration can also be reduced by adding a layer of cellulose loose-fill on top of the material.

Health problems

Fiberglass will irritate the eyes, skin, and respiratory system. Potential symptoms include irritation of the eyes, skin, nose, and throat; dyspnea (breathing difficulty); sore throat; hoarseness; and cough. Fiberglass used for insulating appliances appears to produce human disease similar to asbestosis. Scientific evidence demonstrates that fiberglass is safe to manufacture, install, and use when recommended work practices are followed to reduce temporary mechanical irritation. Unfortunately, these work practices are not always followed, and fiberglass is often left exposed in basements that later become occupied. Fiberglass insulation should never be left exposed in an occupied area, according to the American Lung Association.

In June 2011, the United States' National Toxicology Program (NTP) removed from its Report on Carcinogens all biosoluble glass wool used in home and building insulation and for non-insulation products. Similarly, California's Office of Environmental Health Hazard Assessment (OEHHA) published a modification to its Proposition 65 listing in November 2011 to include only "glass wool fibers (inhalable and biopersistent)". The NTP's and OEHHA's actions mean that a cancer warning label for biosoluble fiberglass home and building insulation is no longer required under federal or California law. All fiberglass wools commonly used for thermal and acoustical insulation were reclassified by the International Agency for Research on Cancer (IARC) in October 2001 as not classifiable as to carcinogenicity to humans (Group 3).

Fiberglass itself is resistant to mold. If mold is found in or on fiberglass, it is more likely that the binder is the source, since binders are often organic and more hygroscopic than the glass wool. In tests, glass wool was found to be highly resistant to mold growth; only exceptional circumstances resulted in growth, namely very high relative humidity (96% and above) or saturation, and even saturated glass wool showed only moderate growth.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1700 2023-03-17 17:36:57

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,078

Re: Miscellany

1603) Hearing aid

Summary

A hearing aid is a device designed to improve hearing by making sound audible to a person with hearing loss. Hearing aids are classified as medical devices in most countries and regulated accordingly. Small audio amplifiers such as personal sound amplification products (PSAPs) or other plain sound-reinforcing systems cannot be sold as "hearing aids".

Early devices, such as ear trumpets or ear horns, were passive amplification cones designed to gather sound energy and direct it into the ear canal. Modern devices are computerised electroacoustic systems that transform environmental sound to make it audible, according to audiometrical and cognitive rules. Modern devices also utilize sophisticated digital signal processing to try to improve speech intelligibility and comfort for the user. Such signal processing includes feedback management, wide dynamic range compression, directionality, frequency lowering, and noise reduction.
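
To make "wide dynamic range compression" concrete, the sketch below (Python) shows the basic input/output rule: soft sounds receive more gain than loud ones. The knee point, ratio, and gain are illustrative numbers invented for the example, not a clinical fitting prescription:

    # Illustrative static wide dynamic range compression (WDRC) rule.
    # All numbers are invented for the example, not a fitting formula.

    KNEE_DB = 45.0         # compression threshold (input level, dB SPL)
    RATIO = 3.0            # compression ratio above the knee
    LINEAR_GAIN_DB = 25.0  # fixed gain applied below the knee

    def output_level_db(input_db):
        """Map an input level (dB SPL) to an aided output level."""
        if input_db <= KNEE_DB:
            return input_db + LINEAR_GAIN_DB  # linear region: constant gain
        # Above the knee, each extra dB of input adds only 1/RATIO dB of output.
        return KNEE_DB + LINEAR_GAIN_DB + (input_db - KNEE_DB) / RATIO

    for level in (30, 45, 60, 90):
        print(level, "->", round(output_level_db(level), 1), "dB SPL")
    # 30 -> 55.0, 45 -> 70.0, 60 -> 75.0, 90 -> 85.0: soft speech is boosted
    # by 25 dB while the loudest inputs receive much less gain.

A real aid applies a rule of this kind separately in each frequency band, with attack and release time constants controlling how quickly the gain follows the input level.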

Modern hearing aids require configuration to match the hearing loss, physical features, and lifestyle of the wearer. The hearing aid is fitted to the most recent audiogram and is programmed by frequency. This process, called "fitting", can be performed by the user in simple cases, or by a Doctor of Audiology (AuD), also called an audiologist, or by a Hearing Instrument Specialist (HIS) or audioprosthologist. The amount of benefit a hearing aid delivers depends in large part on the quality of its fitting. Almost all hearing aids in use in the US are digital hearing aids, as analog aids have been phased out. Devices similar to hearing aids include the osseointegrated auditory prosthesis (formerly called the bone-anchored hearing aid) and the cochlear implant.

Uses

Hearing aids are used for a variety of pathologies including sensorineural hearing loss, conductive hearing loss, and single-sided deafness. Hearing aid candidacy was traditionally determined by a Doctor of Audiology or a certified hearing specialist, who would also fit the device based on the nature and degree of the hearing loss being treated. The amount of benefit experienced by the user is multi-factorial, depending on the type, severity, and etiology of the hearing loss, the technology and fitting of the device, and on the motivation, personality, lifestyle, and overall health of the user. Over-the-counter hearing aids, which address mild to moderate hearing loss, are designed to be adjusted by the user.

Hearing aids are incapable of truly correcting a hearing loss; they are an aid to make sounds more audible. The most common form of hearing loss for which hearing aids are sought is sensorineural, resulting from damage to the hair cells and synapses of the cochlea and auditory nerve. Sensorineural hearing loss reduces the sensitivity to sound, which a hearing aid can partially accommodate by making sound louder. Other decrements in auditory perception caused by sensorineural hearing loss, such as abnormal spectral and temporal processing, and which may negatively affect speech perception, are more difficult to compensate for using digital signal processing and in some cases may be exacerbated by the use of amplification. Conductive hearing losses, which do not involve damage to the cochlea, tend to be better treated by hearing aids; the hearing aid is able to sufficiently amplify sound to account for the attenuation caused by the conductive component. Once the sound is able to reach the cochlea at normal or near-normal levels, the cochlea and auditory nerve are able to transmit signals to the brain normally.

Common issues with hearing aid fitting and use are the occlusion effect, loudness recruitment, and understanding speech in noise. Once a common problem, feedback is generally now well-controlled through the use of feedback management algorithms.

Details

A hearing aid is a device that increases the loudness of sounds in the ear of the wearer. The earliest aid was the ear trumpet, characterized by a large mouth at one end for collecting the sound energy from a large area and a gradually tapering tube to a narrow orifice for insertion in the ear. Modern hearing aids are electronic. Principal components are a microphone that converts sound into a varying electrical current, an amplifier that amplifies this current, and an earphone that converts the amplified current into a sound of greater intensity than the original.

[Image: a man wearing an in-the-ear hearing aid, which fits completely inside the outer ear.]

Hearing aids have widely differing characteristics; requirements for suitable aids have been extensively investigated. The two characteristics of a hearing aid that most influence the understanding of speech are the amplification of the various components of speech sounds and the loudness with which the sounds are heard by the wearer. As regards the first characteristic, speech sounds contain many components of different frequencies, which are variously amplified by a hearing aid. The variation of amplification with frequency is called the frequency response of the aid. An aid need amplify sounds only within the range of 400 to 4,000 hertz, although the components of speech cover a much wider range. With regard to the second characteristic—the loudness with which sounds are heard—too loud a sound can be as difficult to understand as one that is too faint. The loudness range over which speech is understood best is wide for some users and narrow for others. Hearing aids with automatic volume control vary the amplification of the aid automatically with variations of the input.

Most modern hearing aids use digital signal processing, in which electronic circuits convert analog signals to digital signals that can be manipulated and converted back to analog signals for output. Digital hearing aids are highly flexible with regard to programming, allowing users to match sound amplification to fit their needs. Because of their flexibility in programming, digital hearing aids have largely replaced analog aids, which amplified all sounds in the same way and were limited in programmability.

Early electronic hearing aids were quite large, but when transistors replaced amplifier tubes and smaller magnetic microphones became available in the 1950s, it became possible to build very small hearing aids, some of which were constructed to fit within the frames of eyeglasses and, later, behind the earlobe or within the external ear. Today multiple styles of hearing aids are available, including body aids, behind-the-ear (BTE) aids, mini-BTE aids, in-the-ear (ITE) aids, in-the-canal (ITC) aids, and completely-in-the-canal (CIC) aids.

A binaural hearing aid consists of two separate aids, one for each ear. Such an arrangement can benefit certain users.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline
