Math Is Fun Forum

  Discussion about math, puzzles, games and fun.   Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫ • π ƒ -¹ ² ³ °


#1326 2022-03-24 14:20:14

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,384

Re: Miscellany

1300) World War I

Summary

World War I, often abbreviated as WWI or WW1, also known as the First World War and contemporaneously known as the Great War and by other names, was an international conflict that began on 28 July 1914 and ended on 11 November 1918. It involved much of Europe, as well as Russia, the United States and Turkey, and was also fought in the Middle East, Africa and parts of Asia. One of the deadliest conflicts in history, an estimated 9 million were killed in combat, while over 5 million civilians died from occupation, bombardment, hunger or disease. The genocides perpetrated by the Ottomans and the 1918 Spanish flu pandemic spread by the movement of combatants during the war caused many millions of additional deaths worldwide.

In 1914, the Great Powers were divided into two opposing alliances: the Triple Entente, consisting of France, Russia, and Britain, and the Triple Alliance, made up of Germany, Austria-Hungary, and Italy. Tensions in the Balkans came to a head on 28 June 1914 following the assassination of Archduke Franz Ferdinand, the Austro-Hungarian heir, by Gavrilo Princip, a Bosnian Serb. Austria-Hungary blamed Serbia and the interlocking alliances involved the Powers in a series of diplomatic exchanges known as the July Crisis. On 28 July, Austria-Hungary declared war on Serbia; Russia came to Serbia's defence and by 4 August, the conflict had expanded to include Germany, France and Britain, along with their respective colonial empires. In November, the Ottoman Empire, Germany and Austria formed the Central Powers, while in April 1915, Italy joined Britain, France, Russia and Serbia as the Allied Powers.

Facing a war on two fronts, German strategy in 1914 was to defeat France, then shift its forces to the East and knock out Russia, commonly known as the Schlieffen Plan. This failed when their advance into France was halted at the Marne; by the end of 1914, the two sides faced each other along the Western Front, a continuous series of trench lines stretching from the Channel to Switzerland that changed little until 1917. By contrast, the Eastern Front was far more fluid, with Austria-Hungary and Russia gaining, then losing large swathes of territory. Other significant theatres included the Middle East, the Alpine Front and the Balkans, bringing Bulgaria, Romania and Greece into the war.

Shortages caused by the Allied naval blockade led Germany to initiate unrestricted submarine warfare in early 1917, bringing the previously neutral United States into the war on 6 April 1917. In Russia, the Bolsheviks seized power in the 1917 October Revolution and made peace in the March 1918 Treaty of Brest-Litovsk, freeing up large numbers of German troops. By transferring these to the Western Front, the German General Staff hoped to win a decisive victory before American reinforcements could impact the war, and launched the March 1918 German spring offensive. Despite initial success, it was soon halted by heavy casualties and ferocious defence; in August, the Allies launched the Hundred Days Offensive and although the German army continued to fight hard, it could no longer halt their advance.

Towards the end of 1918, the Central Powers began to collapse; Bulgaria signed an Armistice on 29 September, followed by the Ottomans on 31 October, then Austria-Hungary on 3 November. Isolated, facing revolution at home and an army on the verge of mutiny, Kaiser Wilhelm abdicated on 9 November and the new German government signed the Armistice of 11 November 1918, bringing the fighting to a close. The 1919 Paris Peace Conference imposed various settlements on the defeated powers, the best known being the Treaty of Versailles. The dissolution of the Russian, German, Ottoman and Austro-Hungarian empires led to numerous uprisings and the creation of independent states, including Poland, Czechoslovakia and Yugoslavia. For reasons that are still debated, failure to manage the instability that resulted from this upheaval during the interwar period ended with the outbreak of World War II in 1939.

Details

It was known as “The Great War”—a land, air and sea conflict so terrible, it left over 8 million military personnel and 6.6 million civilians dead. Counting the wounded and missing, nearly 60 percent of those who fought became casualties. In just four years between 1914 and 1918, World War I changed the face of modern warfare, becoming one of the deadliest conflicts in world history.

Causes of the Great War

World War I had a variety of causes, but its roots were in a complex web of alliances between European powers. At its core was mistrust between—and militarization in—the informal “Triple Entente” (Great Britain, France, and Russia) and the secret “Triple Alliance” (Germany, the Austro-Hungarian Empire, and Italy).

The most powerful players, Great Britain, Russia, and Germany, presided over worldwide colonial empires they wanted to expand and protect. Over the course of the 19th century, they consolidated their power and protected themselves by forging alliances with other European powers.

In July 1914, tensions between the Triple Entente (also known as the Allies) and the Triple Alliance (also known as the Central Powers) ignited with the assassination of Archduke Franz Ferdinand, heir to the throne of Austria-Hungary, by a Bosnian Serb nationalist during a visit to Sarajevo. Austria-Hungary blamed Serbia for the attack. Russia backed its ally, Serbia. When Austria-Hungary declared war on Serbia a month later, their allies jumped in and the continent was at war.

The spread of war

Soon, the conflict had expanded across the world, affecting colonies and allied countries in Africa, Asia, the Middle East, and Australia. In 1917, the United States entered the war after a long period of non-intervention. By then, the main theater of the war—the Western Front in Luxembourg, Belgium, and France—was the site of a deadly stalemate.

Despite advances like the use of poison gas and armored tanks, both sides were trapped in trench warfare that claimed enormous numbers of casualties. Battles like the Battle of Verdun and the First Battle of the Somme are among the deadliest in the history of human conflict.

Aided by the United States, the Allies finally broke through with the Hundred Days Offensive, leading to the military defeat of Germany. The fighting officially ended at 11 a.m. on November 11, 1918.

By then, the world was in the grips of an influenza pandemic that would infect a third of the global population. Revolution had broken out in Germany, Russia, and other countries. Much of Europe was in ruins. “Shell shock” and the aftereffects of gas poisoning would claim thousands more lives.

Never again?

Though the world vowed never to allow another war like it to happen, the roots of the next conflict were sown in the Treaty of Versailles, which was viewed by Germans as humiliating and punitive and which helped set the stage for the rise of fascism and World War II. The technology that the war had generated would be used in the next world war just two decades later.

Though it was described at the time as “the war to end all wars,” the scar that World War I left on the world was anything but temporary.

[Image: General Pershing's troops, Mexico, 1917]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1327 2022-03-25 14:00:26

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,384

Re: Miscellany

1301) World War II

Summary

World War II, also called the Second World War, was a conflict that involved virtually every part of the world during the years 1939–45. The principal belligerents were the Axis powers—Germany, Italy, and Japan—and the Allies—France, Great Britain, the United States, the Soviet Union, and, to a lesser extent, China. The war was in many respects a continuation, after an uneasy 20-year hiatus, of the disputes left unsettled by World War I. The 40,000,000–50,000,000 deaths incurred in World War II make it the bloodiest conflict, as well as the largest war, in history.

Along with World War I, World War II was one of the great watersheds of 20th-century geopolitical history. It resulted in the extension of the Soviet Union’s power to nations of eastern Europe, enabled a communist movement to eventually achieve power in China, and marked the decisive shift of power in the world away from the states of western Europe and toward the United States and the Soviet Union.

Details

World War II or the Second World War, often abbreviated as WWII or WW2, was a global war that lasted from 1939 to 1945. It involved the vast majority of the world's countries—including all of the great powers—forming two opposing military alliances: the Allies and the Axis powers. In a total war directly involving more than 100 million personnel from more than 30 countries, the major participants threw their entire economic, industrial, and scientific capabilities behind the war effort, blurring the distinction between civilian and military resources. Aircraft played a major role in the conflict, enabling the strategic bombing of population centres and the only two uses of nuclear weapons in war. World War II was by far the deadliest conflict in human history; it resulted in 70 to 85 million fatalities, a majority being civilians. Tens of millions of people died due to genocides (including the Holocaust), starvation, massacres, and disease. In the wake of the Axis defeat, Germany and Japan were occupied, and war crimes tribunals were conducted against German and Japanese leaders.

The exact causes of World War II are debated, but contributing factors included the Second Italo-Ethiopian War, the Spanish Civil War, the Second Sino-Japanese War, the Soviet–Japanese border conflicts and rising European tensions since World War I. World War II is generally considered to have begun on 1 September 1939, when Nazi Germany, under Adolf Hitler, invaded Poland. The United Kingdom and France subsequently declared war on Germany on 3 September. Under the Molotov–Ribbentrop Pact of August 1939, Germany and the Soviet Union had partitioned Poland and marked out their "spheres of influence" across Finland, Estonia, Latvia, Lithuania and Romania. From late 1939 to early 1941, in a series of campaigns and treaties, Germany conquered or controlled much of continental Europe, and formed the Axis alliance with Italy and Japan (along with other countries later on). Following the onset of campaigns in North Africa and East Africa, and the fall of France in mid-1940, the war continued primarily between the European Axis powers and the British Empire, with war in the Balkans, the aerial Battle of Britain, the Blitz of the UK, and the Battle of the Atlantic. On 22 June 1941, Germany led the European Axis powers in an invasion of the Soviet Union, opening the Eastern Front, the largest land theatre of war in history.

Japan, which aimed to dominate Asia and the Pacific, was at war with the Republic of China by 1937. In December 1941, Japan attacked American and British territories with near-simultaneous offensives against Southeast Asia and the Central Pacific, including an attack on the US fleet at Pearl Harbor which resulted in the United States declaring war against Japan. The European Axis powers then declared war on the United States in solidarity with Japan. Japan soon captured much of the western Pacific, but its advances were halted in 1942 after losing the critical Battle of Midway; later, Germany and Italy were defeated in North Africa and at Stalingrad in the Soviet Union. Key setbacks in 1943—including a series of German defeats on the Eastern Front, the Allied invasions of Sicily and the Italian mainland, and Allied offensives in the Pacific—cost the Axis powers their initiative and forced them into strategic retreat on all fronts. In 1944, the Western Allies invaded German-occupied France, while the Soviet Union regained its territorial losses and turned towards Germany and its allies. During 1944 and 1945, Japan suffered reversals in mainland Asia, while the Allies crippled the Japanese Navy and captured key western Pacific islands.

The war in Europe concluded with the liberation of German-occupied territories and the invasion of Germany by the Western Allies and the Soviet Union, culminating in the fall of Berlin to Soviet troops, Hitler's suicide and the German unconditional surrender on 8 May 1945. Following the Potsdam Declaration by the Allies on 26 July 1945 and the refusal of Japan to surrender on its terms, the United States dropped the first atomic bombs on the Japanese cities of Hiroshima, on 6 August, and Nagasaki, on 9 August. Faced with an imminent invasion of the Japanese archipelago, the possibility of additional atomic bombings, and the Soviet Union's declaration of war against Japan and invasion of Manchuria, Japan announced on 15 August its intention to surrender, then signed the surrender document on 2 September 1945, cementing total victory in Asia for the Allies.

World War II changed the political alignment and social structure of the globe. The United Nations (UN) was established to foster international co-operation and prevent future conflicts, with the victorious great powers—China, France, the Soviet Union, the United Kingdom, and the United States—becoming the permanent members of its Security Council. The Soviet Union and the United States emerged as rival superpowers, setting the stage for the nearly half-century-long Cold War. In the wake of European devastation, the influence of its great powers waned, triggering the decolonisation of Africa and Asia. Most countries whose industries had been damaged moved towards economic recovery and expansion. Political and economic integration, especially in Europe, began as an effort to forestall future hostilities, end pre-war enmities and forge a sense of common identity.

World War II summary: The carnage of World War II was unprecedented, and it brought the world closer than any previous conflict to “total war.” On average, 27,000 people were killed each day between September 1, 1939, and the formal surrender of Japan on September 2, 1945. Western technological advances had been turned upon their makers, bringing about the most destructive war in human history. The primary combatants were the Axis nations of Nazi Germany, Fascist Italy, and Imperial Japan, and the Allied nations of Great Britain (and its Commonwealth), the Soviet Union, and the United States. Seven days after the suicide of Adolf Hitler, Germany unconditionally surrendered on May 7, 1945. The Japanese fought on for nearly four more months until their surrender on September 2, brought about by the U.S. dropping atomic bombs on the Japanese cities of Hiroshima and Nagasaki. Despite winning the war, Britain lost much of its empire, a dissolution whose basis was laid out in the Atlantic Charter. The war precipitated the revival of the U.S. economy, and by the war's end the nation had a gross national product nearly as great as that of all the other Allied and Axis powers combined. The United States and the Soviet Union emerged from World War II as global superpowers. The fundamentally disparate one-time allies soon became engaged in what came to be called the Cold War, which dominated world politics for the latter half of the 20th century.

Casualties in World War II

World War II was the most destructive war in all of history. Its exact cost in human lives is unknown, but casualties may have totaled over 60 million service personnel and civilians killed. The nations suffering the highest losses, military and civilian combined, in descending order, were:
USSR: 42,000,000
Germany: 9,000,000
China: 4,000,000
Japan: 3,000,000

When did World War II begin?

Some say it was simply a continuation of the First World War, which had theoretically ended in 1918. Others point to 1931, when Japan seized Manchuria from China. Still others cite Italy's invasion and defeat of Abyssinia (Ethiopia) in 1935, Adolf Hitler's remilitarization of Germany's Rhineland in 1936, the Spanish Civil War (1936–1939), or Germany's occupation of Czechoslovakia in 1938. The two dates most often mentioned as “the beginning of World War II” are July 7, 1937, when the “Marco Polo Bridge Incident” led to a prolonged war between Japan and China, and September 1, 1939, when Germany invaded Poland, leading Britain and France to declare war on Hitler's Nazi state in retaliation. From the invasion of Poland until the war ended with Japan's surrender in September 1945, most nations around the world were engaged in armed combat.

Origins of World War II

No one historic event can be said to have been the origin of World War II. Japan's unexpected victory over czarist Russia in the Russo-Japanese War (1904–05) left the door open for Japanese expansion in Asia and the Pacific. The U.S. Navy had developed plans in preparation for a naval war with Japan as early as the 1890s. War Plan Orange, as it was called, would be updated continually as technology advanced, and it greatly aided the U.S. during World War II.

The years between the first and second world wars were a time of instability. The Great Depression, which began with the Black Tuesday crash of 1929, plunged the world into recession. Coming to power in 1933, Hitler capitalized on this economic decline and on deep German resentment of the emasculating Treaty of Versailles, signed following the armistice of 1918. Declaring that Germany needed Lebensraum, or “living space,” Hitler began to test the Western powers and their willingness to enforce the treaty's provisions. By 1935 Hitler had established the Luftwaffe, a direct violation of the 1919 treaty. Remilitarizing the Rhineland in 1936 violated Versailles and the Locarno Treaties (which defined the borders of Europe) once again. The Anschluss of Austria and the annexation of the rump of Czechoslovakia were further extensions of Hitler's desire for Lebensraum. Italy's desire to create a Third Rome pushed the nation toward closer ties with Nazi Germany. Likewise, Japan, angered by its exclusion at Paris in 1919, sought to create a self-sufficient Pan-Asian sphere with itself at the center.

Competing ideologies further fanned the flames of international tension. The Bolshevik Revolution in czarist Russia during the First World War, followed by the Russian Civil War, had established the Union of Soviet Socialist Republics (USSR), a sprawling communist state. Western republics and capitalists feared the spread of Bolshevism. In some nations, such as Italy, Germany and Romania, ultra-conservative groups rose to power, in part in reaction to communism.

Germany, Italy and Japan signed agreements of mutual support but, unlike the Allied nations they would face, they never developed a comprehensive or coordinated plan of action.




#1328 2022-03-26 01:25:13

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,384

Re: Miscellany

1302) Artificial Intelligence

Summary

Artificial intelligence (AI) is intelligence demonstrated by machines, as opposed to natural intelligence displayed by animals including humans. Leading AI textbooks define the field as the study of "intelligent agents": any system that perceives its environment and takes actions that maximize its chance of achieving its goals.

Some popular accounts use the term "artificial intelligence" to describe machines that mimic "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving"; however, this definition is rejected by major AI researchers.

AI applications include advanced web search engines (e.g., Google), recommendation systems (used by YouTube, Amazon and Netflix), understanding human speech (such as Siri and Alexa), self-driving cars (e.g., Tesla), automated decision-making and competing at the highest level in strategic game systems (such as chess and Go). As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect. For instance, optical character recognition is frequently excluded from things considered to be AI, having become a routine technology.

Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success and renewed funding. AI research has tried and discarded many different approaches since its founding, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge and imitating animal behavior. In the first decades of the 21st century, highly mathematical statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.

The various sub-fields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and the ability to move and manipulate objects. General intelligence (the ability to solve an arbitrary problem) is among the field's long-term goals. To solve these problems, AI researchers have adapted and integrated a wide range of problem-solving techniques—including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, probability and economics. AI also draws upon computer science, psychology, linguistics, philosophy, and many other fields.

The field was founded on the assumption that human intelligence "can be so precisely described that a machine can be made to simulate it". This raises philosophical arguments about the mind and the ethics of creating artificial beings endowed with human-like intelligence. These issues have been explored by myth, fiction, and philosophy since antiquity. Science fiction and futurology have also suggested that, with its enormous potential and power, AI may become an existential risk to humanity.

Details

Artificial intelligence (AI) is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience. Since the development of the digital computer in the 1940s, it has been demonstrated that computers can be programmed to carry out very complex tasks—as, for example, discovering proofs for mathematical theorems or playing chess—with great proficiency. Still, despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match human flexibility over wider domains or in tasks requiring much everyday knowledge. On the other hand, some programs have attained the performance levels of human experts and professionals in performing certain specific tasks, so that artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, and voice or handwriting recognition.

What is intelligence?

All but the simplest human behaviour is ascribed to intelligence, while even the most complicated insect behaviour is never taken as an indication of intelligence. What is the difference? Consider the behaviour of the digger wasp, Sphex ichneumoneus. When the female wasp returns to her burrow with food, she first deposits it on the threshold, checks for intruders inside her burrow, and only then, if the coast is clear, carries her food inside. The real nature of the wasp’s instinctual behaviour is revealed if the food is moved a few inches away from the entrance to her burrow while she is inside: on emerging, she will repeat the whole procedure as often as the food is displaced. Intelligence—conspicuously absent in the case of Sphex—must include the ability to adapt to new circumstances.

Psychologists generally do not characterize human intelligence by just one trait but by the combination of many diverse abilities. Research in AI has focused chiefly on the following components of intelligence: learning, reasoning, problem solving, perception, and using language.

Learning

There are a number of different forms of learning as applied to artificial intelligence. The simplest is learning by trial and error. For example, a simple computer program for solving mate-in-one chess problems might try moves at random until mate is found. The program might then store the solution with the position so that the next time the computer encountered the same position it would recall the solution. This simple memorizing of individual items and procedures—known as rote learning—is relatively easy to implement on a computer. More challenging is the problem of implementing what is called generalization. Generalization involves applying past experience to analogous new situations. For example, a program that learns the past tense of regular English verbs by rote will not be able to produce the past tense of a word such as jump unless it previously had been presented with jumped, whereas a program that is able to generalize can learn the “add ed” rule and so form the past tense of jump based on experience with similar verbs.
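The contrast between rote learning and generalization can be made concrete in a short sketch. Everything here (function names, the tiny verb table) is an illustrative assumption, not taken from any particular AI system:

```python
# Rote learner: can only recall verbs it has already been shown.
rote_memory = {"walk": "walked", "jump": "jumped"}

def rote_past_tense(verb):
    """Return the memorized past tense, or None if the verb was never seen."""
    return rote_memory.get(verb)

# Generalizing learner: has induced the "add ed" rule from examples,
# so it can handle regular verbs it has never encountered.
def generalized_past_tense(verb):
    """Apply the learned 'add ed' rule to any regular verb."""
    return verb + "ed"

print(rote_past_tense("climb"))         # never presented: None
print(generalized_past_tense("climb"))  # rule applies: climbed
```

The rote learner fails on the novel verb while the generalizer succeeds, which is exactly the distinction the paragraph above draws.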

Reasoning

To reason is to draw inferences appropriate to the situation. Inferences are classified as either deductive or inductive. An example of the former is, “Fred must be in either the museum or the café. He is not in the café; therefore he is in the museum,” and of the latter, “Previous accidents of this sort were caused by instrument failure; therefore this accident was caused by instrument failure.” The most significant difference between these forms of reasoning is that in the deductive case the truth of the premises guarantees the truth of the conclusion, whereas in the inductive case the truth of the premise lends support to the conclusion without giving absolute assurance. Inductive reasoning is common in science, where data are collected and tentative models are developed to describe and predict future behaviour—until the appearance of anomalous data forces the model to be revised. Deductive reasoning is common in mathematics and logic, where elaborate structures of irrefutable theorems are built up from a small set of basic axioms and rules.

There has been considerable success in programming computers to draw inferences, especially deductive inferences. However, true reasoning involves more than just drawing inferences; it involves drawing inferences relevant to the solution of the particular task or situation. This is one of the hardest problems confronting AI.
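The deductive example above (Fred is in the museum or the café; he is not in the café; therefore he is in the museum) is an instance of disjunctive syllogism, which a program can carry out mechanically. A minimal sketch, with assumed names:

```python
def disjunctive_syllogism(possible_places, ruled_out):
    """From 'A or B' and 'not B', infer the remaining possibility.

    Returns the unique remaining place, or None if the premises
    do not determine a single conclusion.
    """
    remaining = [p for p in possible_places if p not in ruled_out]
    if len(remaining) == 1:
        return remaining[0]  # the conclusion follows with certainty
    return None

print(disjunctive_syllogism({"museum", "cafe"}, {"cafe"}))  # museum
```

Note that this captures only the guaranteed-truth character of deduction; deciding which inferences are *relevant* to a task, as the paragraph above observes, is far harder.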

Problem solving

Problem solving, particularly in artificial intelligence, may be characterized as a systematic search through a range of possible actions in order to reach some predefined goal or solution. Problem-solving methods divide into special purpose and general purpose. A special-purpose method is tailor-made for a particular problem and often exploits very specific features of the situation in which the problem is embedded. In contrast, a general-purpose method is applicable to a wide variety of problems. One general-purpose technique used in AI is means-end analysis—a step-by-step, or incremental, reduction of the difference between the current state and the final goal. The program selects actions from a list of means—in the case of a simple robot this might consist of PICKUP, PUTDOWN, MOVEFORWARD, MOVEBACK, MOVELEFT, and MOVERIGHT—until the goal is reached.
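The means-end loop described above can be sketched for a toy one-dimensional robot. Only the move names come from the text; the world model, step sizes, and function name are assumptions for illustration:

```python
def means_end_search(start, goal):
    """Means-end analysis: repeatedly apply the action that most
    reduces the difference between the current state and the goal."""
    actions = {"MOVEFORWARD": +1, "MOVEBACK": -1}
    state, plan = start, []
    while state != goal:
        # pick the action minimizing the remaining difference
        name, delta = min(actions.items(),
                          key=lambda a: abs((state + a[1]) - goal))
        state += delta
        plan.append(name)
    return plan

print(means_end_search(0, 3))
# ['MOVEFORWARD', 'MOVEFORWARD', 'MOVEFORWARD']
```

Each iteration is one "incremental reduction of the difference" in the sense of the paragraph above; a real planner would also handle actions with preconditions and dead ends.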

Many diverse problems have been solved by artificial intelligence programs. Some examples are finding the winning move (or sequence of moves) in a board game, devising mathematical proofs, and manipulating “virtual objects” in a computer-generated world.

Perception

In perception the environment is scanned by means of various sensory organs, real or artificial, and the scene is decomposed into separate objects in various spatial relationships. Analysis is complicated by the fact that an object may appear different depending on the angle from which it is viewed, the direction and intensity of illumination in the scene, and how much the object contrasts with the surrounding field.

At present, artificial perception is sufficiently well advanced to enable optical sensors to identify individuals, autonomous vehicles to drive at moderate speeds on the open road, and robots to roam through buildings collecting empty soda cans. One of the earliest systems to integrate perception and action was FREDDY, a stationary robot with a moving television eye and a pincer hand, constructed at the University of Edinburgh, Scotland, during the period 1966–73 under the direction of Donald Michie. FREDDY was able to recognize a variety of objects and could be instructed to assemble simple artifacts, such as a toy car, from a random heap of components.

Language

A language is a system of signs having meaning by convention. In this sense, language need not be confined to the spoken word. Traffic signs, for example, form a minilanguage, it being a matter of convention that ⚠ means “hazard ahead” in some countries. It is distinctive of languages that linguistic units possess meaning by convention, and linguistic meaning is very different from what is called natural meaning, exemplified in statements such as “Those clouds mean rain” and “The fall in pressure means the valve is malfunctioning.”

An important characteristic of full-fledged human languages—in contrast to birdcalls and traffic signs—is their productivity. A productive language can formulate an unlimited variety of sentences.

It is relatively easy to write computer programs that seem able, in severely restricted contexts, to respond fluently in a human language to questions and statements. Although none of these programs actually understands language, they may, in principle, reach the point where their command of a language is indistinguishable from that of a normal human. What, then, is involved in genuine understanding, if even a computer that uses language like a native human speaker is not acknowledged to understand? There is no universally agreed upon answer to this difficult question. According to one theory, whether or not one understands depends not only on one’s behaviour but also on one’s history: in order to be said to understand, one must have learned the language and have been trained to take one’s place in the linguistic community by means of interaction with other language users.

Methods and goals in AI

Symbolic vs. connectionist approaches

AI research follows two distinct, and to some extent competing, methods: the symbolic (or “top-down”) approach, and the connectionist (or “bottom-up”) approach. The top-down approach seeks to replicate intelligence by analyzing cognition independent of the biological structure of the brain, in terms of the processing of symbols—whence the symbolic label. The bottom-up approach, on the other hand, involves creating artificial neural networks in imitation of the brain’s structure—whence the connectionist label.

To illustrate the difference between these approaches, consider the task of building a system, equipped with an optical scanner, that recognizes the letters of the alphabet. A bottom-up approach typically involves training an artificial neural network by presenting letters to it one by one, gradually improving performance by "tuning" the network. (Tuning adjusts the responsiveness of different neural pathways to different stimuli.) In contrast, a top-down approach typically involves writing a computer program that compares each letter with geometric descriptions of the letterforms. Simply put, neural activities are the basis of the bottom-up approach, while symbolic descriptions are the basis of the top-down approach.
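As a toy illustration of the bottom-up approach (a minimal sketch, not from the original text: the 3×3 letter bitmaps, learning rate, and training loop below are all made-up assumptions), a single perceptron can be "tuned" to tell a T from an L purely by adjusting connection weights:

```python
# Bottom-up sketch: a perceptron learns T vs. L by weight tuning,
# with no symbolic description of the letters' geometry.

T = [1, 1, 1,
     0, 1, 0,
     0, 1, 0]   # crude 3x3 bitmap of 'T'

L = [1, 0, 0,
     1, 0, 0,
     1, 1, 1]   # crude 3x3 bitmap of 'L'

weights = [0.0] * 9
bias = 0.0

def predict(pixels):
    """Return 1 for 'T', 0 for 'L'."""
    s = bias + sum(w * p for w, p in zip(weights, pixels))
    return 1 if s > 0 else 0

# "Tuning the network": nudge the weights whenever a letter is misclassified.
for _ in range(20):
    for pixels, target in ((T, 1), (L, 0)):
        error = target - predict(pixels)
        bias += 0.1 * error
        weights = [w + 0.1 * error * p for w, p in zip(weights, pixels)]

print(predict(T), predict(L))  # after training: 1 0
```

A top-down program for the same task would instead encode rules such as "a T has a horizontal bar across the top row" and test each input against those symbolic descriptions.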

In The Fundamentals of Learning (1932), Edward Thorndike, a psychologist at Columbia University, New York City, first suggested that human learning consists of some unknown property of connections between neurons in the brain. In The Organization of Behavior (1949), Donald Hebb, a psychologist at McGill University, Montreal, Canada, suggested that learning specifically involves strengthening certain patterns of neural activity by increasing the probability (weight) of induced neuron firing between the associated connections. The notion of weighted connections is described in a later section, Connectionism.
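Hebb's weighted-connection idea can be caricatured in a few lines (an illustrative sketch; the learning rate and firing pattern are arbitrary made-up values, not anything from Hebb):

```python
# Hebbian sketch: a connection's weight grows only when the neurons on
# both ends are active together ("fire together, wire together").

eta = 0.5       # learning rate (assumed)
weight = 0.1    # initial connection strength (assumed)

# (pre, post) firing pairs; joint firing occurs in 3 of the 5 steps.
activity = [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1)]

for pre, post in activity:
    weight += eta * pre * post  # strengthened only on joint firing

print(round(weight, 2))  # 0.1 + 3 * 0.5 = 1.6
```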

In 1957 two vigorous advocates of symbolic AI—Allen Newell, a researcher at the RAND Corporation, Santa Monica, California, and Herbert Simon, a psychologist and computer scientist at Carnegie Mellon University, Pittsburgh, Pennsylvania—summed up the top-down approach in what they called the physical symbol system hypothesis. This hypothesis states that processing structures of symbols is sufficient, in principle, to produce artificial intelligence in a digital computer and that, moreover, human intelligence is the result of the same type of symbolic manipulations.

During the 1950s and ’60s the top-down and bottom-up approaches were pursued simultaneously, and both achieved noteworthy, if limited, results. During the 1970s, however, bottom-up AI was neglected, and it was not until the 1980s that this approach again became prominent. Nowadays both approaches are followed, and both are acknowledged as facing difficulties. Symbolic techniques work in simplified realms but typically break down when confronted with the real world; meanwhile, bottom-up researchers have been unable to replicate the nervous systems of even the simplest living things. Caenorhabditis elegans, a much-studied worm, has approximately 300 neurons whose pattern of interconnections is perfectly known. Yet connectionist models have failed to mimic even this worm. Evidently, the neurons of connectionist theory are gross oversimplifications of the real thing.

Strong AI, applied AI, and cognitive simulation

Employing the methods outlined above, AI research attempts to reach one of three goals: strong AI, applied AI, or cognitive simulation. Strong AI aims to build machines that think. (The term strong AI was introduced for this category of research in 1980 by the philosopher John Searle of the University of California at Berkeley.) The ultimate ambition of strong AI is to produce a machine whose overall intellectual ability is indistinguishable from that of a human being. As is described in the section Early milestones in AI, this goal generated great interest in the 1950s and ’60s, but such optimism has given way to an appreciation of the extreme difficulties involved. To date, progress has been meagre. Some critics doubt whether research will produce even a system with the overall intellectual ability of an ant in the foreseeable future. Indeed, some researchers working in AI’s other two branches view strong AI as not worth pursuing.

Applied AI, also known as advanced information processing, aims to produce commercially viable “smart” systems—for example, “expert” medical diagnosis systems and stock-trading systems. Applied AI has enjoyed considerable success, as described in the section Expert systems.

In cognitive simulation, computers are used to test theories about how the human mind works—for example, theories about how people recognize faces or recall memories. Cognitive simulation is already a powerful tool in both neuroscience and cognitive psychology.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1329 2022-03-26 23:26:41

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,384

Re: Miscellany

1303) Xylem and Phloem

The vascular system comprises two main types of tissue: the xylem and the phloem. The xylem distributes water and dissolved minerals upward through the plant, from the roots to the leaves. The phloem carries food downward from the leaves to the roots.

Xylem

Xylem is a plant vascular tissue that conveys water and dissolved minerals from the roots to the rest of the plant and also provides physical support. Xylem tissue consists of a variety of specialized, water-conducting cells known as tracheary elements. Together with phloem (tissue that conducts sugars from the leaves to the rest of the plant), xylem is found in all vascular plants, including the seedless club mosses, ferns, and horsetails, as well as all angiosperms (flowering plants) and gymnosperms (plants with seeds unenclosed in an ovary).

The xylem tracheary elements consist of cells known as tracheids and vessel members, both of which are typically narrow, hollow, and elongated. Tracheids are less specialized than the vessel members and are the only type of water-conducting cells in most gymnosperms and seedless vascular plants. Water moving from tracheid to tracheid must pass through a thin modified primary cell wall known as the pit membrane, which serves to prevent the passage of damaging air bubbles. Vessel members are the principal water-conducting cells in angiosperms (though most species also have tracheids) and are characterized by areas that lack both primary and secondary cell walls, known as perforations. Water flows relatively unimpeded from vessel to vessel through these perforations, though fractures and disruptions from air bubbles are also more likely. In addition to the tracheary elements, xylem tissue also features fibre cells for support and parenchyma (thin-walled, unspecialized cells) for the storage of various substances.

Xylem formation begins when the actively dividing cells of growing root and shoot tips (apical meristems) give rise to primary xylem. In woody plants, secondary xylem constitutes the major part of a mature stem or root and is formed as the plant expands in girth and builds a ring of new xylem around the original primary xylem tissues. When this happens, the primary xylem cells die and lose their conducting function, forming a hard skeleton that serves only to support the plant. Thus, in the trunk and older branches of a large tree, only the outer secondary xylem (sapwood) serves in water conduction, while the inner part (heartwood) is composed of dead but structurally strong xylem. In temperate or cold climates, the age of a tree may be determined by counting the number of annual xylem rings formed at the base of the trunk (cut in cross section).

Phloem

Phloem is plant vascular tissue that conducts foods made in the leaves during photosynthesis to all other parts of the plant. Phloem is composed of various specialized cells called sieve elements, phloem fibres, and phloem parenchyma cells. Together with xylem (tissue that conducts water and minerals from the roots to the rest of the plant), phloem is found in all vascular plants, including the seedless club mosses, ferns, and horsetails, as well as all angiosperms (flowering plants) and gymnosperms (plants with seeds unenclosed in an ovary).

Sieve tubes, which are columns of sieve tube cells having perforated sievelike areas in their lateral or end walls, provide the main channels in which food substances travel throughout a vascular plant. Phloem parenchyma cells, called transfer cells and border parenchyma cells, are located near the finest branches and terminations of sieve tubes in leaf veinlets, where they also function in the transport of foods. Companion cells, or albuminous cells in non-flowering vascular plants, are another specialized type of parenchyma and carry out the cellular functions of adjacent sieve elements; they typically have a larger number of mitochondria and ribosomes than other parenchyma cells. Phloem, or bast, fibres are long, flexible sclerenchyma cells that make up the soft fibres (e.g., flax and hemp) of commerce. These provide flexible tensile strength to the phloem tissues. Sclereids, also formed from sclerenchyma, are hard, irregularly shaped cells that add compression strength to the tissues.

Primary phloem is formed by the apical meristems (zones of new cell production) of root and shoot tips; it may be either protophloem, the cells of which are matured before elongation (during growth) of the area in which it lies, or metaphloem, the cells of which mature after elongation. Sieve tubes of protophloem are unable to stretch with the elongating tissues and are torn and destroyed as the plant ages. The other cell types in the phloem may be converted to fibres. The later maturing metaphloem is not destroyed and may function during the rest of the plant’s life in plants such as palms but is replaced by secondary phloem in plants that have a cambium.

Xylem versus Phloem

Xylem and phloem are both transport vessels that combine to form a vascular bundle in higher order plants.

* The vascular bundle functions to connect tissues in the roots, stem and leaves as well as providing structural support

Xylem

* Moves materials via the process of transpiration
* Transports water and minerals from the roots to aerial parts of the plant (unidirectional transport)
* Xylem occupies the inner portion, or centre, of the vascular bundle and is composed of vessel elements and tracheids
* Vessel wall consists of fused cells that create a continuous tube for the unimpeded flow of materials
* Vessels are composed of dead tissue at maturity, such that vessels are hollow with no cell contents

Phloem

* Moves materials via the process of active translocation
* Transports food and nutrients to storage organs and growing parts of the plant (bidirectional transport)
* Phloem occupies the outer portion of the vascular bundle and is composed of sieve tube elements and companion cells
* Vessel wall consists of cells that are connected at their transverse ends to form porous sieve plates (function as cross walls)
* Vessels are composed of living tissue; however, sieve tube elements lack nuclei and have few organelles.




#1330 2022-03-28 01:05:22

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,384

Re: Miscellany

1304) Nitroglycerin

Summary

Nitroglycerin (NG), (alternative spelling nitroglycerine) also known as trinitroglycerin (TNG), nitro, glyceryl trinitrate (GTN), or 1,2,3-trinitroxypropane, is a dense, colorless, oily, explosive liquid most commonly produced by nitrating glycerol with white fuming nitric acid under conditions appropriate to the formation of the nitric acid ester. Chemically, the substance is an organic nitrate compound rather than a nitro compound, but the traditional name is retained. Invented in 1847 by Ascanio Sobrero, nitroglycerin has been used ever since as an active ingredient in the manufacture of explosives, namely dynamite, and as such it is employed in the construction, demolition, and mining industries. Since the 1880s, it has been used by the military as an active ingredient and gelatinizer for nitrocellulose in some solid propellants such as cordite and ballistite. It is a major component in double-based smokeless propellants used by reloaders. Combined with nitrocellulose, hundreds of powder combinations are used by rifle, pistol, and shotgun reloaders.

Nitroglycerin has been used for over 130 years in medicine as a potent vasodilator (dilation of the vascular system) to treat heart conditions, such as angina pectoris and chronic heart failure. Though it was previously known that these beneficial effects are due to nitroglycerin being converted to nitric oxide, a potent venodilator, the enzyme for this conversion was only discovered to be mitochondrial aldehyde dehydrogenase (ALDH2) in 2002. Nitroglycerin is available in sublingual tablets, sprays, ointments, and patches.

Details

Nitroglycerin, also called glyceryl trinitrate, is a powerful explosive and an important ingredient of most forms of dynamite. It is also used with nitrocellulose in some propellants, especially for rockets and missiles, and it is employed as a vasodilator in the easing of cardiac pain.

Pure nitroglycerin is a colourless, oily, somewhat toxic liquid having a sweet, burning taste. It was first prepared in 1847 by the Italian chemist Ascanio Sobrero by adding glycerol to a mixture of concentrated nitric and sulfuric acids. The hazards involved in preparing large quantities of nitroglycerin have been greatly reduced by widespread adoption of continuous nitration processes.

Nitroglycerin, with the molecular formula C3H5(ONO2)3, has a high nitrogen content (18.5 percent) and contains sufficient oxygen atoms to oxidize the carbon and hydrogen atoms while nitrogen is being liberated, so that it is one of the most powerful explosives known. Detonation of nitroglycerin generates gases that would occupy more than 1,200 times the original volume at ordinary room temperature and pressure; moreover, the heat liberated raises the temperature to about 5,000 °C (9,000 °F). The overall effect is the instantaneous development of a pressure of 20,000 atmospheres; the resulting detonation wave moves at approximately 7,700 metres per second (more than 17,000 miles per hour). Nitroglycerin is extremely sensitive to shock and to rapid heating; it begins to decompose at 50–60 °C (122–140 °F) and explodes at 218 °C (424 °F).
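The speed figure quoted above is easy to verify arithmetically (using the standard definition 1 mile = 1,609.344 m):

```python
# Convert the quoted detonation-wave speed of 7,700 m/s to miles per hour.
speed_ms = 7700                       # metres per second
mph = speed_ms * 3600 / 1609.344      # 3,600 s per hour; 1,609.344 m per mile
print(round(mph))                     # about 17,224 mph, i.e. "more than 17,000 mph"
```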

The safe use of nitroglycerin as a blasting explosive became possible after the Swedish chemist Alfred B. Nobel developed dynamite in the 1860s by combining liquid nitroglycerin with an inert porous material such as charcoal or diatomaceous earth. Nitroglycerin plasticizes collodion (a form of nitrocellulose) to form blasting gelatin, a very powerful explosive. Nobel’s discovery of this action led to the development of ballistite, the first double-base propellant and a precursor of cordite.

A serious problem in the use of nitroglycerin results from its high freezing point (13 °C [55 °F]) and the fact that the solid is even more shock-sensitive than the liquid. This disadvantage is overcome by using mixtures of nitroglycerin with other polynitrates; for example, a mixture of nitroglycerin and ethylene glycol dinitrate freezes at −29 °C (−20 °F).

Uses

Nitroglycerin extended-release capsules are used to prevent chest pain (angina) in people with a certain heart condition (coronary artery disease). This medication belongs to a class of drugs known as nitrates. Angina occurs when the heart muscle is not getting enough blood. This drug works by relaxing and widening blood vessels so blood can flow more easily to the heart. This medication will not relieve chest pain once it occurs. It is also not intended to be taken just before physical activities (such as exercise, sexual activity) to prevent chest pain. Other medications may be needed in these situations. Consult your doctor for more details.

How to use Nitroglycerin

Take this medication by mouth, usually 3 to 4 times daily or as directed by your doctor. It is important to take the drug at the same times each day. Do not change the dosing times unless directed by your doctor. The dosage is based on your medical condition and response to treatment.

Swallow this medication whole. Do not crush or chew the capsules. Doing so can release all of the drug at once and may increase your risk of side effects.

Use this medication regularly to get the most benefit from it. Do not suddenly stop taking this medication without consulting your doctor. Some conditions may become worse when the drug is suddenly stopped. Your dose may need to be gradually decreased.

Although unlikely, when this medication is used for a long time, it may not work as well and may require different dosing. Tell your doctor if this medication stops working well (for example, you have worsening chest pain or it occurs more often).

Side Effects

Headache, dizziness, lightheadedness, nausea, and flushing may occur as your body adjusts to this medication. If any of these effects persist or worsen, tell your doctor or pharmacist promptly.

Headache is often a sign that this medication is working. Your doctor may recommend treating headaches with an over-the-counter pain reliever (such as acetaminophen, aspirin). If the headaches continue or become severe, tell your doctor promptly.

To reduce the risk of dizziness and lightheadedness, get up slowly when rising from a sitting or lying position.

Remember that this medication has been prescribed because your doctor has judged that the benefit to you is greater than the risk of side effects. Many people using this medication do not have serious side effects.

Tell your doctor right away if any of these unlikely but serious side effects occur: fainting, fast/irregular/pounding heartbeat.

A very serious allergic reaction to this drug is rare. However, seek immediate medical attention if you notice any of the following symptoms of a serious allergic reaction: rash, itching/swelling (especially of the face/tongue/throat), severe dizziness, trouble breathing.

Precautions

Before using this medication, tell your doctor or pharmacist if you are allergic to it; or to similar drugs (such as isosorbide mononitrate); or to nitrites; or if you have any other allergies. This product may contain inactive ingredients, which can cause allergic reactions or other problems. Talk to your pharmacist for more details.

Before using this medication, tell your doctor or pharmacist your medical history, especially of: recent head injury, frequent stomach cramping/watery stools/severe diarrhea (GI hypermotility), lack of proper absorption of nutrients (malabsorption), anemia, low blood pressure, dehydration, other heart problems (such as recent heart attack).

This drug may make you dizzy. Alcohol can make you more dizzy. Do not drive, use machinery, or do anything that needs alertness until you can do it safely. Limit alcoholic beverages.

Before having surgery, tell your doctor or dentist about all the products you use (including prescription drugs, nonprescription drugs, and herbal products).

Older adults may be more sensitive to the side effects of this medication, especially dizziness and lightheadedness, which could increase the risk of falls.

During pregnancy, this medication should be used only when clearly needed. Discuss the risks and benefits with your doctor.

It is not known whether this drug passes into breast milk or if it may harm a nursing infant. Consult your doctor before breast-feeding.

Interactions

Drug interactions may change how your medications work or increase your risk for serious side effects. This document does not contain all possible drug interactions. Keep a list of all the products you use (including prescription/nonprescription drugs and herbal products) and share it with your doctor and pharmacist. Do not start, stop, or change the dosage of any medicines without your doctor's approval.

Some products that may interact with this drug include: drugs used to treat pulmonary hypertension or erectile dysfunction (such as sildenafil, tadalafil), and certain drugs used to treat migraine headaches.

This medication may interfere with certain laboratory tests (including blood cholesterol levels), possibly causing false test results. Make sure laboratory personnel and all your doctors know you use this drug.

Overdose

Symptoms of overdose may include: slow heartbeat, vision changes, severe nausea/vomiting, sweating, cold/clammy skin, bluish fingers/toes/lips.

Storage

Store at room temperature between 59-86 degrees F (15-30 degrees C) away from light and moisture. Do not store in the bathroom. Keep all medicines away from children and pets.

Do not flush medications down the toilet or pour them into a drain unless instructed to do so. Properly discard this product when it is expired or no longer needed. Consult your pharmacist or local waste disposal company for more details about how to safely discard your product.




#1331 2022-03-29 00:18:06

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,384

Re: Miscellany

1305) Alkali

Summary

Alkali is any of the soluble hydroxides of the alkali metals—i.e., lithium, sodium, potassium, rubidium, and cesium. Alkalies are strong bases that turn litmus paper from red to blue; they react with acids to yield neutral salts; and they are caustic and in concentrated form are corrosive to organic tissues. The term alkali is also applied to the soluble hydroxides of such alkaline-earth metals as calcium, strontium, and barium and also to ammonium hydroxide. The term was originally applied to the ashes of burned sodium- or potassium-bearing plants, from which the oxides of sodium and potassium could be leached.

The manufacture of industrial alkali usually refers to the production of soda ash (Na2CO3; sodium carbonate) and caustic soda (NaOH; sodium hydroxide). Other industrial alkalies include potassium hydroxide, potash, and lye. The production of a vast range of consumer goods depends on the use of alkali at some stage. Soda ash and caustic soda are essential to the production of glass, soap, miscellaneous chemicals, rayon and cellophane, paper and pulp, cleansers and detergents, textiles, water softeners, certain metals (especially aluminum), bicarbonate of soda, and gasoline and other petroleum derivatives.

People have been using alkali for centuries, obtaining it first from the leachings (water solutions) of certain desert earths. In the late 18th century the leaching of wood or seaweed ashes became the chief source of alkali. In 1775 the French Académie des Sciences offered monetary prizes for new methods for manufacturing alkali. The prize for soda ash was awarded to the Frenchman Nicolas Leblanc, who in 1791 patented a process for converting common salt (sodium chloride) into sodium carbonate. The Leblanc process dominated world production until late in the 19th century, but following World War I it was completely supplanted by another salt-conversion process that had been perfected in the 1860s by Ernest Solvay of Belgium. Late in the 19th century, electrolytic methods for the production of caustic soda appeared and grew rapidly in importance.

In the Solvay, or ammonia-soda, process of soda ash manufacture, common salt in the form of a strong brine is chemically treated to eliminate calcium and magnesium impurities and is then saturated with recycling ammonia gas in towers. The ammoniated brine is then carbonated using carbon dioxide gas under moderate pressure in a different type of tower. These two processes yield ammonium bicarbonate and sodium chloride, the double decomposition of which gives the desired sodium bicarbonate as well as ammonium chloride. The sodium bicarbonate is then heated to decompose it to the desired sodium carbonate. The ammonia involved in the process is almost completely recovered by treating the ammonium chloride with lime to yield ammonia and calcium chloride. The recovered ammonia is then reused in the processes already described.
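For readers who want the chemistry spelled out, the stages described above correspond to these standard textbook reactions (added here as a summary; they do not appear in the source text):

```latex
% Carbonation of the ammoniated brine (double decomposition):
\mathrm{NaCl + NH_3 + CO_2 + H_2O \longrightarrow NaHCO_3 + NH_4Cl}

% Calcining the sodium bicarbonate to soda ash:
\mathrm{2\,NaHCO_3 \xrightarrow{\;\Delta\;} Na_2CO_3 + H_2O + CO_2}

% Ammonia recovery by treating ammonium chloride with lime:
\mathrm{2\,NH_4Cl + Ca(OH)_2 \longrightarrow 2\,NH_3 + CaCl_2 + 2\,H_2O}
```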

The electrolytic production of caustic soda involves the electrolysis of a strong salt brine in an electrolytic cell. (Electrolysis is the breaking down of a compound in solution into its constituents by means of an electric current in order to bring about a chemical change.) The electrolysis of sodium chloride yields chlorine and either sodium hydroxide or metallic sodium. Sodium hydroxide in some cases competes with sodium carbonate for the same applications, and in any case the two are interconvertible by rather simple processes. Sodium chloride can be made into an alkali by either of the two processes, the difference between them being that the ammonia-soda process gives the chlorine in the form of calcium chloride, a compound of small economic value, while the electrolytic processes produce elemental chlorine, which has innumerable uses in the chemical industry. For this reason the ammonia-soda process, having displaced the Leblanc process, has found itself being displaced, the older ammonia-soda plants continuing to operate very efficiently while newly built plants use electrolytic processes.
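The electrolysis step can likewise be summarized by the standard overall chlor-alkali reaction (a textbook summary, not from the source text):

```latex
% Electrolysis of brine yields caustic soda, hydrogen, and elemental chlorine:
\mathrm{2\,NaCl + 2\,H_2O \xrightarrow{\;\text{electrolysis}\;} 2\,NaOH + H_2 + Cl_2}
```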

In a few places in the world there are substantial deposits of the mineral form of soda ash, known as natural alkali. The mineral usually occurs as sodium sesquicarbonate, or trona (Na2CO3·NaHCO3·2H2O). The United States produces much of the world’s natural alkali from vast trona deposits in underground mines in Wyoming and from dry lake beds in California.

Details

In chemistry, an alkali (from Arabic al-qalīy, literally 'ashes of the saltwort') is a basic, ionic salt of an alkali metal or an alkaline earth metal. An alkali can also be defined as a base that dissolves in water. A solution of a soluble base has a pH greater than 7.0. The adjective alkaline is commonly, and alkalescent less often, used in English as a synonym for basic, especially for bases soluble in water. This broad use of the term is likely to have come about because alkalis were the first bases known to obey the Arrhenius definition of a base, and they are still among the most common bases.

Etymology

The word "alkali" is derived from Arabic al qalīy, meaning the calcined ashes, referring to the original source of alkaline substances. A water-extract of burned plant ashes, called potash and composed mostly of potassium carbonate, was mildly basic. After heating this substance with calcium hydroxide (slaked lime), a far more strongly basic substance known as caustic potash (potassium hydroxide) was produced. Caustic potash was traditionally used in conjunction with animal fats to produce soft soaps by saponification, a caustic process for rendering soap from fats that has been known since antiquity. Plant potash lent its name to the element potassium, which was first derived from caustic potash, and also gave potassium its chemical symbol K (from the German name Kalium), which ultimately derived from alkali.

Common properties of alkalis and bases

Alkalis are all Arrhenius bases, ones which form hydroxide ions (OH−) when dissolved in water. Common properties of alkaline aqueous solutions include:

* Moderately concentrated solutions (over 10⁻³ M) have a pH of 10 or greater. This means that they will turn phenolphthalein from colorless to pink.
* Concentrated solutions are caustic (causing chemical burns).
* Alkaline solutions are slippery or soapy to the touch, due to the saponification of the fatty substances on the surface of the skin.
* Alkalis are normally water-soluble, although some like barium carbonate are only soluble when reacting with an acidic aqueous solution.
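The pH threshold in the first bullet can be checked numerically. For a strong base that dissociates completely, pOH = −log₁₀[OH⁻] and pH = 14 − pOH at 25 °C (this sketch assumes complete dissociation and Kw = 10⁻¹⁴; the helper function is illustrative, not a standard library API):

```python
import math

def ph_of_strong_base(molarity, hydroxides_per_formula=1):
    """pH of a fully dissociated strong base at 25 °C (Kw = 1e-14 assumed)."""
    oh = molarity * hydroxides_per_formula  # [OH-] in mol/L
    poh = -math.log10(oh)
    return 14.0 - poh

# A 10^-3 M NaOH solution: pOH = 3, so pH = 11 -- above the pH 10 threshold.
print(ph_of_strong_base(1e-3))  # 11.0
```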

Difference between alkali and base

The terms "base" and "alkali" are often used interchangeably, particularly outside the context of chemistry and chemical engineering.

There are various more specific definitions for the concept of an alkali. Alkalis are usually defined as a subset of the bases. One of two subsets is commonly chosen.

* A basic salt of an alkali metal or alkaline earth metal (This includes Mg(OH)2 (magnesium hydroxide) but excludes NH3 (ammonia).)
* Any base that is soluble in water and forms hydroxide ions or the solution of a base in water. (This includes both Mg(OH)2 and NH3, which forms NH4OH.)

The second subset of bases is also called an "Arrhenius base".

Alkali salts

Alkali salts are soluble hydroxides of alkali metals and alkaline earth metals, of which common examples are:

* Sodium hydroxide (NaOH) – often called "caustic soda"
* Potassium hydroxide (KOH) – commonly called "caustic potash"
* Lye – generic term for either of two previous salts or their mixture
* Calcium hydroxide (Ca(OH)2) – saturated solution known as "limewater"
* Magnesium hydroxide (Mg(OH)2) – an atypical alkali since it has low solubility in water (although the dissolved portion is considered a strong base due to complete dissociation of its ions)

Alkaline soil

Soils with pH values that are higher than 7.3 are usually defined as being alkaline. These soils can occur naturally, due to the presence of alkali salts. Although many plants do prefer slightly basic soil (including vegetables like cabbage and fodder like buffalo grass), most plants prefer a mildly acidic soil (with pHs between 6.0 and 6.8), and alkaline soils can cause problems.

Alkali lakes

In alkali lakes (also called soda lakes), evaporation concentrates the naturally occurring carbonate salts, giving rise to an alkalic and often saline lake.

Examples of alkali lakes:

* Alkali Lake, Lake County, Oregon
* Baldwin Lake, San Bernardino County, California
* Bear Lake on the Utah–Idaho border
* Lake Magadi in Kenya
* Lake Turkana in Kenya
* Mono Lake, near Owens Valley in California
* Redberry Lake, Saskatchewan
* Summer Lake, Lake County, Oregon
* Tramping Lake, Saskatchewan




#1332 2022-03-30 00:20:36

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,384

Re: Miscellany

1306) Systems analyst

Summary

A systems analyst, also known as a business technology analyst, is an information technology (IT) professional who specializes in analyzing, designing and implementing information systems. Systems analysts assess the suitability of information systems in terms of their intended outcomes and liaise with end users, software vendors and programmers in order to achieve these outcomes. A systems analyst uses analysis and design techniques to solve business problems using information technology. Systems analysts may serve as change agents who identify the organizational improvements needed, design systems to implement those changes, and train and motivate others to use the systems.

Industry

As of 2015, the sectors employing the greatest numbers of computer systems analysts were state government, insurance, computer system design, professional and commercial equipment, and company and enterprise management. The number of jobs in this field is projected to grow from 487,000 as of 2009 to 650,000 by 2016.

This job ranked third best in a 2010 survey, fifth best in the 2011 survey, ninth best in the 2012 survey, and tenth best in the 2013 survey.

Details

Application analyst

In the USA, an application analyst (also applications systems analyst) is someone whose job is to support a given application or applications. This may entail some computer programming, some system administration skills, and the ability to analyze a given problem, diagnose it and find its root cause, and then either solve it or pass the problem on to the relevant people if it does not lie within the application analyst's area of responsibility. Typically an application analyst will be responsible for supporting bespoke (i.e., custom) applications programmed in a variety of programming languages and using a variety of database systems, middleware systems and the like. The role is a form of third-level technical support or help desk. It may or may not involve some customer contact, but most often it involves getting a description of the problem from the help desk, making a diagnosis, and then either creating a fix or passing the problem on to someone who is responsible for the actual problem area.

In some companies, an application analyst is a software architect.

Overview

Depending on the industry, an application analyst applies subject-matter expertise by verifying design documents, executing tests of new functionality, and validating defect fixes. Additional responsibilities can include acting as a liaison between business stakeholders and IT developers, and providing clarification of system requirements and design for integrating application teams. An application analyst interfaces with multiple channels (depending on scope) to provide demos, application walk-throughs, and training, and works with IT resources to ensure high-quality deliverables, often using agile methodology for rapid delivery. Participating in the change-request management process to field, document, and communicate responses and feedback is also a common responsibility.

Application systems analysts consult with management and help develop software to fit clients' needs. They must provide accurate, quality analyses of new program applications, as well as conduct testing, locate potential problems, and solve them efficiently. Clients' needs may vary widely (analysts may work in the medical field or the securities industry, for example), so staying up to date with software and technology trends in the field is essential.

Companies that require analysts are mostly in the fields of business, accounting, security, and scientific engineering. Application systems analysts work with other analysts and program designers, as well as managers and clients. These analysts generally work in an office setting, but there are exceptions when clients may need services at their office or home. Application systems analysts usually work full time, although they may need to work nights and weekends to resolve emerging issues or when deadlines approach; some companies may require analysts to be on call.

Analyst positions typically require at least a bachelor's degree in computer science or a related field, and a master's degree may be required. Previous information technology experience - preferably in a similar role - is required as well. Application systems analysts must have excellent communication skills, as they often work directly with clients and may be required to train other analysts. Flexibility, the ability to work well in teams, and the ability to work well with minimal supervision are also preferred traits.

Application analyst tasks include:

* Analyze and route issues into the proper ticketing systems and update and close tickets in a timely manner.
* Devise or modify procedures to solve problems considering computer equipment capacity and limitations.
* Establish new users, manage access levels and reset passwords.
* Conduct application testing and provide database management support.
* Create and maintain documentation as necessary for operational and security audits.

Software analyst

In a software development team, a software analyst is the person who studies the software application domain and prepares the software requirements and specification (Software Requirements Specification) documents. The software analyst is the bridge between the software users and the software developers: they convey the demands of software users to the developers.

Business analyst

A business analyst (BA) is a person who analyzes and documents the market environment, processes, or systems of businesses. According to Robert Half, the typical roles for business analysts include creating detailed business analysis, budgeting and forecasting, planning and monitoring, variance analysis, pricing, reporting and defining business requirements for stakeholders. Related to business analysts are system analysts, who serve as the intermediary between business and IT, assessing whether the IT solution is suitable for the business.

Areas of business analysis

There are at least four types of business analysis:

* Business developer – to identify the organization's business needs and business opportunities.
* Business model analysis – to define and model the organization's policies and market approaches.
* Process design – to standardize the organization’s workflows.
* Systems analysis – the interpretation of business rules and requirements for technical systems (generally within IT).

The business analyst is someone who is part of the business operation and works with information technology to improve the quality of the services being delivered, sometimes assisting in the integration and testing of new solutions. Business analysts act as a liaison between management and technical developers.

The BA may also support the development of training material, participate in the implementation, and provide post-implementation support.

Industries

This occupation (business analyst) is almost equally distributed between male and female workers; although it belongs to the IT industry, entry to the job is equally open to men and women. Research has also found that annual salaries for business analysts are highest in Australia and lowest in India.

Business analysis is a professional discipline of identifying business needs and determining solutions to business problems. Solutions often include a software-systems development component, but may also consist of process improvements, organizational change or strategic planning and policy development. The person who carries out this task is called a business analyst or BA.

Business analysts do not work solely on developing software systems; they work across the organisation, solving business problems in consultation with business stakeholders. While most of the work business analysts do today relates to software development and solutions, this derives from the ongoing massive changes businesses all over the world are experiencing in their attempts to digitise.

Although there are different role definitions, depending upon the organization, there does seem to be an area of common ground where most business analysts work. The responsibilities appear to be:

* To investigate business systems, taking a holistic view of the situation. This may include examining elements of the organisation structures and staff development issues as well as current processes and IT systems.
* To evaluate actions to improve the operation of a business system. Again, this may require an examination of organisational structure and staff development needs, to ensure that they are in line with any proposed process redesign and IT system development.
* To document the business requirements for the IT system support using appropriate documentation standards.

In line with this, the core business analyst role could be defined as an internal consultancy role that has the responsibility for investigating business situations, identifying and evaluating options for improving business systems, defining requirements and ensuring the effective use of information systems in meeting the needs of the business.

Additional Information

How to become a computer systems analyst.

Getting an IT degree, whether a traditional or online degree, presents such a wide variety of career options, it can be difficult to know which one to pursue. While your choice will depend a great deal on the skills you have and the work environment you prefer, your job outlook is extremely positive no matter what you choose. There are a variety of positions you can choose within the IT industry, from computer scientist to software engineer.

One IT job that is thriving in recent years is that of a computer systems analyst—in fact, recent reports from the Bureau of Labor Statistics say that “employment of computer systems analysts is projected to grow 21 percent from 2014 to 2024, much faster than the average for all occupations.”

What is a computer systems analyst?

You've probably heard of many IT careers, but a computer systems analyst may not be one you’re familiar with. Computer systems analysts are responsible for determining how a business' computer system is serving the needs of the company, and what can be done to make those systems and procedures more effective. They work closely with IT managers to determine what system upgrades are financially feasible and what technologies are available that could increase the company's efficiency.

Computer systems analysts may also design and develop new systems, train users, and configure hardware and software as necessary. The type of system that analysts work with will largely depend upon the needs of their employer.

How do I become a computer systems analyst?

There aren’t specific certification tests you need to pass to become a computer systems analyst, but you will need a bachelor’s or master’s degree in computer science or similar credentials. There are many places where you can earn this bachelor’s or master’s degree, including online universities like Western Governors University. Be aware that this career doesn’t just require computer knowledge: business knowledge is also extremely beneficial for becoming a computer systems analyst. It’s recommended that you take some business courses, or even pursue an Information Technology MBA, to be most successful. Computer systems analysts need to keep themselves constantly up to date on the latest innovations in technology, and can earn credentials from the Institute for the Certification of Computing Professionals to set themselves apart from their peers.

What experience do I need to become a computer systems analyst?

Having experience is always helpful when trying to secure a job. While earning an IT degree will give you important training, applying your skills on the job through an internship or other employment will look great on your resume and could lead to career opportunities.

How much money do computer systems analysts make?

According to the Bureau of Labor Statistics, in May 2016 the median annual salary for computer systems analysts was $87,220, meaning that roughly half of workers earned more than this and half earned less. The lowest 10% earned around $53,110 annually, and the highest 10% earned more than $137,690 annually. The top industries for computer systems analysts in 2016 included computer systems design, finance and insurance, company management, the information sector, and state and local government.

What is the job outlook like for a computer systems analyst?

Again, the job outlook for those pursuing a computer systems analyst career is very positive. The Bureau of Labor Statistics reports that employment is expected to grow 21% in the next 7 years, which is much faster than the average for all occupations. As technology continues to flourish, organizations will continue to increase their reliance on IT, and analysts will be hired to design and install new systems. Additional job growth is expected in data processing, hosting, and related services.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1333 2022-03-30 17:42:50

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,384

Re: Miscellany

1307) Daylight saving time

Summary

Daylight Saving Time, also called summer time, is a system for uniformly advancing clocks, so as to extend daylight hours during conventional waking time in the summer months. In countries in the Northern Hemisphere, clocks are usually set ahead one hour in late March or in April and are set back one hour in late September or in October.

The practice was first suggested in a whimsical essay by Benjamin Franklin in 1784. In 1907 an Englishman, William Willett, campaigned for setting the clock ahead by 80 minutes in four moves of 20 minutes each during April and the reverse in September. In 1909 the British House of Commons rejected a bill to advance the clock by one hour in the spring and return to Greenwich Mean Time in the autumn.

Several countries, including Australia, Great Britain, Germany, and the United States, adopted summer Daylight Saving Time during World War I to conserve fuel by reducing the need for artificial light. During World War II clocks were kept continuously advanced by an hour in some countries—e.g., in the United States from February 9, 1942, to September 30, 1945; and England used “double summer time” during part of the year, advancing clocks two hours from Standard Time during the summer and one hour during the winter months.

In the United States, Daylight Saving Time formerly began on the last Sunday in April and ended on the last Sunday in October. In 1986 the U.S. Congress passed a law that, beginning the following year, moved up the start of Daylight Saving Time to the first Sunday in April but kept its end date the same. In 2007 Daylight Saving Time changed again in the United States, as the start date was moved to the second Sunday in March and the end date to the first Sunday in November. In most of the countries of western Europe, Daylight Saving Time starts on the last Sunday in March and ends on the last Sunday in October.

Details

Daylight saving time (DST), also known as daylight savings time or daylight time (United States, Canada, and Australia), and summer time (United Kingdom, European Union, and others), is the practice of advancing clocks (typically by one hour) during warmer months so that darkness falls at a later clock time. The typical implementation of DST is to set clocks forward by one hour in the spring ("spring forward"), and to set clocks back by one hour in autumn ("fall back") to return to standard time. As a result, there is one 23-hour day in late winter or early spring and one 25-hour day in autumn.
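
The 23-hour and 25-hour days mentioned above can be verified directly with Python's zoneinfo module; the US Eastern zone and the 2022 transition dates here are just an illustrative choice:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

eastern = ZoneInfo("America/New_York")

def day_length(year, month, day, tz):
    """Real elapsed time between one local midnight and the next."""
    start = datetime(year, month, day, tzinfo=tz)
    end = start + timedelta(days=1)  # wall-clock arithmetic: next local midnight
    # Compare both instants in UTC to measure the actual elapsed time.
    return end.astimezone(timezone.utc) - start.astimezone(timezone.utc)

print(day_length(2022, 3, 13, eastern))   # spring-forward day: 23:00:00
print(day_length(2022, 11, 6, eastern))   # fall-back day: 1 day, 1:00:00
print(day_length(2022, 7, 1, eastern))    # ordinary day: 1 day, 0:00:00
```

The key detail is that adding a timedelta to an aware datetime performs wall-clock arithmetic, while converting to UTC first measures absolute elapsed time; the difference between the two views is exactly the DST shift.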

The idea of aligning waking hours to daylight hours to conserve candles was first proposed in 1784 by US polymath Benjamin Franklin. In a satirical letter to the editor of The Journal of Paris, Franklin suggested that waking up earlier in the summer would economize candle usage and calculated considerable savings. In 1895, New Zealand entomologist and astronomer George Hudson proposed the idea of changing clocks by two hours every spring to the Wellington Philosophical Society, as he wanted to have more daylight hours to devote to collecting and examining insects. In 1907, British resident William Willett presented the idea as a way to save energy; after some serious consideration, however, it was not implemented.

In 1908 Port Arthur in Ontario, Canada, started using DST. Starting on April 30, 1916, the German Empire and Austria-Hungary each organized the first nationwide implementation in their jurisdictions. Many countries have used DST at various times since then, particularly since the 1970s energy crisis. DST is generally not observed near the Equator, where sunrise and sunset times do not vary enough to justify it. Some countries observe it only in some regions: for example, parts of Australia observe it, while other parts do not. Conversely, it is not observed at some places at high latitudes, because there are wide variations in sunrise and sunset times and a one-hour shift would make relatively little difference. The United States observes it, except for the states of Hawaii and Arizona (within the latter, however, the Navajo Nation does observe it, conforming to federal practice). A minority of the world's population uses DST; Asia and Africa generally do not.

DST clock shifts sometimes complicate timekeeping and can disrupt travel, billing, record keeping, medical devices, and sleep patterns. Computer software generally adjusts clocks automatically.

Rationale

Industrialized societies usually follow a clock-based schedule for daily activities that does not change throughout the course of the year. The time of day that individuals begin and end work or school, and the coordination of mass transit, for example, usually remain constant year-round. In contrast, an agrarian society's daily routines for work and personal conduct are more likely governed by the length of daylight hours and by solar time, which change seasonally because of the Earth's axial tilt. North and south of the tropics, daylight lasts longer in summer and shorter in winter, with the effect becoming greater the farther one moves from the equator.
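
The seasonal variation described above follows the standard sunrise equation. As a rough sketch (ignoring atmospheric refraction and assuming a fixed June-solstice declination of about 23.44 degrees):

```python
import math

def daylight_hours(latitude_deg, declination_deg):
    """Approximate daylight duration via the sunrise equation:
    cos(h) = -tan(latitude) * tan(declination),
    where h is the half-day hour angle (15 degrees per hour)."""
    x = -math.tan(math.radians(latitude_deg)) * math.tan(math.radians(declination_deg))
    x = max(-1.0, min(1.0, x))  # clamp: polar day (x <= -1) or polar night (x >= 1)
    return 2 * math.degrees(math.acos(x)) / 15

# June solstice (declination ~ +23.44 degrees): daylight grows with latitude.
for lat in (0, 20, 40, 60):
    print(lat, round(daylight_hours(lat, 23.44), 1))  # 12.0, 13.2, 14.8, 18.5
```

At the equator the formula gives twelve hours year-round, which is why DST offers nothing there; at 60 degrees the summer day already exceeds eighteen hours.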

After synchronously resetting all clocks in a region to one hour ahead of standard time, individuals following a clock-based schedule will awaken an hour earlier than they otherwise would have (that is, an hour's worth of darkness earlier) and will begin and complete their daily work routines an hour of daylight earlier, leaving them an extra hour of daylight after workday activities. They will, however, have one less hour of daylight at the start of the workday, making the policy less practical during winter.

While the times of sunrise and sunset change at roughly equal rates as the seasons change, proponents of daylight saving time argue that most people prefer a greater increase in daylight hours after the typical "nine to five" workday. Supporters have also argued that DST decreases energy consumption by reducing the need for lighting and heating, but the actual effect on overall energy use is heavily disputed.

The shift in apparent time is also motivated by practicality. In American temperate latitudes, for example, the sun rises around 04:30 at the summer solstice and sets around 19:30. Since most people are asleep at 04:30, it is seen as more practical to pretend that 04:30 is actually 05:30, thereby allowing people to wake close to the sunrise and be active in the evening light.

The manipulation of time at higher latitudes (for example Iceland, Nunavut, Scandinavia, and Alaska) has little effect on daily life, because the length of day and night changes more extremely throughout the seasons (in comparison to lower latitudes). Sunrise and sunset times become significantly out of phase with standard working hours regardless of manipulation of the clock.

DST is similarly of little use for locations near the Equator, because these regions see only a small variation in daylight over the course of the year. The effect also varies according to how far east or west a location is within its time zone, with locations farther east in the zone benefiting more from DST than locations farther west in the same zone. Nor is daylight saving of much practical use in places such as China, which, despite spanning thousands of miles, is by government mandate located within a single time zone.

History

Ancient civilizations adjusted daily schedules to the sun more flexibly than DST does, often dividing daylight into 12 hours regardless of daytime, so that each daylight hour became progressively longer during spring and shorter during autumn. For example, the Romans kept time with water clocks that had different scales for different months of the year; at Rome's latitude, the third hour from sunrise (hora tertia) started at 09:02 solar time and lasted 44 minutes at the winter solstice, but at the summer solstice it started at 06:58 and lasted 75 minutes. From the 14th century onward, equal-length civil hours supplanted unequal ones, so civil time no longer varied by season. Unequal hours are still used in a few traditional settings, such as monasteries of Mount Athos and in Jewish ceremonies.
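
The Roman scheme can be sketched numerically: one unequal hour is simply the sunrise-to-sunset span divided into twelve parts. The solstice daylight spans below are assumed round figures, chosen to be consistent with the 44- and 75-minute hours cited above:

```python
from datetime import timedelta

def unequal_hour(sunrise_to_sunset: timedelta) -> timedelta:
    """One Roman daylight 'hour': the daylight span divided into 12 parts."""
    return sunrise_to_sunset / 12

# Assumed daylight spans at Rome's latitude:
winter = timedelta(hours=8, minutes=48)  # winter solstice
summer = timedelta(hours=15)             # summer solstice

print(unequal_hour(winter))  # 0:44:00
print(unequal_hour(summer))  # 1:15:00
```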

Benjamin Franklin published the proverb "early to bed and early to rise makes a man healthy, wealthy, and wise," and published a letter in the Journal de Paris during his time as an American envoy to France (1776–1785) suggesting that Parisians economize on candles by rising earlier to use morning sunlight. This 1784 satire proposed taxing window shutters, rationing candles, and waking the public by ringing church bells and firing cannons at sunrise. Despite common misconception, Franklin did not actually propose DST; 18th-century Europe did not even keep precise schedules. However, this changed as rail transport and communication networks required a standardization of time unknown in Franklin's day.

In 1810, the Spanish National Assembly Cortes of Cádiz issued a regulation that moved certain meeting times forward by one hour from May 1 to September 30 in recognition of seasonal changes, but it did not actually change the clocks. It also acknowledged that private businesses were in the practice of changing their opening hours to suit daylight conditions, but they did so of their own volition.

New Zealand entomologist George Hudson first proposed modern DST. His shift-work job gave him leisure time to collect insects and led him to value after-hours daylight. In 1895, he presented a paper to the Wellington Philosophical Society proposing a two-hour daylight-saving shift, and considerable interest was expressed in Christchurch; he followed up with an 1898 paper. Many publications credit the DST proposal to prominent English builder and outdoorsman William Willett, who independently conceived DST in 1905 during a pre-breakfast ride when he observed how many Londoners slept through a large part of a summer day. Willett also was an avid golfer who disliked cutting short his round at dusk. His solution was to advance the clock during the summer months, and he published the proposal two years later. Liberal Party member of parliament Robert Pearce took up the proposal, introducing the first Daylight Saving Bill to the House of Commons on February 12, 1908. A select committee was set up to examine the issue, but Pearce's bill did not become law and several other bills failed in the following years. Willett lobbied for the proposal in the UK until his death in 1915.

Port Arthur, Ontario, Canada, was the first city in the world to enact DST, on July 1, 1908. This was followed by Orillia, Ontario, introduced by William Sword Frost while mayor from 1911 to 1912. The first states to adopt DST (German: Sommerzeit) nationally were those of the German Empire and its World War I ally Austria-Hungary commencing April 30, 1916, as a way to conserve coal during wartime. Britain, most of its allies, and many European neutrals soon followed. Russia and a few other countries waited until the next year, and the United States adopted daylight saving in 1918. Most jurisdictions abandoned DST in the years after the war ended in 1918, with exceptions including Canada, the United Kingdom, France, Ireland, and the United States. It became common during World War II (some countries adopted double summer time), and was widely adopted in America and Europe from the 1970s as a result of the 1970s energy crisis. Since then, the world has seen many enactments, adjustments, and repeals.

It is a common myth in the United States that DST was first implemented for the benefit of farmers. In reality, farmers have been one of the strongest lobbying groups against DST since it was first implemented. The factors that influence farming schedules, such as morning dew and dairy cattle's readiness to be milked, are ultimately dictated by the sun, so the time change introduces unnecessary challenges.

DST was first implemented in the US with the Standard Time Act of 1918, a wartime measure for seven months during World War I in the interest of adding more daylight hours to conserve energy resources. Year-round DST, or "War Time", was implemented again during World War II. After the war, local jurisdictions were free to choose if and when to observe DST until the Uniform Time Act which standardized DST in 1966. Permanent daylight saving time was enacted for the winter of 1974, but there were complaints of children going to school in the dark and working people commuting and starting their work day in pitch darkness during the winter months, and it was repealed a year later.

The United States has begun the process of making daylight saving time permanent across all participating states, with the Senate passing the Sunshine Protection Act by unanimous consent on March 15, 2022. If it passes the House of Representatives and is signed by President Joe Biden, any state currently observing daylight saving time would begin to do so year-round starting in November 2023.

Procedure

The relevant authorities usually schedule clock changes to occur at (or soon after) midnight, and on a weekend, in order to lessen disruption to weekday schedules. A one-hour change is usual, but twenty-minute and two-hour changes have been used in the past. In all countries that observe daylight saving time seasonally (i.e. during summer and not winter), the clock is advanced from standard time to daylight saving time in the spring, and they are turned back from daylight saving time to standard time in the autumn. The practice, therefore, reduces the number of civil hours in the day of the springtime change, and it increases the number of civil hours in the day of the autumnal change. For a midnight change in spring, a digital display of local time would appear to jump from 23:59:59.9 to 01:00:00.0. For the same clock in autumn, the local time would appear to repeat the hour preceding midnight, i.e. it would jump from 23:59:59.9 to 23:00:00.0.
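
The skipped spring hour and repeated autumn hour can be observed with Python's zoneinfo; the fold attribute (PEP 495) picks which occurrence an ambiguous wall time denotes. Europe/London and the 2022 change dates are an illustrative choice:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

tz = ZoneInfo("Europe/London")

# Autumn 2022: at 02:00 BST clocks return to 01:00 GMT, so 01:30 occurs twice.
first = datetime(2022, 10, 30, 1, 30, tzinfo=tz, fold=0)   # first pass (BST, UTC+1)
second = datetime(2022, 10, 30, 1, 30, tzinfo=tz, fold=1)  # second pass (GMT, UTC+0)
print(first.astimezone(timezone.utc))   # 2022-10-30 00:30:00+00:00
print(second.astimezone(timezone.utc))  # 2022-10-30 01:30:00+00:00

# Spring 2022: 01:30 on 27 March never occurs (01:00 GMT jumps to 02:00 BST);
# round-tripping that wall time through UTC lands just past the gap.
gap = datetime(2022, 3, 27, 1, 30, tzinfo=tz)
print(gap.astimezone(timezone.utc).astimezone(tz))  # 2022-03-27 02:30:00+01:00
```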

In most countries that observe seasonal daylight saving time, the clock observed in winter is legally named "standard time" in accordance with the standardization of time zones to agree with the local mean time near the center of each region. An exception exists in Ireland, where its winter clock has the same offset (UTC±00:00) and legal name as that in Britain (Greenwich Mean Time)—but while its summer clock also has the same offset as Britain's (UTC+01:00), its legal name is Irish Standard Time as opposed to British Summer Time.

While most countries that change clocks for daylight saving time observe standard time in winter and DST in summer, Morocco observes (since 2019) daylight saving time every month but Ramadan. During the holy month (the date of which is determined by the lunar calendar and thus moves annually with regard to the Gregorian calendar), the country's civil clocks observe Western European Time (UTC+00:00, which geographically overlaps most of the nation). At the close of this month, its clocks are turned forward to Western European Summer Time (UTC+01:00), where they remain until the return of the holy month the following year.

The time at which to change clocks differs across jurisdictions. Members of the European Union conduct a coordinated change, switching all zones at the same instant, 01:00 Coordinated Universal Time (UTC), which means the change happens at 02:00 Central European Time (CET) and 03:00 Eastern European Time (EET). As a result, the time differences across European time zones remain constant. Coordination of the clock change in North America differs in that each jurisdiction changes at 02:00 local time, which temporarily creates unusual differences in offsets. For example, Mountain Time is, for one hour in the autumn, zero hours ahead of Pacific Time instead of the usual one hour ahead, and, for one hour in the spring, it is two hours ahead of Pacific Time instead of one. Also, during the autumn shift from daylight saving to standard time, the hour between 01:00 and 01:59:59 occurs twice in any given time zone, whereas during the late-winter or spring shift from standard to daylight saving time, the hour between 02:00 and 02:59:59 disappears.
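
The brief hour in which Mountain and Pacific Time agree can be demonstrated with zoneinfo; Denver and Los Angeles stand in for the two zones, and 2022-11-06 was the date on which both fell back:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

denver = ZoneInfo("America/Denver")
la = ZoneInfo("America/Los_Angeles")

# 08:30 UTC on 2022-11-06: Denver has already fallen back to MST (UTC-7),
# but Los Angeles is still on PDT (also UTC-7), so the zones briefly agree.
t = datetime(2022, 11, 6, 8, 30, tzinfo=timezone.utc)
print(t.astimezone(denver).utcoffset() == t.astimezone(la).utcoffset())  # True

# An hour later Los Angeles has also changed (PST, UTC-8) and the usual
# one-hour gap is back.
t2 = datetime(2022, 11, 6, 9, 30, tzinfo=timezone.utc)
print(t2.astimezone(denver).utcoffset() - t2.astimezone(la).utcoffset())  # 1:00:00
```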

The dates on which clocks change vary with location and year; consequently, the time differences between regions also vary throughout the year. For example, Central European Time is usually six hours ahead of North American Eastern Time, except for a few weeks in March and October/November, while the United Kingdom and mainland Chile could be five hours apart during the northern summer, three hours during the southern summer, and four hours for a few weeks per year. Since 1996, European Summer Time has been observed from the last Sunday in March to the last Sunday in October; previously the rules were not uniform across the European Union. Starting in 2007, most of the United States and Canada observed DST from the second Sunday in March to the first Sunday in November, almost two-thirds of the year. Moreover, the beginning and ending dates are roughly reversed between the northern and southern hemispheres because spring and autumn are displaced six months. For example, mainland Chile observes DST from the second Saturday in October to the second Saturday in March, with transitions at 24:00 local time. In some countries time is governed by regional jurisdictions within the country such that some jurisdictions change and others do not; this is currently the case in Australia, Canada, Mexico, and the United States (formerly in Brazil, etc.).

From year to year, the dates on which clocks change may also move for political or social reasons. The Uniform Time Act of 1966 formalized the United States' period of daylight saving time observation as lasting six months (it was previously declared locally); this period was extended to seven months in 1986, and then to eight months in 2005. The 2005 extension was motivated in part by lobbyists from the candy industry, seeking to increase profits by including Halloween (October 31) within the daylight saving time period. In recent history, Australian state jurisdictions not only changed at different local times but sometimes on different dates. For example, in 2008 most states there that observed daylight saving time changed clocks forward on October 5, but Western Australia changed on October 26.

Permanent daylight saving time

A move to permanent daylight saving time (staying on summer hours all year with no time shifts) is sometimes advocated and is currently implemented in some jurisdictions such as Argentina, Belarus, Iceland, Kyrgyzstan, Morocco, Namibia, Saskatchewan, Singapore, Turkey, Turkmenistan, Uzbekistan and Yukon. Although Saskatchewan follows Central Standard Time, its capital city Regina experiences solar noon close to 13:00, in effect putting the city on permanent daylight time. Similarly, Yukon is classified as being in the Mountain Time Zone but in effect observes permanent Pacific Daylight Time, aligning with the Pacific time zone during its summer; local solar noon in the capital Whitehorse occurs nearer to 14:00, in effect putting Whitehorse on "double daylight time".

Advocates cite the same advantages as normal DST without the problems associated with the twice yearly time shifts. However, many remain unconvinced of the benefits, citing the same problems and the relatively late sunrises, particularly in winter, that year-round DST entails.

Russia switched to permanent DST from 2011 to 2014, but the move proved unpopular because of the late sunrises in winter, so in 2014 Russia switched back to permanent standard time. The United Kingdom and Ireland also experimented with year-round summer time between 1968 and 1971, and put clocks forward by an extra hour during World War II.

In the United States, the Florida, Washington, California, and Oregon legislatures have all passed bills to enact permanent DST, but the bills require Congressional approval in order to take effect. Maine, Massachusetts, New Hampshire, and Rhode Island have also introduced proposals or commissions to that effect. Although 26 states have considered making DST permanent, unless Congress changes federal law, states cannot implement permanent DST—states can only opt out of DST, not standard time.

In September 2018, the European Commission proposed to end seasonal clock changes as of 2019. Member states would have the option of observing either daylight saving time all year round or standard time all year round. In March 2019, the European Parliament approved the commission's proposal, while deferring implementation from 2019 until 2021. As of October 2020, the decision has not been confirmed by the Council of the European Union. The council has asked the commission to produce a detailed impact assessment, but the Commission considers that the onus is on the Member States to find a common position in Council. As a result, progress on the issue is effectively blocked.

Experts in circadian rhythms and sleep caution against permanent daylight saving time, recommending year-round standard time as the preferred option for public health and safety.

Experts, including various chronobiology societies, have published position papers against adopting DST permanently. For example, a position paper by the Society for Research on Biological Rhythms states:

Local and national governments around the world are currently considering the elimination of the annual switch to and from Daylight Saving Time (DST). As an international organization of scientists dedicated to studying circadian and other biological rhythms, the Society for Research on Biological Rhythms (SRBR) engaged experts in the field to write a Position Paper on the consequences of choosing to live on DST or Standard Time (ST). The authors take the position that, based on comparisons of large populations living in DST or ST or on western versus eastern edges of time zones, the advantages of permanent ST outweigh switching to DST annually or permanently. Four peer reviewers provided expert critiques of the initial submission, and the SRBR Executive Board approved the revised manuscript as a Position Paper to help educate the public in their evaluation of current legislative actions to end DST.

The World Federation of Societies for Chronobiology stated that "the scientific literature strongly argues against the switching between DST and Standard Time and even more so against adopting DST permanently." The American Academy of Sleep Medicine holds the position that "seasonal time changes should be abolished in favor of a fixed, national, year-round standard time." In the EU, the European Sleep Research Society has stated that "the scientific evidence presently available indicates installing permanent Central European Time (CET, standard time or 'wintertime') is the best option for public health."



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1334 2022-03-31 15:05:23

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,384

Re: Miscellany

1308) Capillarity or Capillary action

Summary

Capillarity is the rise or depression of a liquid in a small passage such as a tube of small cross-sectional area, like the spaces between the fibres of a towel or the openings in a porous material. Capillarity is not limited to the vertical direction. Water is drawn into the fibres of a towel, no matter how the towel is oriented.

Liquids that rise in small-bore tubes inserted into the liquid are said to wet the tube, whereas liquids that are depressed within thin tubes below the surface of the surrounding liquid do not wet the tube. Water is a liquid that wets glass capillary tubes; mercury is one that does not. When wetting does not occur, capillarity does not occur.

Capillarity is the result of surface, or interfacial, forces. The rise of water in a thin tube inserted in water is caused by forces of attraction between the molecules of water and the glass walls and among the molecules of water themselves. These attractive forces just balance the force of gravity of the column of water that has risen to a characteristic height. The narrower the bore of the capillary tube, the higher the water rises. Mercury, conversely, is depressed to a greater degree, the narrower the bore.

Details

Capillary action (sometimes called capillarity, capillary motion, capillary effect, or wicking) is the process of a liquid flowing in a narrow space without the assistance of, or even in opposition to, any external forces like gravity. The effect can be seen in the drawing up of liquids between the hairs of a paint-brush, in a thin tube, in porous materials such as paper and plaster, in some non-porous materials such as sand and liquefied carbon fiber, or in a biological cell. It occurs because of intermolecular forces between the liquid and surrounding solid surfaces. If the diameter of the tube is sufficiently small, then the combination of surface tension (which is caused by cohesion within the liquid) and adhesive forces between the liquid and container wall act to propel the liquid.

Etymology

Capillary comes from the Latin word capillaris, meaning "of or resembling hair." The meaning stems from the tiny, hairlike diameter of a capillary. While capillary is usually used as a noun, the word also is used as an adjective, as in "capillary action," in which a liquid is moved along — even upward, against gravity — as the liquid is attracted to the internal surface of the capillaries.

Transpiration

Mass flow of liquid water from the roots to the leaves is driven in part by capillary action, but primarily by water potential differences. If the water potential in the ambient air is lower than the water potential in the leaf airspace of the stomatal pore, water vapor will travel down the gradient and move from the leaf airspace to the atmosphere. This movement lowers the water potential in the leaf airspace and causes evaporation of liquid water from the mesophyll cell walls. This evaporation increases the tension on the water menisci in the cell walls and decreases their radius, which in turn increases the tension exerted on the water in the cells. Because of the cohesive properties of water, the tension travels through the leaf cells to the leaf and stem xylem, where a momentary negative pressure is created as water is pulled up the xylem from the roots. As evaporation occurs at the leaf surface, the properties of adhesion and cohesion work in tandem to pull water molecules from the roots, through xylem tissue, and out of the plant through stomata. In taller plants and trees, the force of gravity can only be overcome by the decrease in hydrostatic (water) pressure in the upper parts of the plant due to the diffusion of water out of stomata into the atmosphere. Water is absorbed at the roots by osmosis, and any dissolved mineral nutrients travel with it through the xylem.

History

The first recorded observation of capillary action was by Leonardo da Vinci. A former student of Galileo, Niccolò Aggiunti, was said to have investigated capillary action. In 1660, capillary action was still a novelty to the Irish chemist Robert Boyle, when he reported that "some inquisitive French Men" had observed that when a capillary tube was dipped into water, the water would ascend to "some height in the Pipe". Boyle then reported an experiment in which he dipped a capillary tube into red wine and then subjected the tube to a partial vacuum. He found that the vacuum had no observable influence on the height of the liquid in the capillary, so the behavior of liquids in capillary tubes was due to some phenomenon different from that which governed mercury barometers.

Others soon followed Boyle's lead. Some (e.g., Honoré Fabri, Jacob Bernoulli) thought that liquids rose in capillaries because air could not enter capillaries as easily as liquids, so the air pressure was lower inside capillaries. Others (e.g., Isaac Vossius, Giovanni Alfonso Borelli, Louis Carré, Francis Hauksbee, Josia Weitbrecht) thought that the particles of liquid were attracted to each other and to the walls of the capillary.

Although experimental studies continued during the 18th century, a successful quantitative treatment of capillary action was not attained until 1805 by two investigators: Thomas Young of the United Kingdom and Pierre-Simon Laplace of France. They derived the Young–Laplace equation of capillary action. By 1830, the German mathematician Carl Friedrich Gauss had determined the boundary conditions governing capillary action (i.e., the conditions at the liquid-solid interface).[20] In 1871, the British physicist William Thomson, 1st Baron Kelvin determined the effect of the meniscus on a liquid's vapor pressure—a relation known as the Kelvin equation. German physicist Franz Ernst Neumann (1798–1895) subsequently determined the interaction between two immiscible liquids.

Albert Einstein's first paper, which was submitted to Annalen der Physik in 1900, was on capillarity.

Phenomena and physics

Capillary penetration in porous media shares its dynamic mechanism with flow in hollow tubes, as both processes are resisted by viscous forces. Consequently, a common apparatus used to demonstrate the phenomenon is the capillary tube. When the lower end of a glass tube is placed in a liquid, such as water, a concave meniscus forms. Adhesion occurs between the fluid and the solid inner wall pulling the liquid column along until there is a sufficient mass of liquid for gravitational forces to overcome these intermolecular forces. The contact length (around the edge) between the top of the liquid column and the tube is proportional to the radius of the tube, while the weight of the liquid column is proportional to the square of the tube's radius. So, a narrow tube will draw a liquid column along further than a wider tube will, given that the inner water molecules cohere sufficiently to the outer ones.
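This scaling argument (supporting force proportional to the radius, weight proportional to its square) leads to the standard quantitative result, Jurin's law: h = 2γ·cos θ / (ρgr), so the equilibrium rise scales as 1/r. A short sketch, using illustrative values for clean water in a glass tube at about 20 °C (surface tension γ ≈ 0.0728 N/m, contact angle ≈ 0°):

```python
import math

def jurin_height(radius_m, gamma=0.0728, theta_deg=0.0, rho=1000.0, g=9.81):
    """Equilibrium capillary rise h = 2*gamma*cos(theta) / (rho*g*r), in metres."""
    return 2 * gamma * math.cos(math.radians(theta_deg)) / (rho * g * radius_m)

# Narrowing the bore tenfold raises the column tenfold (h scales as 1/r):
print(round(jurin_height(1e-3) * 1000, 1))   # rise in mm for a 1 mm bore
print(round(jurin_height(1e-4) * 1000, 1))   # rise in mm for a 0.1 mm bore
```

For mercury in glass the contact angle exceeds 90°, so cos θ is negative and the formula correctly predicts a depression rather than a rise.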

Examples

In the built environment, evaporation-limited capillary penetration is responsible for the phenomenon of rising damp in concrete and masonry, while in industry and diagnostic medicine this phenomenon is increasingly being harnessed in the field of paper-based microfluidics.

In physiology, capillary action is essential for the drainage of continuously produced tear fluid from the eye. Two canaliculi of tiny diameter, also called the lacrimal ducts, are present in the inner corner of the eyelid; their openings can be seen with the naked eye within the lacrimal sacs when the eyelids are everted.

Wicking is the absorption of a liquid by a material in the manner of a candle wick. Paper towels absorb liquid through capillary action, allowing a fluid to be transferred from a surface to the towel. The small pores of a sponge act as small capillaries, causing it to absorb a large amount of fluid. Some textile fabrics are said to use capillary action to "wick" sweat away from the skin. These are often referred to as wicking fabrics, after the capillary properties of candle and lamp wicks.

Capillary action is observed in thin layer chromatography, in which a solvent moves vertically up a plate via capillary action. In this case the pores are gaps between very small particles.

Capillary action draws ink to the tips of fountain pen nibs from a reservoir or cartridge inside the pen.

With some pairs of materials, such as mercury and glass, the intermolecular forces within the liquid exceed those between the solid and the liquid, so a convex meniscus forms and capillary action works in reverse.

In hydrology, capillary action describes the attraction of water molecules to soil particles. Capillary action is responsible for moving groundwater from wet areas of the soil to dry areas. Differences in soil potential drive capillary action in soil.

A practical application of capillary action is the capillary action siphon. Instead of utilizing a hollow tube (as in most siphons), this device consists of a length of cord made of a fibrous material (cotton cord or string works well). After saturating the cord with water, one (weighted) end is placed in a reservoir full of water, and the other end placed in a receiving vessel. The reservoir must be higher than the receiving vessel. Due to capillary action and gravity, water will slowly transfer from the reservoir to the receiving vessel. This simple device can be used to water houseplants when nobody is home. This property is also made use of in the lubrication of steam locomotives: wicks of worsted wool are used to draw oil from reservoirs into delivery pipes leading to the bearings.

In plants and animals

Capillary action is seen in many plants. Water is brought high up in trees by several mechanisms working together: branching; evaporation at the leaves, which creates depressurization; probably osmotic pressure added at the roots; and possibly pressure added at other locations inside the plant, especially when gathering humidity with air roots.

Capillary action for uptake of water has been described in some small animals, such as Ligia exotica and Moloch horridus.




#1335 2022-04-01 13:46:15

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,384

Re: Miscellany

1309) Litmus

Summary

Litmus is a mixture of coloured organic compounds obtained from several species of lichens that grow in the Netherlands, particularly Lecanora tartarea and Roccella tinctorum. Litmus turns red in acidic solutions and blue in alkaline solutions and is the oldest and most commonly used indicator of whether a substance is an acid or a base.

Treatment of the lichens with ammonia, potash, and lime in the presence of air produces the various coloured components of litmus. By 1840 litmus had been partially separated into several substances named azolitmin, erythrolitmin, spaniolitmin, and erythrolein. These are apparently mixtures of closely related compounds that were identified in 1961 as derivatives of the heterocyclic compound phenoxazine.

Archil (orchil, or orseille) is a mixture of dyes similar to litmus that are obtained from the same lichens by a different method. The manufacture of archil, which produces a violet shade on wool or silk, was introduced into Europe from the Orient about 1300.

Details

Litmus is a water-soluble mixture of different dyes extracted from lichens. It is often absorbed onto filter paper to produce one of the oldest forms of pH indicator, used to test materials for acidity.

History

The word "litmus" comes from the old Norse word for "dye" or "color". Litmus was used for the first time in about 1300 by Spanish physician Arnaldus de Villa Nova. From the 16th century onwards, the blue dye was extracted from some lichens, especially in the Netherlands.

Natural sources

Litmus can be found in different species of lichens. The dyes are extracted from such species as Roccella tinctoria (South American), Roccella fuciformis (Angola and Madagascar), Roccella pygmaea (Algeria), Roccella phycopsis, Lecanora tartarea (Norway, Sweden), Variolaria dealbata, Ochrolechia parella, Parmotrema tinctorum, and Parmelia. Currently, the main sources are Roccella montagnei (Mozambique) and Dendrographa leucophoea (California).

Uses

The main use of litmus is to test whether a solution is acidic or basic, as blue litmus paper turns red under acidic conditions, and red litmus paper turns blue under basic or alkaline conditions, with the color change occurring over the pH range 4.5–8.3 at 25 °C (77 °F). Neutral litmus paper is purple. Wet litmus paper can also be used to test for water-soluble gases that affect acidity or basicity; the gas dissolves in the water and the resulting solution colors the litmus paper. For instance, ammonia gas, which is alkaline, turns red litmus paper blue. While all litmus paper acts as pH paper, the opposite is not true.
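The transition range quoted above can be summarised as a toy classifier. This is a deliberate simplification: the real colour change is gradual across the range, not stepwise, and the thresholds below are simply the 4.5 and 8.3 values given in the text.

```python
def litmus_color(pH, low=4.5, high=8.3):
    """Approximate litmus colour at 25 C, treating the transition range as sharp cutoffs."""
    if pH < low:
        return "red"
    if pH > high:
        return "blue"
    return "purple"

print(litmus_color(2.0))   # red: strongly acidic
print(litmus_color(7.0))   # purple: within the transition range, near neutral
print(litmus_color(11.0))  # blue: strongly alkaline
```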

Litmus can also be prepared as an aqueous solution that functions similarly. Under acidic conditions, the solution is red, and under alkaline conditions, the solution is blue.

Chemical reactions other than acid–base can also cause a color change to litmus paper. For instance, chlorine gas turns blue litmus paper white; the litmus dye is bleached because hypochlorite ions are present. This reaction is irreversible, so the litmus is not acting as an indicator in this situation.

Chemistry

The litmus mixture has the CAS number 1393-92-6 and contains 10 to around 15 different dyes. All of the chemical components of litmus are likely to be the same as those of the related mixture known as orcein, but in different proportions. In contrast with orcein, the principal constituent of litmus has an average molecular mass of 3300. Litmus owes its acid-base indicator properties to a 7-hydroxyphenoxazone chromophore. Some fractions of litmus were given specific names, including erythrolitmin (or erythrolein), azolitmin, spaniolitmin, leucoorcein, and leucazolitmin. Azolitmin shows nearly the same effect as litmus.

Mechanism

Red litmus contains a weak diprotic acid. When it is exposed to a basic compound, the hydrogen ions react with the added base. The conjugate base formed from the litmus acid has a blue color, so the wet red litmus paper turns blue in alkaline solution.




#1336 2022-04-02 14:06:22

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,384

Re: Miscellany

1310) Tourism

Summary

Tourism is travel for pleasure or business; also the theory and practice of touring, the business of attracting, accommodating, and entertaining tourists, and the business of operating tours. The World Tourism Organization defines tourism more generally, in terms which go "beyond the common perception of tourism as being limited to holiday activity only", as people "traveling to and staying in places outside their usual environment for not less than 24 hours and not more than one consecutive year for leisure, business and other purposes". Tourism can be domestic (within the traveller's own country) or international, and international tourism has both incoming and outgoing implications for a country's balance of payments.

Tourism numbers declined as a result of a strong economic slowdown (the late-2000s recession) between the second half of 2008 and the end of 2009, and in consequence of the outbreak of the 2009 H1N1 influenza virus, but slowly recovered until the COVID-19 pandemic put an abrupt end to the growth. The United Nations World Tourism Organization estimated that global international tourist arrivals might decrease by 58% to 78% in 2020, leading to a potential loss of US$0.9–1.2 trillion in international tourism receipts.

Globally, international tourism receipts (the travel item in the balance of payments) grew to US$1.03 trillion (€740 billion) in 2011, corresponding to an increase in real terms of 3.8% from 2010. International tourist arrivals surpassed the milestone of 1 billion tourists globally for the first time in 2012; emerging source markets such as China, Russia, and Brazil had significantly increased their spending over the previous decade.

Global tourism accounts for c. 8% of global greenhouse-gas emissions. These emissions, along with tourism's other significant environmental and social impacts, are not always beneficial to local communities and their economies. For this reason, many tourist development organizations have begun to focus on sustainable tourism in order to mitigate the negative effects caused by the growing impact of tourism. The United Nations World Tourism Organization emphasized these practices by promoting tourism as part of the Sustainable Development Goals, through programs like the International Year for Sustainable Tourism for Development in 2017, and programs like Tourism for SDGs focusing on how SDG 8, SDG 12 and SDG 14 implicate tourism in creating a sustainable economy.

Details

Tourism is the act and process of spending time away from home in pursuit of recreation, relaxation, and pleasure, while making use of the commercial provision of services. As such, tourism is a product of modern social arrangements, beginning in western Europe in the 17th century, although it has antecedents in Classical antiquity.

Tourism is distinguished from exploration in that tourists follow a “beaten path,” benefit from established systems of provision, and, as befits pleasure-seekers, are generally insulated from difficulty, danger, and embarrassment. Tourism, however, overlaps with other activities, interests, and processes, including, for example, pilgrimage. This gives rise to shared categories, such as “business tourism,” “sports tourism,” and “medical tourism” (international travel undertaken for the purpose of receiving medical care).

The origins of tourism

By the early 21st century, international tourism had become one of the world’s most important economic activities, and its impact was becoming increasingly apparent from the Arctic to Antarctica. The history of tourism is therefore of great interest and importance. That history begins long before the coinage of the word tourist at the end of the 18th century. In the Western tradition, organized travel with supporting infrastructure, sightseeing, and an emphasis on essential destinations and experiences can be found in ancient Greece and Rome, which can lay claim to the origins of both “heritage tourism” (aimed at the celebration and appreciation of historic sites of recognized cultural importance) and beach resorts. The Seven Wonders of the World became tourist sites for Greeks and Romans.

Pilgrimage offers similar antecedents, bringing Eastern civilizations into play. Its religious goals coexist with defined routes, commercial hospitality, and an admixture of curiosity, adventure, and enjoyment among the motives of the participants. Pilgrimage to the earliest Buddhist sites began more than 2,000 years ago, although it is hard to define a transition from the makeshift privations of small groups of monks to recognizably tourist practices. Pilgrimage to Mecca is of similar antiquity. The tourist status of the hajj is problematic given the number of casualties that—even in the 21st century—continued to be suffered on the journey through the desert. The thermal spa as a tourist destination—regardless of the pilgrimage associations with the site as a holy well or sacred spring—is not necessarily a European invention, despite deriving its English-language label from Spa, an early resort in what is now Belgium. The oldest Japanese onsen (hot springs) were catering to bathers from at least the 6th century. Tourism has been a global phenomenon from its origins.

Modern tourism is an increasingly intensive, commercially organized, business-oriented set of activities whose roots can be found in the industrial and postindustrial West. The aristocratic grand tour of cultural sites in France, Germany, and especially Italy—including those associated with Classical Roman tourism—had its roots in the 16th century. It grew rapidly, however, expanding its geographical range to embrace Alpine scenery during the second half of the 18th century, in the intervals between European wars. (If truth is historically the first casualty of war, tourism is the second, although it may subsequently incorporate pilgrimages to graves and battlefield sites and even, by the late 20th century, to concentration camps.) As part of the grand tour’s expansion, its exclusivity was undermined as the expanding commercial, professional, and industrial middle ranks joined the landowning and political classes in aspiring to gain access to this rite of passage for their sons. By the early 19th century, European journeys for health, leisure, and culture became common practice among the middle classes, and paths to the acquisition of cultural capital (that array of knowledge, experience, and polish that was necessary to mix in polite society) were smoothed by guidebooks, primers, the development of art and souvenir markets, and carefully calibrated transport and accommodation systems.

Technology and the democratization of international tourism

Transport innovation was an essential enabler of tourism’s spread and democratization and its ultimate globalization. Beginning in the mid-19th century, the steamship and the railway brought greater comfort and speed and cheaper travel, in part because fewer overnight and intermediate stops were needed. Above all else, these innovations allowed for reliable time-tabling, essential for those who were tied to the discipline of the calendar if not the clock. The gaps in accessibility to these transport systems were steadily closing in the later 19th century, while the empire of steam was becoming global. Railways promoted domestic as well as international tourism, including short visits to the coast, city, and countryside which might last less than a day but fell clearly into the “tourism” category. Rail travel also made grand tour destinations more widely accessible, reinforcing existing tourism flows while contributing to tensions and clashes between classes and cultures among the tourists. By the late 19th century, steam navigation and railways were opening tourist destinations from Lapland to New Zealand, and the latter opened the first dedicated national tourist office in 1901.

After World War II, governments became interested in tourism as an invisible import and as a tool of diplomacy, but prior to this time international travel agencies took the lead in easing the complexities of tourist journeys. The most famous of these agencies was Britain’s Thomas Cook and Son organization, whose operations spread from Europe and the Middle East across the globe in the late 19th century. The role played by other firms (including the British tour organizers Frame’s and Henry Gaze and Sons) has been less visible to 21st-century observers, not least because these agencies did not preserve their records, but they were equally important. Shipping lines also promoted international tourism from the late 19th century onward. From the Norwegian fjords to the Caribbean, the pleasure cruise was already becoming a distinctive tourist experience before World War I, and transatlantic companies competed for middle-class tourism during the 1920s and ’30s. Between the World Wars, affluent Americans journeyed by air and sea to a variety of destinations in the Caribbean and Latin America.

Tourism became even bigger business internationally in the latter half of the 20th century as air travel was progressively deregulated and decoupled from “flag carriers” (national airlines). The airborne package tour to sunny coastal destinations became the basis of an enormous annual migration from northern Europe to the Mediterranean before extending to a growing variety of long-haul destinations, including Asian markets in the Pacific, and eventually bringing postcommunist Russians and eastern Europeans to the Mediterranean. Similar traffic flows expanded from the United States to Mexico and the Caribbean. In each case these developments built on older rail-, road-, and sea-travel patterns. The earliest package tours to the Mediterranean were by motor coach (bus) during the 1930s and postwar years. It was not until the late 1970s that Mediterranean sun and sea vacations became popular among working-class families in northern Europe; the label “mass tourism,” which is often applied to this phenomenon, is misleading. Such holidays were experienced in a variety of ways because tourists had choices, and the destination resorts varied widely in history, culture, architecture, and visitor mix. From the 1990s the growth of flexible international travel through the rise of budget airlines, notably easyJet and Ryanair in Europe, opened a new mix of destinations. Some of these were former Soviet-bloc locales such as Prague and Riga, which appealed to weekend and short-break European tourists who constructed their own itineraries in negotiation with local service providers, mediated through the airlines’ special deals. In international tourism, globalization has not been a one-way process; it has entailed negotiation between hosts and guests.

Day-trippers and domestic tourism

While domestic tourism could be seen as less glamorous and dramatic than international traffic flows, it has been more important to more people over a longer period. From the 1920s the rise of Florida as a destination for American tourists has been characterized by "snowbirds" from the northern and Midwestern states traveling a greater distance across the vast expanse of the United States than many European tourists travel internationally. Key phases in the pioneering development of tourism as a commercial phenomenon in Britain were driven by domestic demand and local journeys. European wars in the late 18th and early 19th centuries prompted the "discovery of Britain" and the rise of the Lake District and Scottish Highlands as destinations for both the upper classes and the aspiring classes. The railways helped to open the seaside to working-class day-trippers and holidaymakers, especially in the last quarter of the 19th century. By 1914 Blackpool in Lancashire, the world's first working-class seaside resort, had around four million visitors per summer. Coney Island in Brooklyn, New York, had more visitors by this time, but most were day-trippers who came from and returned to locations elsewhere in the New York City area by train the same day. Domestic tourism is less visible in statistical terms and tends to be serviced by regional, local, and small family-run enterprises. The World Tourism Organization, which tries to count tourists globally, is more concerned with the international scene, but across the globe, and perhaps especially in Asia, domestic tourism remains much more important in numerical terms than the international version.

A case study: the beach holiday

Much of the post-World War II expansion of international tourism was based on beach holidays, which have a long history. In their modern, commercial form, beach holidays are an English invention of the 18th century, based on the medical adaptation of popular sea-bathing traditions. They built upon the positive artistic and cultural associations of coastal scenery for societies in the West, appealing to the informality, habits, and customs of maritime society. Later beach holiday destinations incorporated the sociability and entertainment regimes of established spa resorts, sometimes including gambling casinos. Beach holidays built on widespread older uses of the beach for health, enjoyment, and religious rites, but it was the British who formalized and commercialized them. From the late 18th and early 19th centuries, beach resorts spread successively across Europe and the Mediterranean and into the United States, then took root in the European-settled colonies and republics of Oceania, South Africa, and Latin America and eventually reached Asia.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1337 2022-04-03 14:31:40

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,384

Re: Miscellany

1311) Washer (hardware)

A washer is a thin plate (typically disk-shaped, but sometimes square) with a hole (typically in the middle) that is normally used to distribute the load of a threaded fastener, such as a bolt or nut. Other uses are as a spacer, spring (Belleville washer, wave washer), wear pad, preload indicating device, locking device, and to reduce vibration (rubber washer).

Washers are usually metal or plastic. High-quality bolted joints require hardened steel washers to prevent the loss of pre-load due to brinelling after the torque is applied. Washers are also important for preventing galvanic corrosion, particularly by insulating steel screws from aluminium surfaces. They may also be used in rotating applications, as a bearing. A thrust washer is used when a rolling element bearing is not needed, either from a cost-performance perspective or due to space constraints. Coatings can be used to reduce wear and friction, either by hardening the surface or by providing a solid lubricant (i.e. a self-lubricating surface).

The origin of the word is unknown; its first recorded use was in 1346, but its definition was not recorded until 1611.

Rubber or fiber gaskets used in taps (or faucets, or valves) as seal against water leaks are sometimes referred to colloquially as washers; but, while they may look similar, washers and gaskets are usually designed for different functions and made differently.

A washer is a machine component that is used in conjunction with a screw fastener, such as a bolt and nut, and that usually serves either to keep the screw from loosening or to distribute the load from the nut or bolt head over a larger area. For load distribution, thin flat rings of soft steel are usual.

To prevent loosening, several other types of washers are used. All act as springs to compensate for any increase in the distance between the head of a bolt and the nut, or between the head of a screw and the object being clamped. In addition to the spring action, some of these washers have teeth that bite into the workpiece and the screwhead and provide a locking action. They are called tooth or shakeproof lock washers and have teeth that are bent and twisted out of the plane of the washer face.

The conical washer has spring action, but the only locking action is provided by friction. The helical spring washer is one of the most commonly used lock washers.




Offline

#1338 2022-04-04 14:04:30

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,384

Re: Miscellany

1312) Pharynx

Summary

The pharynx (plural: pharynges) is the part of the throat behind the mouth and nasal cavity, and above the esophagus and trachea (the tubes going down to the stomach and the lungs). It is found in vertebrates and invertebrates, though its structure varies across species. The pharynx carries food and air to the esophagus and larynx respectively. The flap of cartilage called the epiglottis stops food from entering the larynx.

In humans, the pharynx is part of the digestive system and the conducting zone of the respiratory system. (The conducting zone—which also includes the nostrils of the nose, the larynx, trachea, bronchi, and bronchioles—filters, warms and moistens air and conducts it into the lungs). The human pharynx is conventionally divided into three sections: the nasopharynx, oropharynx, and laryngopharynx. It is also important in vocalization.

In humans, two sets of pharyngeal muscles form the pharynx and determine the shape of its lumen. They are arranged as an inner layer of longitudinal muscles and an outer circular layer.

Details

Pharynx (Greek: “throat”) is a cone-shaped passageway leading from the oral and nasal cavities in the head to the esophagus and larynx. The pharynx chamber serves both respiratory and digestive functions. Thick fibres of muscle and connective tissue attach the pharynx to the base of the skull and surrounding structures. Both circular and longitudinal muscles occur in the walls of the pharynx; the circular muscles form constrictions that help push food to the esophagus and prevent air from being swallowed, while the longitudinal fibres lift the walls of the pharynx during swallowing.

The pharynx consists of three main divisions. The anterior portion is the nasal pharynx, the back section of the nasal cavity. The nasal pharynx connects to the second region, the oral pharynx, by means of a passage called an isthmus. The oral pharynx begins at the back of the mouth cavity and continues down the throat to the epiglottis, a flap of tissue that covers the air passage to the lungs and that channels food to the esophagus. Triangular-shaped recesses in the walls of this region house the palatine tonsils, two masses of lymphatic tissue prone to infection. The isthmus connecting the oral and nasal regions is extremely beneficial in humans. It allows them to breathe through either the nose or the mouth and, when medically necessary, allows food to be passed to the esophagus by nasal tubes. The third region is the laryngeal pharynx, which begins at the epiglottis and leads down to the esophagus. Its function is to regulate the passage of air to the lungs and food to the esophagus.

Two small tubes (eustachian tubes) connect the middle ears to the pharynx and allow air pressure on the eardrum to be equalized. Head colds sometimes inflame the tubes, causing earaches and hearing difficulties. Other medical afflictions associated with the pharynx include tonsillitis, cancer, and various types of throat paralyses caused by polio, diphtheria, rabies, or nervous-system injuries.

The term pharynx may also be used to describe a differentiated portion of the invertebrate alimentary canal. In some invertebrate species, the structure is thick and muscular. It is occasionally eversible (rotated or turned outward) and may have multiple functions—for example, being both suctorial and peristaltic in nature.




Offline

#1339 2022-04-05 00:59:43

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,384

Re: Miscellany

1313) Small intestine

Summary

The small intestine is a long, narrow, folded or coiled tube extending from the stomach to the large intestine; it is the region where most digestion and absorption of food takes place. It is about 6.7 to 7.6 metres (22 to 25 feet) long, highly convoluted, and contained in the central and lower abdominal cavity. A thin membranous material, the mesentery, supports and somewhat suspends the intestines. The mesentery contains areas of fat that help retain heat in the organs, as well as an extensive web of blood vessels. Nerves lead to the small intestine from two divisions of the autonomic nervous system: parasympathetic nerves initiate muscular contractions that move food along the tract (peristalsis), and sympathetic nerves suppress intestinal movements.

Three successive regions of the small intestine are customarily distinguished: duodenum, jejunum, and ileum. These regions form one continuous tube, and, although each area exhibits certain characteristic differences, there are no distinctly marked separations between them. The first area, the duodenum, is adjacent to the stomach; it is only 23 to 28 cm (9 to 11 inches) long, has the widest diameter, and is not supported by the mesentery. Ducts from the liver, gallbladder, and pancreas enter the duodenum to provide juices that neutralize acids coming from the stomach and help digest proteins, carbohydrates, and fats. The second region, the jejunum, in the central section of the abdomen, comprises about two-fifths of the remaining tract. The colour of the jejunum is deep red because of its extensive blood supply; its peristaltic movements are rapid and vigorous, and there is little fat in the mesentery that supports this region. The ileum is located in the lower abdomen. Its walls are narrower and thinner than in the previous section, blood supply is more limited, peristaltic movements are slower, and the mesentery has more fatty areas.

The mucous membrane lining the intestinal wall of the small intestine is thrown into transverse folds called plicae circulares, and in higher vertebrates minute fingerlike projections known as villi project into the cavity. These structures greatly increase the area of the secreting and absorbing surface.

The walls of the small intestine house numerous microscopic glands. Secretions from Brunner glands, in the submucosa of the duodenum, function principally to protect the intestinal walls from gastric juices. Lieberkühn glands, occupying the mucous membrane, secrete digestive enzymes, provide outlet ports for Brunner glands, and produce cells that replace surface-membrane cells shed from the tips of villi.

Peristaltic waves move materials undergoing digestion through the small intestine, while churning movements called rhythmic segmentation mechanically break up these materials, mix them thoroughly with digestive enzymes from the pancreas, liver, and intestinal wall, and bring them in contact with the absorbing surface.

Passage of food through the small intestine normally takes three to six hours. Such afflictions as inflammation (enteritis), deformity (diverticulosis), and functional obstruction may impede passage.

Details

The small intestine or small bowel is an organ in the gastrointestinal tract where most of the absorption of nutrients from food takes place. It lies between the stomach and large intestine, and receives bile and pancreatic juice through the pancreatic duct to aid in digestion. The small intestine is about 18 feet (6.5 meters) long and folds many times to fit in the abdomen. Although it is longer than the large intestine, it is called the small intestine because it is narrower in diameter.

The small intestine has three distinct regions – the duodenum, jejunum, and ileum. The duodenum, the shortest, is where preparation for absorption through small finger-like protrusions called villi begins. The jejunum is specialized for the absorption, through its lining of enterocytes, of small nutrient particles which have been previously digested by enzymes in the duodenum. The main function of the ileum is to absorb vitamin B12, bile salts, and whatever products of digestion were not absorbed by the jejunum.

Structure:

Size

The length of the small intestine can vary greatly, from as short as 3.00 m (9.84 ft) to as long as 10.49 m (34.4 ft), depending in part on the measuring technique used. The typical length in a living person is 3–5 m. The length depends both on how tall the person is and how the length is measured. Taller people generally have a longer small intestine, and measurements are generally longer after death and when the bowel is empty.

It is approximately 1.5 cm in diameter in newborns after 35 weeks of gestational age, and 2.5–3 cm (1 inch) in diameter in adults. On abdominal X-rays, the small intestine is considered to be abnormally dilated when the diameter exceeds 3 cm. On CT scans, a diameter of over 2.5 cm is considered abnormally dilated. The surface area of the human small intestinal mucosa, due to enlargement caused by folds, villi and microvilli, averages 30 square meters.
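The way folds, villi, and microvilli multiply a modest tube into tens of square metres of mucosa can be illustrated with a rough calculation. The tube dimensions and the amplification factors below are illustrative assumptions for the sketch, not measured values:

```python
import math

# Illustrative assumptions: a smooth tube ~5 m long, ~2.5 cm in diameter
length_m = 5.0
diameter_m = 0.025

# Lateral surface area of a plain cylinder with those dimensions
base_area = math.pi * diameter_m * length_m  # roughly 0.39 m^2

# Hypothetical amplification factors, one per structural level
folds = 1.6        # plicae circulares
villi = 6.5        # finger-like projections of the mucosa
microvilli = 13.0  # projections on individual epithelial cells

# The factors compound multiplicatively
total_area = base_area * folds * villi * microvilli
print(f"smooth tube: {base_area:.2f} m^2; "
      f"with folds, villi, microvilli: {total_area:.1f} m^2")
```

Even rough factors like these show how the absorbing surface reaches tens of square metres from a tube whose plain inner surface would be well under one square metre.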

Parts

The small intestine is divided into three structural parts.

* The duodenum is a short structure ranging from 20 cm (7.9 inches) to 25 cm (9.8 inches) in length, and shaped like a "C". It surrounds the head of the pancreas. It receives gastric chyme from the stomach, together with digestive juices from the pancreas (digestive enzymes) and the liver (bile). The digestive enzymes break down proteins and bile emulsifies fats into micelles. The duodenum contains Brunner's glands, which produce a mucus-rich alkaline secretion containing bicarbonate. These secretions, in combination with bicarbonate from the pancreas, neutralize the stomach acids contained in gastric chyme.

* The jejunum is the midsection of the small intestine, connecting the duodenum to the ileum. It is about 2.5 m long, and contains the circular folds, and intestinal villi that increase its surface area. Products of digestion (sugars, amino acids, and fatty acids) are absorbed into the bloodstream here. The suspensory muscle of duodenum marks the division between the duodenum and the jejunum.

* The ileum: The final section of the small intestine. It is about 3 m long, and contains villi similar to the jejunum. It absorbs mainly vitamin B12 and bile acids, as well as any other remaining nutrients. The ileum joins to the cecum of the large intestine at the ileocecal junction.

The jejunum and ileum are suspended in the abdominal cavity by mesentery. The mesentery is part of the peritoneum. Arteries, veins, lymph vessels and nerves travel within the mesentery.

Blood supply

The small intestine receives a blood supply from the celiac trunk and the superior mesenteric artery. These are both branches of the aorta. The duodenum receives blood from the coeliac trunk via the superior pancreaticoduodenal artery and from the superior mesenteric artery via the inferior pancreaticoduodenal artery. These two arteries both have anterior and posterior branches that meet in the midline and anastomose. The jejunum and ileum receive blood from the superior mesenteric artery. Branches of the superior mesenteric artery form a series of arches within the mesentery known as arterial arcades, which may be several layers deep. Straight blood vessels known as vasa recta travel from the arcades closest to the ileum and jejunum to the organs themselves.

Gene and protein expression

About 20,000 protein coding genes are expressed in human cells and 70% of these genes are expressed in the normal duodenum. Some 300 of these genes are more specifically expressed in the duodenum with very few genes expressed only in the small intestine. The corresponding specific proteins are expressed in glandular cells of the mucosa, such as fatty acid binding protein FABP6. Most of the more specifically expressed genes in the small intestine are also expressed in the duodenum, for example FABP2 and the DEFA6 protein expressed in secretory granules of Paneth cells.

Development

The small intestine develops from the midgut of the primitive gut tube. By the fifth week of embryological life, the ileum begins to grow longer at a very fast rate, forming a U-shaped fold called the primary intestinal loop. The loop grows so fast in length that it outgrows the abdomen and protrudes through the umbilicus. By week 10, the loop retracts back into the abdomen. Between weeks six and ten the small intestine rotates anticlockwise, as viewed from the front of the embryo. It rotates a further 180 degrees after it has moved back into the abdomen. This process creates the twisted shape of the large intestine.

* First stage of the development of the intestinal canal and the peritoneum, seen from the side (diagrammatic). From colon 1 the ascending and transverse colon will be formed, and from colon 2 the descending and sigmoid colons and the rectum.

* Second stage of development of the intestinal canal and peritoneum, seen from in front (diagrammatic). The liver has been removed and the two layers of the ventral mesogastrium (lesser omentum) have been cut.

* Third stage of the development of the intestinal canal and peritoneum.

Function

Food from the stomach is allowed into the duodenum through the pylorus by a muscle called the pyloric sphincter.

Digestion

The small intestine is where most chemical digestion takes place. Many of the digestive enzymes that act in the small intestine are secreted by the pancreas and liver and enter the small intestine via the pancreatic duct. Pancreatic enzymes and bile from the gallbladder enter the small intestine in response to the hormone cholecystokinin, which is produced in response to the presence of nutrients. Secretin, another hormone produced in the small intestine, causes additional effects on the pancreas, where it promotes the release of bicarbonate into the duodenum in order to neutralize the potentially harmful acid coming from the stomach.

The three major classes of nutrients that undergo digestion are proteins, lipids (fats) and carbohydrates:

* Proteins are degraded into small peptides and amino acids before absorption. Chemical breakdown begins in the stomach and continues in the small intestine. Proteolytic enzymes, including trypsin and chymotrypsin, are secreted by the pancreas and cleave proteins into smaller peptides. Carboxypeptidase, which is a pancreatic brush border enzyme, splits one amino acid at a time. Aminopeptidase and dipeptidase free the end amino acid products.
* Lipids (fats) are degraded into fatty acids and glycerol. Pancreatic lipase breaks down triglycerides into free fatty acids and monoglycerides. Pancreatic lipase works with the help of the salts from the bile secreted by the liver and stored in the gall bladder. Bile salts attach to triglycerides to help emulsify them, which aids access by pancreatic lipase. This occurs because the lipase is water-soluble but the fatty triglycerides are hydrophobic and tend to orient towards each other and away from the watery intestinal surroundings. The bile salts emulsify the triglycerides in the watery surroundings until the lipase can break them into the smaller components that are able to enter the villi for absorption.

Some carbohydrates are degraded into simple sugars, or monosaccharides (e.g., glucose). Pancreatic amylase breaks down some carbohydrates (notably starch) into oligosaccharides. Other carbohydrates pass undigested into the large intestine for further handling by intestinal bacteria. Brush border enzymes take over from there. The most important brush border enzymes are dextrinase and glucoamylase, which further break down oligosaccharides. Other brush border enzymes are maltase, sucrase and lactase. Lactase is absent in some adult humans and, for them, lactose (a disaccharide), as well as most polysaccharides, is not digested in the small intestine. Some carbohydrates, such as cellulose, are not digested at all, despite being made of multiple glucose units. This is because cellulose is made of beta-glucose, making the inter-monosaccharide bonds different from those present in starch, which consists of alpha-glucose. Humans lack the enzyme for splitting beta-glucose bonds, an ability reserved for herbivores and for bacteria in the large intestine.

Absorption

Digested food is now able to pass into the blood vessels in the wall of the intestine through either diffusion or active transport. The small intestine is the site where most of the nutrients from ingested food are absorbed. The inner wall, or mucosa, of the small intestine, is lined with simple columnar epithelial tissue. Structurally, the mucosa is covered in wrinkles or flaps called circular folds, which are considered permanent features in the mucosa. They are distinct from rugae which are considered non-permanent or temporary allowing for distention and contraction. From the circular folds project microscopic finger-like pieces of tissue called villi (Latin for "shaggy hair"). The individual epithelial cells also have finger-like projections known as microvilli. The functions of the circular folds, the villi, and the microvilli are to increase the amount of surface area available for the absorption of nutrients, and to limit the loss of said nutrients to intestinal fauna.

Each villus has a network of capillaries and fine lymphatic vessels called lacteals close to its surface. The epithelial cells of the villi transport nutrients from the lumen of the intestine into these capillaries (amino acids and carbohydrates) and lacteals (lipids). The absorbed substances are transported via the blood vessels to different organs of the body where they are used to build complex substances such as the proteins required by our body. The material that remains undigested and unabsorbed passes into the large intestine.

Absorption of the majority of nutrients takes place in the jejunum, with the following notable exceptions:

* Iron is absorbed in the duodenum.
* Folate (Vitamin B9) is absorbed in the duodenum and jejunum.
* Vitamin B12 and bile salts are absorbed in the terminal ileum.
* Water is absorbed by osmosis and lipids by passive diffusion throughout the small intestine.
* Sodium bicarbonate is absorbed by active transport and by glucose and amino acid co-transport.
* Fructose is absorbed by facilitated diffusion.

Immunological

The small intestine supports the body's immune system. The presence of gut flora appears to contribute positively to the host's immune system. Peyer's patches, located within the ileum of the small intestine, are an important part of the digestive tract's local immune system. They are part of the lymphatic system, and provide a site for antigens from potentially harmful bacteria or other microorganisms in the digestive tract to be sampled, and subsequently presented to the immune system.




Offline

#1340 2022-04-06 01:08:23

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,384

Re: Miscellany

1314) Large intestine

Summary

The large intestine, also known as the large bowel, is the last part of the gastrointestinal tract and of the digestive system in vertebrates. Water is absorbed here and the remaining waste material is stored in the rectum as feces before being removed by defecation. The colon is the longest portion of the large intestine, and the terms are often used interchangeably, but most sources define the large intestine as the combination of the cecum, colon, rectum, and anal canal. Some other sources exclude the anal canal.

In humans, the large intestine begins in the right iliac region of the pelvis, just at or below the waist, where it is joined to the end of the small intestine at the cecum, via the ileocecal valve. It then continues as the colon ascending the abdomen, across the width of the abdominal cavity as the transverse colon, and then descending to the rectum and its endpoint at the anal canal. Overall, in humans, the large intestine is about 1.5 metres (5 ft) long, which is about one-fifth of the whole length of the human gastrointestinal tract.
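The stated proportion is easy to sanity-check with a line of arithmetic. The 7.5 m total below is an assumed round figure for the whole gastrointestinal tract, chosen only to illustrate the one-fifth claim:

```python
# Approximate lengths in metres; the total is an assumed round figure
large_intestine = 1.5
gi_tract_total = 7.5

fraction = large_intestine / gi_tract_total
print(f"large intestine is about {fraction:.0%} of the GI tract")  # prints "large intestine is about 20% of the GI tract"
```

With a 1.5 m large intestine, any total tract length near 7.5 m gives the quoted one-fifth share.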

Details

The large intestine is the posterior section of the intestine, consisting typically of four regions: the cecum, colon, rectum, and anus. The term colon is sometimes used to refer to the entire large intestine.

The large intestine is wider and shorter than the small intestine (approximately 1.5 metres, or 5 feet, in length as compared with 6.7 to 7.6 metres, or 22 to 25 feet, in length for the small intestine) and has a smooth inner wall. In the proximal, or upper, half of the large intestine, enzymes from the small intestine complete the digestive process, and bacteria produce B vitamins (B12, thiamin, and riboflavin) as well as vitamin K. The primary function of the large intestine, however, is absorption of water and electrolytes from digestive residues (a process that usually takes 24 to 30 hours) and storage of fecal matter until it can be expelled. Churning movements of the large intestine gradually expose digestive residue to the absorbing walls. A progressive and more vigorous type of movement known as the gastrocolic reflex, which occurs only two or three times daily, propels the material toward the anus.

The large intestine consists of the

* Cecum and ascending (right) colon
* Transverse colon
* Descending (left) colon
* Sigmoid colon (which is connected to the rectum)

The cecum, which is at the beginning of the ascending colon, is the point at which the small intestine joins the large intestine. Projecting from the cecum is the appendix, which is a small finger-shaped tube that serves no known function. The large intestine secretes mucus and is largely responsible for the absorption of water from the stool.

Intestinal contents are liquid when they reach the large intestine but are normally solid by the time they reach the rectum as stool. The many bacteria that inhabit the large intestine can further digest some material, creating gas. Bacteria in the large intestine also make some important substances, such as vitamin K, which plays an important role in blood clotting. These bacteria are necessary for healthy intestinal function, and some diseases and antibiotics can upset the balance between the different types of bacteria in the large intestine.

The intestines are a long, continuous tube running from the stomach to the anus. Most absorption of nutrients and water happens in the intestines. The intestines include the small intestine, large intestine, and rectum.

The small intestine (small bowel) is about 20 feet long and about an inch in diameter. Its job is to absorb most of the nutrients from what we eat and drink. Velvety tissue lines the small intestine, which is divided into the duodenum, jejunum, and ileum.


The large intestine (colon or large bowel) is about 5 feet long and about 3 inches in diameter. The colon absorbs water from wastes, creating stool. As stool enters the rectum, nerves there create the urge to defecate.

Intestine Conditions

* Stomach flu (enteritis): Inflammation of the small intestine. Infections (from viruses, bacteria, or parasites) are the common cause.
* Small intestine cancer: Rarely, cancer may affect the small intestine. There are multiple types of small intestine cancer, causing about 1,100 deaths each year.
* Celiac disease: An "allergy" to gluten (a protein in most breads) causes the small intestine not to absorb nutrients properly. Abdominal pain and weight loss are usual symptoms.
* Carcinoid tumor: A benign or malignant growth in the small intestine. Diarrhea and skin flushing are the most common symptoms.
* Intestinal obstruction: A section of either the small or large bowel can become blocked or twisted or just stop working. Belly distension, pain, constipation, and vomiting are symptoms.
* Colitis: Inflammation of the colon. Inflammatory bowel disease or infections are the most common causes.
* Diverticulosis: Small weak areas in the colon's muscular wall allow the colon's lining to protrude through, forming tiny pouches called diverticuli. Diverticuli usually cause no problems, but can bleed or become inflamed.
* Diverticulitis: When diverticuli become inflamed or infected, diverticulitis results. Abdominal pain and constipation are common symptoms.
* Colon bleeding (hemorrhage): Multiple potential colon problems can cause bleeding. Rapid bleeding is visible in the stool, but very slow bleeding might not be.
* Inflammatory bowel disease: A name for either Crohn's disease or ulcerative colitis. Both conditions can cause colon inflammation (colitis).
* Crohn's disease: An inflammatory condition that usually affects the colon and intestines. Abdominal pain and diarrhea (which may be bloody) are symptoms.
* Ulcerative colitis: An inflammatory condition that usually affects the colon and rectum. Like Crohn's disease, bloody diarrhea is a common symptom of ulcerative colitis.
* Diarrhea: Stools that are frequent, loose, or watery are commonly called diarrhea. Most diarrhea is due to self-limited, mild infections of the colon or small intestine.
* Salmonellosis: Salmonella bacteria can contaminate food and infect the intestine. Salmonella causes diarrhea and stomach cramps, which usually resolve without treatment.
* Shigellosis: Shigella bacteria can contaminate food and infect the intestine. Symptoms include fever, stomach cramps, and diarrhea, which may be bloody.
* Traveler's diarrhea: Many different bacteria commonly contaminate water or food in developing countries. Loose stools, sometimes with nausea and fever, are symptoms.
* Colon polyps: Polyps are growths inside the colon. Colon cancer can often develop in these polyps after many years.
* Colon cancer: Cancer of the colon affects more than 100,000 Americans each year. Most colon cancer is preventable through regular screening.
* Rectal cancer: Colon and rectal cancer are similar in prognosis and treatment. Doctors often consider them together as colorectal cancer.
* Constipation: When bowel movements are infrequent or difficult.
* Irritable bowel syndrome (IBS): Irritable bowel syndrome, also known as IBS, is an intestinal disorder that causes irritable abdominal pain or discomfort, cramping or bloating, and diarrhea or constipation.
* Rectal prolapse: Part or all of the wall of the rectum can move out of position, sometimes coming out of the anus, when straining during a bowel movement.
* Intussusception: Occurring mostly in children, the small intestine can collapse into itself like a telescope. It can become life-threatening if not treated.

Intestine Tests

* Capsule endoscopy: A person swallows a capsule that contains a camera. The camera takes pictures of possible problems in the small intestine, sending the images to a receiver worn on the person's belt.
* Upper endoscopy, EGD (esophagogastroduodenoscopy): A flexible tube with a camera on its end (endoscope) is inserted through the mouth. The endoscope allows examination of the duodenum, stomach, and esophagus.
* Colonoscopy: An endoscope is inserted into the rectum and advanced through the colon. A doctor can examine the entire colon with a colonoscope.
* Virtual colonoscopy: A test in which an X-ray machine and a computer create images of the inside of the colon. If problems are found, a traditional colonoscopy is usually needed.
* Fecal occult blood testing: A test for blood in the stool. If blood is found in the stool, a colonoscopy may be needed to look for the source.
* Sigmoidoscopy: An endoscope is inserted into the rectum and advanced through the left side of the colon. Sigmoidoscopy cannot be used to view the middle and right sides of the colon.
* Colon biopsy: During a colonoscopy, a small piece of colon tissue may be removed for testing. A colon biopsy can help diagnose cancer, infection, or inflammation.

Intestine Treatments

* Antidiarrheal agents: Various medicines can slow down diarrhea, reducing discomfort. Reducing diarrhea does not slow down recovery for most diarrheal illnesses.
* Stool softeners: Over-the-counter and prescription medicines can soften the stool and reduce constipation.
* Laxatives: Medicines can relieve constipation by a variety of methods including stimulating the bowel muscles, and bringing in more water.
* Enema: A term for pushing liquid into the colon through the anus. Enemas can deliver medicines to treat constipation or other colon conditions.
* Colonoscopy: Using tools passed through an endoscope, a doctor can treat certain colon conditions. Bleeding, polyps, or cancer might be treated by colonoscopy.
* Polypectomy: During colonoscopy, removal of a colon polyp is called polypectomy.
* Colon surgery: Using open or laparoscopic surgery, part or all of the colon may be removed (colectomy). This may be done for severe bleeding, cancer, or ulcerative colitis.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1341 2022-04-07 01:06:18

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,384

Re: Miscellany

1315) Delirium

Summary

Delirium (also known as acute confusional state) is an organically caused decline from a previous baseline mental functioning, that develops over a short period of time, typically hours to days. Delirium is a syndrome encompassing disturbances in attention, consciousness, and cognition. It may also involve other neurological deficits, such as psychomotor disturbances (e.g. hyperactive, hypoactive, or mixed), impaired sleep-wake cycle, emotional disturbances, and perceptual disturbances (e.g. hallucinations and delusions), although these features are not required for diagnosis.

Delirium is caused by an acute organic process, which is a physically identifiable structural, functional, or chemical problem in the brain that may arise from a disease process outside the brain that nonetheless affects the brain. It may result from an underlying disease process (e.g. infection, hypoxia), side effect of a medication, withdrawal from drugs, over-consumption of alcohol, usage of hallucinogenic deliriants, or from any number of factors affecting one's overall health (e.g. malnutrition, pain, etc.). In contrast, fluctuations in mental status/function due to changes in primarily psychiatric processes or diseases (e.g. schizophrenia, bipolar disorder) do not, by definition, meet the criteria for 'delirium.'

Delirium may be difficult to diagnose without the proper establishment of a person's usual mental function. Without careful assessment and history, delirium can easily be confused with a number of psychiatric disorders or chronic organic brain syndromes because of many overlapping signs and symptoms in common with dementia, depression, psychosis, etc. Delirium may manifest from a baseline of existing mental illness, baseline intellectual disability, or dementia, without being due to any of these problems.

Treatment of delirium requires identifying and managing the underlying causes, managing delirium symptoms, and reducing the risk of complications. In some cases, temporary or symptomatic treatments are used to comfort the person or to facilitate other care (e.g. preventing people from pulling out a breathing tube). Antipsychotics are not supported for the treatment or prevention of delirium among those who are in hospital; however, they may be used in cases where a patient has a history of anxiety or hallucinations, or is a danger to themselves or others. When delirium is caused by alcohol or sedative hypnotic withdrawal, benzodiazepines are typically used as a treatment. There is evidence that the risk of delirium in hospitalized people can be reduced by systematic good general care. In a DSM assessment, delirium was found to affect 14–24% of all hospitalized individuals, with an overall prevalence in the general population of 1–2%, increasing with age and reaching 14% of adults over age 85. Among older adults, delirium was found to occur in 15–53% of those post-surgery, 70–87% of those in the ICU, and in up to 60% of those in nursing homes or post-acute care settings. Among those requiring critical care, delirium is a risk factor for death within the next year.

Details

Delirium is an abrupt change in the brain that causes mental confusion and emotional disruption. It makes it difficult to think, remember, sleep, pay attention, and more.

You might experience delirium during alcohol withdrawal, after surgery, or with dementia.

Delirium is usually temporary and can often be treated effectively.

Types of delirium

Delirium is categorized by its cause, severity, and characteristics:

* Delirium tremens is a severe form of the condition experienced by people who are trying to stop drinking. Usually, they’ve been drinking large amounts of alcohol for many years.
* Hyperactive delirium is characterized by being highly alert and uncooperative.
* Hypoactive delirium is more common. With this type, you tend to sleep more and become inattentive and disorganized with daily tasks. You might miss meals or appointments.

Some people have a combination of both hyperactive and hypoactive delirium (called mixed delirium), alternating between the two states.

What causes delirium?

Diseases that cause inflammation and infection, such as pneumonia, can interfere with brain function. Additionally, taking certain medications (such as blood pressure medicine) or misusing drugs can disrupt chemicals in the brain.

Alcohol withdrawal and eating or drinking poisonous substances can also cause delirium.

When you have trouble breathing due to asthma or another condition, your brain doesn’t get the oxygen it needs. Any condition or factor that significantly changes your brain function can cause severe mental confusion.

Who’s at risk for delirium?

If you’re over 65 or have numerous health conditions, you’re more at risk for delirium.

Others who have increased risk of delirium include:

* people who’ve had surgery
* people withdrawing from alcohol and drugs
* those who’ve experienced conditions that damage the brain (for example, stroke and dementia)
* people who are under extreme emotional stress

The following factors may also contribute to delirium:

* sleep deprivation
* certain medications (such as sedatives, blood pressure medications, sleeping pills, and pain relievers)
* dehydration
* poor nutrition
* infections such as a urinary tract infection

Symptoms of delirium

Delirium affects your mind, emotions, muscle control, and sleep patterns.

You might have a hard time concentrating or feel confused as to your whereabouts. You may also move more slowly or quickly than usual, and experience mood swings.

Other symptoms may include:

* not thinking or speaking clearly
* sleeping poorly and feeling drowsy
* reduced short-term memory
* loss of muscle control (for example, incontinence)

How is delirium diagnosed?

Confusion assessment method:

Your doctor will observe your symptoms and examine you to see if you can think, speak, and move normally.

Some health practitioners use the Confusion Assessment Method (CAM) to diagnose or rule out delirium. This helps them observe whether or not:

* your behavior changes throughout the day, especially if you’re hospitalized
* you have a hard time paying attention or following others as they speak
* you’re rambling
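The published CAM algorithm behind these observations can be sketched as a simple boolean rule: delirium is suggested when an acute onset with a fluctuating course and inattention are both present, together with either disorganized thinking or an altered level of consciousness. The function and parameter names below are my own illustration, not a clinical tool.

```python
def cam_positive(acute_onset_fluctuating: bool,
                 inattention: bool,
                 disorganized_thinking: bool,
                 altered_consciousness: bool) -> bool:
    """Return True when the CAM screening pattern for delirium is met:
    features 1 AND 2, plus either feature 3 OR feature 4."""
    return (acute_onset_fluctuating and inattention
            and (disorganized_thinking or altered_consciousness))

# Behaviour that changes through the day (feature 1), trouble paying
# attention (feature 2), and rambling speech (feature 3) screen positive:
print(cam_positive(True, True, True, False))  # True
```

Note that inattention alone, or fluctuating behaviour alone, does not satisfy the rule; both core features must co-occur with at least one of the other two.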

Tests and exams

Many factors can cause changes in brain chemistry. Your doctor will try to determine the cause of the delirium by running tests relevant to your symptoms and medical history.

One or more of the following tests may be needed to check for imbalances:

* blood chemistry test
* head scans
* drug and alcohol tests
* thyroid tests
* liver tests
* a chest X-ray
* urine tests

How is delirium treated?

Depending on the cause of the delirium, treatment may include taking or stopping certain medications.

In older adults, an accurate diagnosis is important for treatment, as delirium symptoms are similar to those of dementia, but the treatments are very different.

Medications:

Your doctor will prescribe medications to treat the underlying cause of your delirium. For example, if your delirium is caused by a severe asthma attack, you might need an inhaler or breathing machine to restore your breathing.

If a bacterial infection is causing the delirium symptoms, antibiotics may be prescribed.

In some cases, your doctor may recommend that you stop drinking alcohol or stop taking certain medications (such as codeine or other drugs that depress your system).

If you’re agitated or depressed, you may be given small doses of one of the following medications:

* antidepressants to relieve depression
* sedatives to ease alcohol withdrawal
* dopamine blockers to help with drug poisoning
* thiamine to help prevent confusion

Counseling

If you’re feeling disoriented, counseling may help to anchor your thoughts.

Counseling is also used as a treatment for people whose delirium was brought on by drug or alcohol use. In these cases, the treatment can help you abstain from using the substances that brought on the delirium.

In all cases, counseling is intended to make you feel comfortable and give you a safe place to discuss your thoughts and feelings.

Recovering from delirium

Full recovery from delirium is possible with the right treatment. It can take up to a few weeks for you to think, speak, and feel physically like your old self.

You might have side effects from the medications used to treat this condition. Speak to your doctor about any concerns you may have.




Offline

#1342 2022-04-08 00:34:30

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,384

Re: Miscellany

1316) Golden Gate Bridge

Summary

The Golden Gate Bridge is a suspension bridge spanning the Golden Gate, the one-mile-wide (1.6 km) strait connecting San Francisco Bay and the Pacific Ocean. The structure links the U.S. city of San Francisco, California—the northern tip of the San Francisco Peninsula—to Marin County, carrying both U.S. Route 101 and California State Route 1 across the strait. It also carries pedestrian and bicycle traffic, and is designated as part of U.S. Bicycle Route 95. Being declared one of the Wonders of the Modern World by the American Society of Civil Engineers, the bridge is one of the most internationally recognized symbols of San Francisco and California. It was initially designed by engineer Joseph Strauss in 1917.

The Frommer's travel guide describes the Golden Gate Bridge as "possibly the most beautiful, certainly the most photographed, bridge in the world." At the time of its opening in 1937, it was both the longest and the tallest suspension bridge in the world, with a main span of 4,200 feet (1,280 m) and a total height of 746 feet (227 m).


Details

Golden Gate Bridge is a suspension bridge spanning the Golden Gate in California to link San Francisco with Marin county to the north. Upon its completion in 1937, it was the tallest and longest suspension bridge in the world. The Golden Gate Bridge came to be recognized as a symbol of the power and progress of the United States, and it set a precedent for suspension-bridge design around the world. Although other bridges have since surpassed it in size, it remains incomparable in the magnificence of its setting and is said to be the most photographed bridge in the world. It carries both U.S. Route 101 and California State Route 1 (Pacific Coast Highway) across the strait and features a pedestrian walkway.

The bridge’s orange vermilion color, suggested by consulting architect Irving Morrow, has a dual function, both fitting in with the surrounding natural scenery and being clearly visible to ships in fog. At night the bridge is floodlit and shines with a golden luminescence that reflects off the waters of the bay and creates a magical effect.

Its construction, under the supervision of chief engineer Joseph B. Strauss, began in January 1933 and involved many challenges. The strait has rapidly running tides, frequent storms, and fogs that made construction difficult. During one such fog on August 14, 1933, a cargo vessel collided with the access trestle, causing serious damage. Workers also had to contend with the problem of blasting rock under deep water to plant earthquake-proof foundations. A movable safety net, an innovation introduced by Strauss, saved a total of 19 men from falling to their deaths during construction. However, the safety net failed on February 17, 1937, when it gave way under the weight of a scaffolding collapse; of the 13 men who were on the scaffolding, one jumped clear, two survived the fall into the water, and 10 were killed. One other worker fell to his death during the construction, for a total of 11 worker deaths over four years.

The bridge opened to vehicular traffic on May 28, 1937, under budget and ahead of schedule. The main span, 1,280 metres (4,200 feet) long, is suspended from two cables hung from towers 227 metres (746 feet) high; at midpoint the roadway is 81 metres (265 feet) above mean high water. Until the completion of the Verrazzano-Narrows Bridge in New York City in 1964, it had the longest main span in the world.
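The imperial and metric figures quoted above can be cross-checked with the exact definition 1 ft = 0.3048 m. The short sketch below is my own arithmetic check, not part of the source; it confirms the rounded values.

```python
# Exact international foot-to-metre conversion factor.
FT_TO_M = 0.3048

main_span_m = 4200 * FT_TO_M   # main span: 4,200 ft ≈ 1,280 m
tower_height_m = 746 * FT_TO_M # tower height: 746 ft ≈ 227 m
clearance_m = 265 * FT_TO_M    # roadway above mean high water: 265 ft ≈ 81 m

print(round(main_span_m), round(tower_height_m), round(clearance_m))
# 1280 227 81
```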




Offline

#1343 2022-04-09 00:42:45

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,384

Re: Miscellany

1317) Thallophyte

Summary

Thallophytes (Thallophyta or Thallobionta) are a polyphyletic group of non-motile organisms traditionally described as "thalloid plants", "relatively simple plants" or "lower plants". They form an abandoned division of kingdom Plantae that includes lichens and algae and occasionally bryophytes, bacteria and slime moulds. Thallophytes have a hidden reproductive system and hence they are also incorporated into the similarly abandoned Cryptogamae (together with ferns), as opposed to Phanerogamae. Thallophytes are defined by having undifferentiated bodies (thalloid, pseudotissue), as opposed to cormophytes (Cormophyta) with roots and stems. Various groups of thallophytes are major contributors to marine ecosystems.

Details

Thallophyte, also written Thallobionta or Thallophyta, denotes a polyphyletic group of generally non-motile organisms conventionally called “lower plants”, “relatively simple plants”, or “thalloid plants”. Because these plants have a hidden system of reproduction, they are included in the Cryptogamae. Unlike the cormophytes, which have roots and stems, thallophytes have undifferentiated bodies. Various groups of thallophytes are major contributors to the marine ecosystem. Here, we will learn more about Thallophyta, its characteristics and divisions, and discuss some important questions.

What is Thallophyta?

Thallophytes are a polyphyletic group of non-motile organisms traditionally described as “thalloid plants”, “relatively simple plants”, or “lower plants”. Thallophytes make up an abandoned division of kingdom Plantae that comprises lichens and algae and occasionally bryophytes, bacteria and slime moulds.

In other words, a thallophyte can be defined as “any of a group of plants or plantlike organisms (such as algae and fungi) that lack differentiated stems, leaves, and roots and that were formerly classified as a primary division (Thallophyta) of the plant kingdom”. The plant body lacks a vascular system; that is, no conducting tissues are present.

Characteristics of Thallophyta

Some of the important characteristics of thallophyta are given below:

* The male and female reproductive organs are single-celled.
* No embryo is formed after fertilization.
* Unlike higher plants, they contain no xylem or phloem; vascular tissue is absent.
* In many members (such as the fungi), the cell wall is not made of cellulose.
* Part of the glucose released by photosynthesis is consumed immediately; the rest is converted into starch, a more complex compound.
* The plant stores food in the form of starch.
* Most members manufacture their own food and are autotrophic, but a few, such as the fungi, depend on other sources for food.
* Members of this group are mostly found in wet or moist areas, since they lack the vascular tissue and “true roots” needed to draw up water and minerals.
* The plants of this group are among the most primitive forms of plants. Their body is not differentiated into leaves, stems, and roots but appears as an undifferentiated thallus. The group is commonly called algae.
* Sexual reproduction takes place by the fusion of two gametes.
* Alternation of generations may or may not be present. The life cycle may be haplontic, diplontic, or diplohaplontic.

Division of Thallophyta

The plant class Thallophyta is sub-divided into two subdivisions: Algae and Fungi.

Algae

These are basically thalloids bearing chlorophyll. They are autotrophic and mostly aquatic plants. A striking example of symbiosis involves green algae and sloths, which are prominent in the lush tropical rainforests of South America and Central America. Sloth fur is very fine and easily absorbs water, so it provides a moist, damp environment for the algae to grow. The algae in return provide the sloth with extra nutrition and camouflage from predators. Example: Spirogyra.

The basic characteristics of algae are provided here:

* Algae do not have any leaves, stems, or roots.
* To do photosynthesis, they have chlorophyll as well as other forms of pigments.
* Algae can be both unicellular and multicellular.
* Unicellular algae are often found in water, particularly in plankton.

Fungi

They are achlorophyllous (i.e., they do not possess chlorophyll) heterotrophic thallophytes. To make up for the lack of chlorophyll, fungi may develop a symbiotic relationship with algae or cyanobacteria: the algae, having chlorophyll, can supply food, and the fungi in return provide a safe environment that protects the algae from UV rays. A lichen is such an association, in which the two organisms act together as a single unit.

Some of the characteristics of fungi are:

* They are non-motile, i.e., they cannot move.
* They are known as the best recyclers in the plant kingdom.
* Unlike in plants, their cell walls are made of chitin instead of cellulose.

Things to Remember

* Thallophytes are a polyphyletic group of non-motile organisms traditionally described as “thalloid plants”, “relatively simple plants”, or “lower plants”.
* Thallophytes form an abandoned division of kingdom Plantae that comprises lichens and algae and occasionally bryophytes, bacteria and slime moulds.
* Unlike higher plants, they contain no xylem or phloem; vascular tissue is absent.
* In many members, the cell wall is not made of cellulose.
* Part of the glucose released by photosynthesis is consumed immediately; the rest is converted into starch, a more complex compound.
* The plants of this group are among the most primitive forms of plants. Their body is not differentiated into leaves, stems, and roots but appears as an undifferentiated thallus. The group is commonly called algae.




Offline

#1344 2022-04-10 01:31:01

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,384

Re: Miscellany

1318) Bryophyte

Summary

Bryophytes are a proposed taxonomic division containing three groups of non-vascular land plants (embryophytes): the liverworts, hornworts and mosses. They are characteristically limited in size and prefer moist habitats although they can survive in drier environments. The bryophytes consist of about 20,000 plant species. Bryophytes produce enclosed reproductive structures (gametangia and sporangia), but they do not produce flowers or seeds. They reproduce sexually by spores and asexually by fragmentation or the production of gemmae. Though bryophytes were considered a paraphyletic group in recent years, almost all of the most recent phylogenetic evidence supports the monophyly of this group, as originally classified by Wilhelm Schimper in 1879.

Habitat

Bryophytes exist in a wide variety of habitats. They can be found growing in a range of temperatures (from cold arctic regions to hot deserts), elevations (sea level to alpine), and moisture levels (from dry deserts to wet rain forests). Bryophytes can grow where vascularized plants cannot because they do not depend on roots for uptake of nutrients from soil. Bryophytes can survive on rocks and bare soil.

Life cycle

Like all land plants (embryophytes), bryophytes have life cycles with alternation of generations. In each cycle, a haploid gametophyte, each of whose cells contains a fixed number of unpaired chromosomes, alternates with a diploid sporophyte, whose cells contain two sets of paired chromosomes. Gametophytes produce haploid sperm and eggs, which fuse to form diploid zygotes that grow into sporophytes. Sporophytes produce haploid spores by meiosis, and these grow into gametophytes.
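The alternation of generations just described can be sketched as simple chromosome-set arithmetic: a haploid gametophyte (n) produces gametes (n), fertilization yields a diploid zygote/sporophyte (2n), and meiosis returns haploid spores (n). The function names below are my own illustration of this cycle, not terminology from the text.

```python
N = 1  # one chromosome set (haploid)

def fertilization(sperm_ploidy: int, egg_ploidy: int) -> int:
    """Gamete fusion adds the two chromosome sets (n + n = 2n)."""
    return sperm_ploidy + egg_ploidy

def meiosis(sporophyte_ploidy: int) -> int:
    """Meiosis halves the chromosome number (2n -> n)."""
    return sporophyte_ploidy // 2

gametophyte = N                    # haploid gametophyte
zygote = fertilization(N, N)       # diploid zygote grows into the sporophyte
spore = meiosis(zygote)            # haploid spores grow into new gametophytes

print(gametophyte, zygote, spore)  # 1 2 1
```

The round trip ends where it began: the spore has the same ploidy as the gametophyte it will grow into, which is what makes the cycle an alternation rather than a one-way progression.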

Bryophytes are gametophyte dominant, meaning that the more prominent, longer-lived plant is the haploid gametophyte. The diploid sporophytes appear only occasionally and remain attached to and nutritionally dependent on the gametophyte. In bryophytes, the sporophytes are always unbranched and produce a single sporangium (spore producing capsule), but each gametophyte can give rise to several sporophytes at once.

The sporophyte develops differently in the three groups. Both mosses and hornworts have a meristem zone where cell division occurs. In hornworts, the meristem starts at the base where the foot ends, and the division of cells pushes the sporophyte body upwards. In mosses, the meristem is located between the capsule and the top of the stalk (seta) and produces cells downward, elongating the stalk and elevating the capsule. In liverworts the meristem is absent, and the elongation of the sporophyte is caused almost exclusively by cell expansion.

Liverworts, mosses and hornworts spend most of their lives as gametophytes. Gametangia (gamete-producing organs), archegonia and antheridia, are produced on the gametophytes, sometimes at the tips of shoots, in the axils of leaves or hidden under thalli. Some bryophytes, such as the liverwort Marchantia, create elaborate structures to bear the gametangia that are called gametangiophores. Sperm are flagellated and must swim from the antheridia that produce them to archegonia which may be on a different plant. Arthropods can assist in transfer of sperm.

Fertilized eggs become zygotes, which develop into sporophyte embryos inside the archegonia. Mature sporophytes remain attached to the gametophyte. They consist of a stalk called a seta and a single sporangium or capsule. Inside the sporangium, haploid spores are produced by meiosis. These are dispersed, most commonly by wind, and if they land in a suitable environment can develop into a new gametophyte. Thus bryophytes disperse by a combination of swimming sperm and spores, in a manner similar to lycophytes, ferns and other cryptogams.

Details

Bryophyte is traditional name for any nonvascular seedless plant—namely, any of the mosses (division Bryophyta), hornworts (division Anthocerotophyta), and liverworts (division Marchantiophyta). Most bryophytes lack complex tissue organization, yet they show considerable diversity in form and ecology. They are widely distributed throughout the world and are relatively small compared with most seed-bearing plants.

The bryophytes show an alternation of generations between the independent gametophyte generation, which produces the sex organs and the sperm and eggs, and the dependent sporophyte generation, which produces the spores. In contrast to vascular plants, the bryophyte sporophyte usually lacks a complex vascular system and produces only one spore-containing organ (sporangium) rather than many. Furthermore, the gametophyte generation of the bryophyte is usually perennial and photosynthetically independent of the sporophyte, which forms an intimate interconnection with the gametophytic tissue, especially at the base, or foot, of the sporophyte. In most vascular plants, however, the gametophyte is dependent on the sporophyte. In bryophytes the long-lived and conspicuous generation is the gametophyte, while in vascular plants it is the sporophyte. Structures resembling stems, roots, and leaves are found on the gametophore of bryophytes, while these structures are found on the sporophytes in the vascular plants. The sporophyte releases spores, from which the gametophytes ultimately develop.

The gametophyte of some bryophyte species reproduces asexually, or vegetatively, by specialized masses of cells (gemmae) that are usually budded off and ultimately give rise to gametophytes. Fragmentation of the gametophyte also results in vegetative reproduction: each living fragment has the potential to grow into a complete gametophyte. The mature gametophyte of most mosses is leafy in appearance, but some liverworts and hornworts have a flattened gametophyte, called a thallus. The thallus tends to be ribbonlike in form and is often compressed against the substratum to which it is generally attached by threadlike structures called rhizoids. Rhizoids also influence water and mineral uptake.

General features

Thallose bryophytes vary in size from a length of 20 cm (8 inches) and a breadth of 5 cm (2 inches; the liverwort Monoclea) to less than 1 mm (0.04 inch) in width and less than 1 mm in length (male plants of the liverwort Sphaerocarpos). The thallus is sometimes one cell layer thick through most of its width (e.g., the liverwort Metzgeria) but may be many cell layers thick and have a complex tissue organization (e.g., the liverwort Marchantia). Branching of the thallus may be forked, regularly frondlike, digitate, or completely irregular. The margin of the thallus is often smooth but is sometimes toothed; it may be ruffled, flat, or curved inward or downward.

Leafy bryophytes grow up to 65 cm (2 feet) in height (the moss Dawsonia) or, if reclining, reach lengths of more than 1 metre (3.3 feet; the moss Fontinalis). They are generally less than 3 to 6 cm (1.2 to 2.4 inches) tall, and reclining forms are usually less than 2 cm (0.8 inch) long. Some, however, are less than 1 mm in size (the moss Ephemerum). Leaflike structures, known as phyllids, are arranged in rows of two or three or more around a shoot or may be irregularly arranged (e.g., the liverwort Takakia). The shoot may or may not appear flattened. The phyllids are usually attached by an expanded base and are mainly one cell thick. Many mosses, however, possess one or more midribs several cells in thickness. The phyllids of bryophytes generally lack vascular tissue and are thus not analogous to the true leaves of vascular plants.

Most gametophytes are green, and all except the gametophyte of the liverwort Cryptothallus have chlorophyll. Many have other pigments, especially in the cellulosic cell walls but sometimes within the cytoplasm of the cells.

Bryophytes form flattened mats, spongy carpets, tufts, turfs, or festooning pendants. These growth forms are usually correlated with the humidity and sunlight available in the habitat.

Distribution and abundance

Bryophytes are distributed throughout the world, from polar and alpine regions to the tropics. Water must, at some point, be present in the habitat in order for the sperm to swim to the egg (see below Natural history). Bryophytes do not live in extremely arid sites or in seawater, although some are found in perennially damp environments within arid regions and a few are found on seashores above the intertidal zone. A few bryophytes are aquatic. Bryophytes are most abundant in climates that are constantly humid and equable. The greatest diversity is at tropical and subtropical latitudes. Bryophytes (especially the moss Sphagnum) dominate the vegetation of peatland in extensive areas of the cooler parts of the Northern Hemisphere.

The geographic distribution patterns of bryophytes are similar to those of the terrestrial vascular plants, except that there are many genera and families and a few species of bryophytes that are almost cosmopolitan. Indeed, a few species show extremely wide distribution. Some botanists explain these broad distribution patterns on the theory that the bryophytes represent an extremely ancient group of plants, while others suggest that the readily dispersible small gemmae and spores enhance wide distribution.

The distribution of some bryophytes, however, is extremely restricted, yet they possess the same apparent dispersibility and ecological plasticity as do widespread bryophytes. Others show broad interrupted patterns that are represented also in vascular plants.

Importance to humans and ecology

The peat moss genus Sphagnum is an economically important bryophyte. The harvesting, processing, and sale of Sphagnum peat is a multimillion-dollar industry. Peat is used in horticulture, as an energy source (fuel), and, to a limited extent, in the extraction of organic products, in whiskey production, and as insulation.

Bryophytes are very important in initiating soil formation on barren terrain, in maintaining soil moisture, and in recycling nutrients in forest vegetation. Indeed, discerning the presence of particular bryophytes is useful in assessing the productivity and nutrient status of forest types. Further, through the study of bryophytes, various biological phenomena have been discovered that have had a profound influence on the development of research in such areas as genetics and cytology.

Natural history

The life cycle of bryophytes consists of an alternation of two stages, or generations, called the sporophyte and the gametophyte. Each generation has a different physical form. When a spore germinates, it usually produces the protonema, which precedes the appearance of the more elaborately organized gametophytic plant, the gametophyte, which produces the sex organs. The protonema is usually threadlike and is highly branched in the mosses but is reduced to only a few cells in most liverworts and hornworts. This early stage is usually called a protonema in mosses and a sporeling in liverworts and hornworts (see below Form and function).

The gametophyte—the thallose or leafy stage—is generally perennial and produces the male or female sex organs or both. The female sex organ is usually a flask-shaped structure called the archegonium. The archegonium contains a single egg enclosed in a swollen lower portion that is more than one cell thick. The neck of the archegonium is a single cell layer thick and sheathes a single thread of cells that forms the neck canal. When mature and completely moist, the neck canal cells of the archegonium disintegrate, releasing a column of fluid into the neck canal and the surrounding water. The egg remains in the base of the archegonium, ready for fertilization. The male sex organ, the antheridium, is a saclike structure made up of a jacket of sterile cells one cell thick; it encloses many cells, each of which, when mature, produces one sperm. The antheridium is usually attached to the gametophyte by a slender stalk. When wet, the jacket of the mature antheridium ruptures to release the sperm into the water. Each sperm has two flagella and swims in a corkscrew pattern. When a sperm enters the field of the fluid diffused from the neck canal, it swims toward the site of greatest concentration of this fluid and hence down the neck canal to the egg. Upon reaching the egg, the sperm burrows into its wall, and the egg nucleus unites with the sperm nucleus to produce the diploid zygote. The zygote remains in the archegonium and undergoes many mitotic cell divisions to produce an embryonic sporophyte. The lower cells of the archegonium also divide and produce a protective structure, called the calyptra, that sheathes the growing embryo.

As the sporophyte enlarges, it is dependent on the gametophore for water and minerals and, to a large degree, for nutrients manufactured by the gametophyte. The water and nutrients enter the developing sporophyte through the tissue at its base, or foot, which remains embedded in the gametophyte. Mature bryophytes have a single sporangium (spore-producing structure) on each sporophyte. The sporangium generally terminates an elongate stalk, or seta, by the time it is ready to shed its spores. Rupture of the sporangium usually involves specialized structures that enhance expulsion of the spores away from the parent gametophyte.

Nutrition

Bryophytes generate their nutrient materials through the photosynthetic activity of the chlorophyll pigments in the chloroplasts. In addition, most bryophytes absorb water and dissolved minerals over the surface of the gametophore. Water retention at the surface is assisted by the shape and overlapping of leaves, by an abundance of rhizoids, or by capillary spaces among these structures. Water loss through evaporation is rapid in most bryophytes.

A few bryophytes possess elaborate internal conducting systems (see below Form and function) that transfer water or manufactured nutrients through the gametophore, but most conduction is over the gametophore surface. In most mosses, water and nutrient transfer from the gametophore to the developing sporangium takes place along the seta and also via an internal conducting system. A protective cuticle covers the seta, reducing water loss. The calyptra that covers the developing sporangium prevents water loss in this fragile immature structure. In liverworts the sporangium remains close to the gametophore until it is mature; thus, a conducting system is not formed in the seta. In most hornworts there is also an internal conducting system within the developing horn-shaped sporangium. The internal movement of fluid in all parts of the bryophyte is extremely slow. Storage products include starch and lipids.

Ecology and habitats

Some bryophytes are unusually tolerant of extended periods of dryness and freezing, and, upon the return of moisture, they rapidly resume photosynthesis. The exact mechanism involved remains controversial.

Many bryophytes grow on soil or on the persistent remains of their own growth, as well as on living or decomposing material of other plants. Some grow on bare rock surfaces, and several are aquatic. The main requirements for growth appear to be a relatively stable substratum for attachment, a medium that retains moisture for extended periods, adequate sunlight, favourable temperature, and, for richest luxuriance, a nearly constantly humid atmosphere.

Unusual habitats include decomposing animal waste (many species in the moss family Splachnaceae), somewhat shaded cavern mouths (the liverwort Cyathodium and the mosses Mittenia and Schistostega), leaf surfaces (the moss Ephemeropsis and the liverwort genus Metzgeria and many species of the liverwort family Lejeuneaceae), salt pans (the liverwort Carrpos), bases of quartz pebbles (the moss Aschisma), and copper-rich substrata (the moss Scopelophila).

In humid temperate or subtropical climates, bryophytes often grow profusely, forming deep, soft carpets on forest floors and over rock surfaces, sheathing trunks and branches of trees and shrubs, and festooning branches. In broad-leaved forests of temperate areas, trees and boulders often harbour rich bryophyte stands, but it is near watercourses that bryophytes tend to reach their richest luxuriance and diversity.

In Arctic and Antarctic regions, bryophytes, especially mosses, form extensive cover, especially in wetlands, near watercourses, and in sites where snowmelt moisture is available for an extended part of the growing season. There they can dominate the vegetation cover and control the vegetation pattern and dynamics of associated plants. The same is true for alpine and subalpine environments in which many of the same species are involved.

Bryophytes, especially mosses, are important in nutrient cycling, in some cases making use of limited precipitation and airborne minerals that are thus made unavailable to the seed plant vegetation. Rapid evaporation from the moss mat is probably critical to some vegetation types by impeding moisture penetration to the root systems of seed plants and therefore indirectly controlling the vegetational composition of some forests.

Bryophytes are fundamental to the development of wetland habitats, especially of peatland. The moss genus Sphagnum leads to the development of waterlogged masses of highly acid peatland, in which decomposition is relatively slow. The formation of extensive bogs can control the hydrology of much of the surrounding landscape by behaving like a gigantic sponge that absorbs and holds vast quantities of water and influences the water table. Extension of this saturated living moss mat into living forest can drown the root systems of the forest trees, killing the forest and replacing it with bog. Peatland can also develop on calcareous terrain through the growth of other mosses, including species of the genera Drepanocladus and Calliergon. These mosses also build up a moss mat that, through organic accumulation of its own partially decomposed remains, alters the acidity of the site and makes it attractive to the formation of Sphagnum peatland.

Mosses colonize bare rock surfaces, leading ultimately to the initiation of soil formation. This in turn produces a substratum attractive to seed plant colonists that invade these mossy sites and, through their shading, eliminate the pioneer mosses but create a shaded habitat suitable for other bryophytes. These new colonists, in turn, are important in nutrient cycling in the developing forest vegetation.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1345 2022-04-11 00:50:25

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,384

Re: Miscellany

1319) Ligament

Summary

Ligament is a tough fibrous band of connective tissue that serves to support the internal organs and hold bones together in proper articulation at the joints. A ligament is composed of dense fibrous bundles of collagenous fibres and spindle-shaped cells known as fibrocytes, with little ground substance (a gel-like component of the various connective tissues).

Some ligaments are rich in collagenous fibres, which are sturdy and inelastic, whereas others are rich in elastic fibres, which are quite tough even though they allow elastic movement. At joints, ligaments form a capsular sac that encloses the articulating bone ends and a lubricating membrane, the synovial membrane. Sometimes the structure includes a recess, or pouch, lined by synovial tissue; this is called a bursa. Other ligaments fasten around or across bone ends in bands, permitting varying degrees of movement, or act as tie pieces between bones (such as the ribs or the bones of the forearm), restricting inappropriate movement.

Details

A ligament is the fibrous connective tissue that connects bones to other bones. It is also known as articular ligament, articular larua,[1] fibrous ligament, or true ligament. Other ligaments in the body include the:

* Peritoneal ligament: a fold of peritoneum or other membranes.
* Fetal remnant ligament: the remnants of a fetal tubular structure.
* Periodontal ligament: a group of fibers that attach the cementum of teeth to the surrounding alveolar bone.

Ligaments are similar to tendons and fasciae as they are all made of connective tissue. The differences among them are in the connections that they make: ligaments connect one bone to another bone, tendons connect muscle to bone, and fasciae connect muscles to other muscles. These are all found in the skeletal system of the human body. Ligaments cannot usually be regenerated naturally; however, there are periodontal ligament stem cells located near the periodontal ligament that are involved in the regeneration of the periodontal ligament in adults.

The study of ligaments is known as desmology.

Articular ligaments

"Ligament" most commonly refers to a band of dense regular connective tissue bundles made of collagenous fibers, with bundles protected by dense irregular connective tissue sheaths. Ligaments connect bones to other bones to form joints, while tendons connect bone to muscle. Some ligaments limit the mobility of articulations or prevent certain movements altogether.

Capsular ligaments are part of the articular capsule that surrounds synovial joints. They act as mechanical reinforcements. Extra-capsular ligaments work in concert with the other ligaments and provide joint stability. Intra-capsular ligaments, which are much less common, also provide stability but permit a far larger range of motion. Cruciate ligaments are paired ligaments in the form of a cross.

Ligaments are viscoelastic. They gradually strain when under tension and return to their original shape when the tension is removed. However, they cannot retain their original shape when extended past a certain point or for a prolonged period of time. This is one reason why dislocated joints must be set as quickly as possible: if the ligaments lengthen too much, then the joint will be weakened, becoming prone to future dislocations. Athletes, gymnasts, dancers, and martial artists perform stretching exercises to lengthen their ligaments, making their joints more supple.

The term hypermobility refers to the characteristic of people with more-elastic ligaments, allowing their joints to stretch and contort further; this is sometimes still called double-jointedness.


The consequence of a broken ligament can be instability of the joint. Not all broken ligaments need surgery, but, if surgery is needed to stabilise the joint, the broken ligament can be repaired. Scar tissue may prevent this. If it is not possible to fix the broken ligament, other procedures such as the Brunelli procedure can correct the instability. Instability of a joint can over time lead to wear of the cartilage and eventually to osteoarthritis.

Artificial ligaments

One of the most often torn ligaments in the body is the anterior cruciate ligament (ACL). The ACL is one of the ligaments crucial to knee stability, and persons who tear their ACL often undergo reconstructive surgery, which can be done through a variety of techniques and materials. One of these techniques is the replacement of the ligament with an artificial material. Artificial ligaments are made of a synthetic polymer, such as polyacrylonitrile fiber, polypropylene, PET (polyethylene terephthalate), or polyNaSS (poly(sodium styrene sulfonate)).

Examples:

Head and neck

* Cricothyroid ligament
* Periodontal ligament
* Suspensory ligament of the lens

Thorax

* Phrenoesophageal ligament
* Suspensory ligament of the breast

Pelvis

* Anterior sacroiliac ligament
* Posterior sacroiliac ligament
* Sacrotuberous ligament
* Sacrospinous ligament
* Inferior pubic ligament
* Superior pubic ligament
* Suspensory ligament of the penis

Wrist

* Palmar radiocarpal ligament
* Dorsal radiocarpal ligament
* Ulnar collateral ligament
* Radial collateral ligament
* Scapholunate Ligament

Knee

* Anterior cruciate ligament
* Lateral collateral ligament
* Posterior cruciate ligament
* Medial collateral ligament
* Cranial cruciate ligament — quadruped equivalent of anterior cruciate ligament
* Caudal cruciate ligament — quadruped equivalent of posterior cruciate ligament
* Patellar ligament

Peritoneal ligaments

Certain folds of peritoneum are referred to as ligaments. Examples include:

* The hepatoduodenal ligament, that surrounds the hepatic portal vein and other vessels as they travel from the duodenum to the liver.
* The broad ligament of the uterus, also a fold of peritoneum.

Fetal remnant ligaments

Certain tubular structures from the fetal period are referred to as ligaments after they close up and turn into cord-like structures.




Offline

#1346 2022-04-11 21:31:06

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,384

Re: Miscellany

1320) Tendon

Summary

Tendon is a tissue that attaches a muscle to other body parts, usually bones. Tendons are the connective tissues that transmit the mechanical force of muscle contraction to the bones; the tendon is firmly connected to muscle fibres at one end and to components of the bone at its other end. Tendons are remarkably strong, having one of the highest tensile strengths found among soft tissues. Their great strength, which is necessary for withstanding the stresses generated by muscular contraction, is attributed to the hierarchical structure, parallel orientation, and tissue composition of tendon fibres.

A tendon is composed of dense fibrous connective tissue made up primarily of collagenous fibres. Primary collagen fibres, which consist of bunches of collagen fibrils, are the basic units of a tendon. Primary fibres are bunched together into primary fibre bundles (subfasicles), groups of which form secondary fibre bundles (fasicles). Multiple secondary fibre bundles form tertiary fibre bundles, groups of which in turn form the tendon unit. Primary, secondary, and tertiary bundles are surrounded by a sheath of connective tissue known as endotenon, which facilitates the gliding of bundles against one another during tendon movement. Endotenon is contiguous with epitenon, the fine layer of connective tissue that sheaths the tendon unit. Lying outside the epitenon and contiguous with it is a loose elastic connective tissue layer known as paratenon, which allows the tendon to move against neighbouring tissues. The tendon is attached to the bone by collagenous fibres (Sharpey fibres) that continue into the matrix of the bone.

The primary cell types of tendons are the spindle-shaped tenocytes (fibrocytes) and tenoblasts (fibroblasts). Tenocytes are mature tendon cells that are found throughout the tendon structure, typically anchored to collagen fibres. Tenoblasts are spindle-shaped immature tendon cells that give rise to tenocytes. Tenoblasts typically occur in clusters, free from collagen fibres. They are highly proliferative and are involved in the synthesis of collagen and other components of the extracellular matrix.

The composition of a tendon is similar to that of ligaments and aponeuroses.

Details

A tendon or sinew is a tough, high-tensile-strength band of dense fibrous connective tissue that connects muscle to bone. It is able to efficiently transmit the mechanical forces of muscle contraction to the skeletal system without sacrificing its ability to withstand significant amounts of tension.

Tendons are similar to ligaments; both are made of collagen. Ligaments connect one bone to another, while tendons connect muscle to bone.

Structure

Histologically, tendons consist of dense regular connective tissue. The main cellular components of tendons are specialized fibroblasts called tendon cells (tenocytes). Tenocytes synthesize the extracellular matrix of tendons, abundant in densely packed collagen fibers. The collagen fibers are parallel to each other and organized into tendon fascicles. Individual fascicles are bound by the endotendineum, which is a delicate loose connective tissue containing thin collagen fibrils and elastic fibres. Groups of fascicles are bounded by the epitenon, which is a sheath of dense irregular connective tissue. The whole tendon is enclosed by a fascia. The space between the fascia and the tendon tissue is filled with the paratenon, a fatty areolar tissue. Normal healthy tendons are anchored to bone by Sharpey's fibres.

Extracellular matrix

The dry mass of normal tendons, which makes up 30–45% of their total mass, is composed of:

* 60–85% collagen

** 60–80% collagen I
** 0–10% collagen III
** 2% collagen IV
** small amounts of collagens V, VI, and others

* 15–40% non-collagenous extracellular matrix components, including:

** 3% cartilage oligomeric matrix protein
** 1–2% elastin
** 1–5% proteoglycans
** 0.2% inorganic components such as copper, manganese, and calcium

While type I collagen makes up most of the collagen in tendon, many minor collagens are present that play vital roles in proper tendon development and function. These include type II collagen in the cartilaginous zones, type III collagen in the reticulin fibres of the vascular walls, type IX collagen, type IV collagen in the basement membranes of the capillaries, type V collagen in the vascular walls, and type X collagen in the mineralized fibrocartilage near the interface with the bone.
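As a quick arithmetic check, the composition figures above can be combined multiplicatively. The sketch below uses midpoint values chosen purely for illustration; the sample mass and midpoints are assumptions, not figures from the text:

```python
# Back-of-the-envelope composition of a hypothetical 100 g tendon sample,
# using midpoints of the ranges quoted above (illustrative assumptions).
total_mass_g = 100.0            # hypothetical sample mass
dry_fraction = 0.375            # midpoint of the 30-45% dry-mass range
collagen_of_dry = 0.725         # midpoint of the 60-85% collagen range
collagen_I_of_collagen = 0.70   # midpoint of the 60-80% collagen I range

dry_mass = total_mass_g * dry_fraction                    # 37.5 g
collagen_mass = dry_mass * collagen_of_dry                # ~27.2 g
collagen_I_mass = collagen_mass * collagen_I_of_collagen  # ~19.0 g

print(f"dry mass:       {dry_mass:.1f} g")
print(f"total collagen: {collagen_mass:.2f} g")
print(f"collagen I:     {collagen_I_mass:.2f} g")
```

So, on these assumed midpoints, type I collagen alone would account for roughly a fifth of the sample's total (wet) mass.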

Ultrastructure and collagen synthesis

Collagen fibres coalesce into macroaggregates. After secretion from the cell, cleaved by procollagen N- and C-proteases, the tropocollagen molecules spontaneously assemble into insoluble fibrils. A collagen molecule is about 300 nm long and 1–2 nm wide, and the diameter of the fibrils that are formed can range from 50–500 nm. In tendons, the fibrils then assemble further to form fascicles, which are about 10 mm in length with a diameter of 50–300 μm, and finally into a tendon fibre with a diameter of 100–500 μm.
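To make the spread of scales in this hierarchy easier to compare, the dimensions quoted above can be expressed in a single common unit. A minimal sketch, in which the upper bound of each quoted range is chosen arbitrarily for illustration:

```python
# Size hierarchy quoted above, expressed in micrometres for comparison.
# Upper-bound values of each quoted range are used (an assumption).
NM_TO_UM = 1e-3  # nanometres -> micrometres

hierarchy = {
    "collagen molecule length": 300 * NM_TO_UM,  # ~300 nm
    "collagen molecule width":  2 * NM_TO_UM,    # 1-2 nm
    "fibril diameter":          500 * NM_TO_UM,  # 50-500 nm
    "fascicle diameter":        300.0,           # 50-300 um
    "tendon fibre diameter":    500.0,           # 100-500 um
}
for name, um in hierarchy.items():
    print(f"{name:26s} {um:10.4f} um")
```

The table makes the point that each assembly step spans roughly two to three orders of magnitude, from nanometre-scale molecules to sub-millimetre fibres.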

The collagen in tendons is held together with proteoglycan (a compound consisting of a protein bonded to glycosaminoglycan groups, present especially in connective tissue) components including decorin and, in compressed regions of tendon, aggrecan, which are capable of binding to the collagen fibrils at specific locations. The proteoglycans are interwoven with the collagen fibrils – their glycosaminoglycan (GAG) side chains have multiple interactions with the surface of the fibrils – showing that the proteoglycans are important structurally in the interconnection of the fibrils. The major GAG components of the tendon are dermatan sulfate and chondroitin sulfate, which associate with collagen and are involved in the fibril assembly process during tendon development. Dermatan sulfate is thought to be responsible for forming associations between fibrils, while chondroitin sulfate is thought to be more involved with occupying volume between the fibrils to keep them separated and help withstand deformation. The dermatan sulfate side chains of decorin aggregate in solution, and this behavior can assist with the assembly of the collagen fibrils. When decorin molecules are bound to a collagen fibril, their dermatan sulfate chains may extend and associate with other dermatan sulfate chains on decorin that is bound to separate fibrils, therefore creating interfibrillar bridges and eventually causing parallel alignment of the fibrils.

Tenocytes

The tenocytes produce the collagen molecules, which aggregate end-to-end and side-to-side to produce collagen fibrils. Fibril bundles are organized to form fibres with the elongated tenocytes closely packed between them. There is a three-dimensional network of cell processes associated with collagen in the tendon. The cells communicate with each other through gap junctions, and this signalling gives them the ability to detect and respond to mechanical loading. This communication relies mainly on two proteins: connexin 43, present where the cell processes meet and in the cell bodies, and connexin 32, present only where the processes meet.

Blood vessels may be visualized within the endotendon running parallel to collagen fibres, with occasional branching transverse anastomoses.

The internal tendon bulk is thought to contain no nerve fibres, but the epitenon and paratenon contain nerve endings, while Golgi tendon organs are present at the myotendinous junction between tendon and muscle.

Tendon length varies in all major groups and from person to person. Tendon length is, in practice, the deciding factor regarding actual and potential muscle size. For example, all other relevant biological factors being equal, a man with shorter tendons and a longer biceps muscle will have greater potential for muscle mass than a man with a longer tendon and a shorter muscle. Successful bodybuilders will generally have shorter tendons. Conversely, in sports requiring athletes to excel in actions such as running or jumping, it is beneficial to have a longer-than-average Achilles tendon and a shorter calf muscle.

Tendon length is determined by genetic predisposition and has not been shown to either increase or decrease in response to environment, unlike muscles, which can be shortened by trauma, use imbalances, and a lack of recovery and stretching. In addition, tendons allow muscles to be at an optimal distance from the site where they actively engage in movement, passing through regions where space is at a premium, such as the carpal tunnel.

Functions

Traditionally, tendons have been considered a mechanism by which muscles connect to bone, as well as to other muscles, functioning to transmit forces. This connection allows tendons to passively modulate forces during locomotion, providing additional stability with no active work. However, over the past two decades, much research has focused on the elastic properties of some tendons and their ability to function as springs. Not all tendons are required to perform the same functional role, with some predominantly positioning limbs, such as the fingers when writing (positional tendons), and others acting as springs to make locomotion more efficient (energy storing tendons). Energy storing tendons can store and recover energy at high efficiency. For example, during a human stride, the Achilles tendon stretches as the ankle joint dorsiflexes. During the last portion of the stride, as the foot plantar-flexes (pointing the toes down), the stored elastic energy is released. Furthermore, because the tendon stretches, the muscle is able to function with less or even no change in length, allowing the muscle to generate more force.

The mechanical properties of the tendon are dependent on the collagen fiber diameter and orientation. The collagen fibrils are parallel to each other and closely packed, but show a wave-like appearance due to planar undulations, or crimps, on a scale of several micrometers. In tendons, the collagen fibres have some flexibility due to the absence of hydroxyproline and proline residues at specific locations in the amino acid sequence, which allows the formation of other conformations such as bends or internal loops in the triple helix and results in the development of crimps. The crimps in the collagen fibrils allow the tendons to have some flexibility as well as a low compressive stiffness. In addition, because the tendon is a multi-stranded structure made up of many partially independent fibrils and fascicles, it does not behave as a single rod, and this property also contributes to its flexibility.

The proteoglycan components of tendons also are important to the mechanical properties. While the collagen fibrils allow tendons to resist tensile stress, the proteoglycans allow them to resist compressive stress. These molecules are very hydrophilic, meaning that they can absorb a large amount of water and therefore have a high swelling ratio. Since they are noncovalently bound to the fibrils, they may reversibly associate and disassociate so that the bridges between fibrils can be broken and reformed. This process may be involved in allowing the fibril to elongate and decrease in diameter under tension. However, the proteoglycans may also have a role in the tensile properties of tendon. The structure of tendon is effectively a fibre composite material, built as a series of hierarchical levels. At each level of the hierarchy, the collagen units are bound together by either collagen crosslinks, or the proteoglycans, to create a structure highly resistant to tensile load. The elongation and the strain of the collagen fibrils alone have been shown to be much lower than the total elongation and strain of the entire tendon under the same amount of stress, demonstrating that the proteoglycan-rich matrix must also undergo deformation, and stiffening of the matrix occurs at high strain rates. This deformation of the non-collagenous matrix occurs at all levels of the tendon hierarchy, and by modulating the organisation and structure of this matrix, the different mechanical properties required by different tendons can be achieved. Energy storing tendons have been shown to utilise significant amounts of sliding between fascicles to enable the high strain characteristics they require, whilst positional tendons rely more heavily on sliding between collagen fibres and fibrils. 
However, recent data suggests that energy storing tendons may also contain fascicles which are twisted, or helical, in nature - an arrangement that would be highly beneficial for providing the spring-like behaviour required in these tendons.

Mechanics

Tendons are viscoelastic structures, which means they exhibit both elastic and viscous behaviour. When stretched, tendons exhibit typical "soft tissue" behavior. The force-extension, or stress-strain, curve starts with a very low stiffness region, as the crimp structure straightens and the collagen fibres align, suggesting a negative Poisson's ratio in the fibres of the tendon. More recently, tests carried out in vivo (through MRI) and ex vivo (through mechanical testing of various cadaveric tendon tissue) have shown that healthy tendons are highly anisotropic and exhibit a negative Poisson's ratio (auxetic) in some planes when stretched up to 2% along their length, i.e. within their normal range of motion. After this 'toe' region, the structure becomes significantly stiffer and has a linear stress-strain curve until it begins to fail. The mechanical properties of tendons vary widely, as they are matched to the functional requirements of the tendon. The energy storing tendons tend to be more elastic, or less stiff, so they can more easily store energy, whilst the stiffer positional tendons tend to be a little more viscoelastic, and less elastic, so they can provide finer control of movement. A typical energy storing tendon will fail at around 12–15% strain and a stress in the region of 100–150 MPa, although some tendons are notably more extensible than this, for example the superficial digital flexor in the horse, which stretches in excess of 20% when galloping. Positional tendons can fail at strains as low as 6–8% but can have moduli in the region of 700–1000 MPa.
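The failure figures quoted above allow a rough secant-stiffness estimate (stress divided by strain at failure). The midpoint values below are illustrative choices rather than figures from the text, and a secant estimate ignores the compliant toe region, so this is only an order-of-magnitude sketch:

```python
# Rough arithmetic on the quoted failure figures (assumed midpoints).

# Energy storing tendon: failure at ~12-15% strain and ~100-150 MPa.
energy_secant_mpa = 125.0 / 0.135          # secant stiffness, ~926 MPa

# Positional tendon: modulus ~700-1000 MPa, failing at ~6-8% strain.
# Implied failure stress if the response were linear throughout:
positional_fail_stress_mpa = 850.0 * 0.07  # ~59.5 MPa

print(f"energy-storing secant modulus ~ {energy_secant_mpa:.0f} MPa")
print(f"positional implied failure stress ~ {positional_fail_stress_mpa:.1f} MPa")
```

Even this crude arithmetic shows the trade-off: positional tendons reach their (higher) stiffness at much lower strains and stresses, while energy storing tendons tolerate far larger extensions before failing.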

Several studies have demonstrated that tendons respond to changes in mechanical loading with growth and remodeling processes, much like bones. In particular, a study showed that disuse of the Achilles tendon in rats resulted in a decrease in the average thickness of the collagen fiber bundles comprising the tendon. In humans, an experiment in which people were subjected to a simulated micro-gravity environment found that tendon stiffness decreased significantly, even when subjects were required to perform resistance exercises. These effects have implications in areas ranging from treatment of bedridden patients to the design of more effective exercises for astronauts.

Healing

The tendons in the foot are highly complex and intricate. Therefore, the healing process for a broken tendon is long and painful. Most people who do not receive medical attention within the first 48 hours of the injury will suffer from severe swelling, pain, and a burning sensation where the injury occurred.

It was believed that tendons could not undergo matrix turnover and that tenocytes were not capable of repair. However, it has since been shown that, throughout the lifetime of a person, tenocytes in the tendon actively synthesize matrix components, while enzymes such as matrix metalloproteinases (MMPs) can degrade the matrix. Tendons are capable of healing and recovering from injuries in a process that is controlled by the tenocytes and their surrounding extracellular matrix.

The three main stages of tendon healing are inflammation, repair or proliferation, and remodeling, which can be further divided into consolidation and maturation. These stages can overlap with each other. In the first stage, inflammatory cells such as neutrophils are recruited to the injury site, along with erythrocytes. Monocytes and macrophages are recruited within the first 24 hours, and phagocytosis of necrotic materials at the injury site occurs. After the release of vasoactive and chemotactic factors, angiogenesis and the proliferation of tenocytes are initiated. Tenocytes then move into the site and start to synthesize collagen III. After a few days, the repair or proliferation stage begins. In this stage, the tenocytes are involved in the synthesis of large amounts of collagen and proteoglycans at the site of injury, and the levels of GAG and water are high. After about six weeks, the remodeling stage begins. The first part of this stage is consolidation, which lasts from about six to ten weeks after the injury. During this time, the synthesis of collagen and GAGs is decreased, and the cellularity is also decreased as the tissue becomes more fibrous as a result of increased production of collagen I and the fibrils become aligned in the direction of mechanical stress. The final maturation stage occurs after ten weeks, and during this time there is an increase in crosslinking of the collagen fibrils, which causes the tissue to become stiffer. Gradually, over about one year, the tissue will turn from fibrous to scar-like.
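The overlapping stages described above can be summarized as a small timeline table; the boundaries below are the approximate figures from the text, arranged as a sketch rather than a clinical reference:

```python
# Approximate tendon-healing timeline from the stages described above.
healing_stages = [
    # (stage, approximate start, approximate end)
    ("inflammation",              "day 0",      "first days"),
    ("repair / proliferation",    "a few days", "~6 weeks"),
    ("remodeling: consolidation", "~6 weeks",   "~10 weeks"),
    ("remodeling: maturation",    "~10 weeks",  "~1 year"),
]
for stage, start, end in healing_stages:
    print(f"{stage:26s} {start:>11s} -> {end}")
```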

Matrix metalloproteinases (MMPs) have a very important role in the degradation and remodeling of the ECM during the healing process after a tendon injury. Certain MMPs including MMP-1, MMP-2, MMP-8, MMP-13, and MMP-14 have collagenase activity, meaning that, unlike many other enzymes, they are capable of degrading collagen I fibrils. The degradation of the collagen fibrils by MMP-1 along with the presence of denatured collagen are factors that are believed to cause weakening of the tendon ECM and an increase in the potential for another rupture to occur. In response to repeated mechanical loading or injury, cytokines may be released by tenocytes and can induce the release of MMPs, causing degradation of the ECM and leading to recurring injury and chronic tendinopathies.

A variety of other molecules are involved in tendon repair and regeneration. There are five growth factors that have been shown to be significantly upregulated and active during tendon healing: insulin-like growth factor 1 (IGF-I), platelet-derived growth factor (PDGF), vascular endothelial growth factor (VEGF), basic fibroblast growth factor (bFGF), and transforming growth factor beta (TGF-β). These growth factors all have different roles during the healing process. IGF-1 increases collagen and proteoglycan production during the first stage of inflammation, and PDGF is also present during the early stages after injury and promotes the synthesis of other growth factors along with the synthesis of DNA and the proliferation of tendon cells. The three isoforms of TGF-β (TGF-β1, TGF-β2, TGF-β3) are known to play a role in wound healing and scar formation. VEGF is well known to promote angiogenesis and to induce endothelial cell proliferation and migration, and VEGF mRNA has been shown to be expressed at the site of tendon injuries along with collagen I mRNA. Bone morphogenetic proteins (BMPs) are a subgroup of the TGF-β superfamily that can induce bone and cartilage formation as well as tissue differentiation, and BMP-12 specifically has been shown to influence formation and differentiation of tendon tissue and to promote fibrogenesis.

Effects of activity on healing

In animal models, extensive studies have been conducted to investigate the effects of mechanical strain in the form of activity level on tendon injury and healing. While stretching can disrupt healing during the initial inflammatory phase, it has been shown that controlled movement of the tendons after about one week following an acute injury can help to promote the synthesis of collagen by the tenocytes, leading to increased tensile strength and diameter of the healed tendons and fewer adhesions than tendons that are immobilized. In chronic tendon injuries, mechanical loading has also been shown to stimulate fibroblast proliferation and collagen synthesis along with collagen realignment, all of which promote repair and remodeling. To further support the theory that movement and activity assist in tendon healing, it has been shown that immobilization of the tendons after injury often has a negative effect on healing. In rabbits, collagen fascicles that are immobilized have shown decreased tensile strength, and immobilization also results in lower amounts of water, proteoglycans, and collagen crosslinks in the tendons.

Several mechanotransduction mechanisms have been proposed as reasons for the response of tenocytes to mechanical force that enable them to alter their gene expression, protein synthesis, and cell phenotype, and eventually cause changes in tendon structure. A major factor is mechanical deformation of the extracellular matrix, which can affect the actin cytoskeleton and therefore affect cell shape, motility, and function. Mechanical forces can be transmitted by focal adhesion sites, integrins, and cell-cell junctions. Changes in the actin cytoskeleton can activate integrins, which mediate “outside-in” and “inside-out” signaling between the cell and the matrix. G-proteins, which induce intracellular signaling cascades, may also be important, and ion channels are activated by stretching to allow ions such as calcium, sodium, or potassium to enter the cell.

Society and culture

Sinew was widely used throughout pre-industrial eras as a tough, durable fiber. Specific uses include thread for sewing, binding feathers to arrows, and lashing tool blades to shafts. It is also recommended in survival guides as a material from which strong cordage can be made for items like traps or living structures. Tendon must be treated in specific ways to function usefully for these purposes. Inuit and other circumpolar peoples used sinew as the only cordage for all domestic purposes because no other suitable fiber sources were available in their ecological habitats. The elastic properties of particular sinews were also exploited in the composite recurved bows favoured by the steppe nomads of Eurasia and by Native Americans. The first stone-throwing artillery also used the elastic properties of sinew.

Sinew makes for an excellent cordage material for three reasons: it is extremely strong, it contains natural glues, and it shrinks as it dries, doing away with the need for knots.

Culinary uses

Tendon (in particular, beef tendon) is used as a food in some Asian cuisines (often served at yum cha or dim sum restaurants). One popular dish is suan bao niu jin, in which the tendon is marinated in garlic. It is also sometimes found in the Vietnamese noodle dish phở.

Clinical significance

Injury

Tendons are subject to many types of injuries. There are various forms of tendinopathies or tendon injuries due to overuse. These types of injuries generally result in inflammation and degeneration or weakening of the tendons, which may eventually lead to tendon rupture. Tendinopathies can be caused by a number of factors relating to the tendon extracellular matrix (ECM), and their classification has been difficult because their symptoms and histopathology often are similar.

The first category of tendinopathy is paratenonitis, which refers to inflammation of the paratenon, or paratendinous sheet located between the tendon and its sheath. The second is tendinosis, which refers to non-inflammatory injury to the tendon at the cellular level. The degradation is caused by damage to collagen, cells, and the vascular components of the tendon, and is known to lead to rupture. Observations of tendons that have undergone spontaneous rupture have shown the presence of collagen fibrils that are not in the correct parallel orientation or are not uniform in length or diameter, along with rounded tenocytes, other cell abnormalities, and the ingrowth of blood vessels. Other forms of tendinosis that have not led to rupture have also shown degeneration, disorientation, and thinning of the collagen fibrils, along with an increase in the amount of glycosaminoglycans between the fibrils. The third is paratenonitis with tendinosis, in which paratenon inflammation and tendon degeneration are both present. The last is tendinitis, which refers to degeneration with inflammation of the tendon as well as vascular disruption.

Tendinopathies may be caused by several intrinsic factors including age, body weight, and nutrition. The extrinsic factors are often related to sports and include excessive forces or loading, poor training techniques, and environmental conditions.

Other animals

In some organisms, notably birds and ornithischian dinosaurs, portions of the tendon can become ossified. In this process, osteocytes infiltrate the tendon and lay down bone as they would in a sesamoid bone such as the patella. In birds, tendon ossification primarily occurs in the hindlimb, while in ornithischian dinosaurs, ossified axial muscle tendons form a latticework along the neural and haemal spines on the tail, presumably for support.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1347 2022-04-13 02:47:59

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,384

Re: Miscellany

1321) Mitochondrion

Summary

A mitochondrion (pl. mitochondria) is a double-membrane-bound organelle found in most eukaryotic organisms. Mitochondria generate most of the cell's supply of adenosine triphosphate (ATP), subsequently utilized as a source of chemical energy, using the energy of oxygen released in aerobic respiration at the inner mitochondrial membrane. They were first discovered by Albert von Kölliker in 1857 in the voluntary muscles of insects. The term mitochondrion was coined by Carl Benda in 1898. The mitochondrion is popularly nicknamed the "powerhouse of the cell", a phrase coined by Philip Siekevitz in a 1957 article of the same name.

Some cells in some multicellular organisms lack mitochondria (for example, mature mammalian red blood cells). A large number of unicellular organisms, such as microsporidia, parabasalids and diplomonads, have reduced or transformed their mitochondria into other structures. One eukaryote, Monocercomonoides, is known to have completely lost its mitochondria, and one multicellular organism, Henneguya salminicola, is known to have retained mitochondrion-related organelles in association with a complete loss of their mitochondrial genome.

Mitochondria are commonly between 0.75 and 3 μm² in area, but vary considerably in size and structure. Unless specifically stained, they are not visible. In addition to supplying cellular energy, mitochondria are involved in other tasks, such as signaling, cellular differentiation, and cell death, as well as maintaining control of the cell cycle and cell growth. Mitochondrial biogenesis is in turn temporally coordinated with these cellular processes. Mitochondria have been implicated in several human disorders and conditions, such as mitochondrial diseases, cardiac dysfunction, heart failure and autism.

The number of mitochondria in a cell can vary widely by organism, tissue, and cell type. A mature red blood cell has no mitochondria, whereas a liver cell can have more than 2000. The mitochondrion is composed of compartments that carry out specialized functions. These compartments or regions include the outer membrane, intermembrane space, inner membrane, cristae and matrix.

Although most of a cell's DNA is contained in the cell nucleus, the mitochondrion has its own genome ("mitogenome") that is substantially similar to bacterial genomes. Mitochondrial proteins (proteins transcribed from mitochondrial DNA) vary depending on the tissue and the species. In humans, 615 distinct types of proteins have been identified from cardiac mitochondria, whereas in rats, 940 proteins have been reported. The mitochondrial proteome is thought to be dynamically regulated.

Details

Mitochondrion is a membrane-bound organelle found in the cytoplasm of almost all eukaryotic cells (cells with clearly defined nuclei), the primary function of which is to generate large quantities of energy in the form of adenosine triphosphate (ATP). Mitochondria are typically round to oval in shape and range in size from 0.5 to 10 μm. In addition to producing energy, mitochondria store calcium for cell signaling activities, generate heat, and mediate cell growth and death.

The number of mitochondria per cell varies widely—for example, in humans, erythrocytes (red blood cells) do not contain any mitochondria, whereas liver cells and muscle cells may contain hundreds or even thousands. The only eukaryotic organism known to lack mitochondria is the oxymonad Monocercomonoides species. Mitochondria are unlike other cellular organelles in that they have two distinct membranes and a unique genome and reproduce by binary fission; these features indicate that mitochondria share an evolutionary past with prokaryotes (single-celled organisms).

Most of the proteins and other molecules that make up mitochondria originate in the cell nucleus. However, 37 genes are contained in the human mitochondrial genome, 13 of which produce various components of the electron transport chain (ETC). In many organisms, the mitochondrial genome is inherited maternally. This is because the mother’s egg cell donates the majority of cytoplasm to the embryo, and mitochondria inherited from the father’s sperm are usually destroyed.

Role in energy production

The outer mitochondrial membrane is freely permeable to small molecules and contains special channels capable of transporting large molecules. In contrast, the inner membrane is far less permeable, allowing only very small molecules to cross into the gel-like matrix that makes up the organelle’s central mass. The matrix contains the deoxyribonucleic acid (DNA) of the mitochondrial genome and the enzymes of the tricarboxylic acid (TCA) cycle (also known as the citric acid cycle, or Krebs cycle), which metabolizes nutrients into by-products the mitochondrion can use for energy production.

The three processes of ATP production are glycolysis, the tricarboxylic acid cycle, and oxidative phosphorylation. In eukaryotic cells the latter two processes occur within mitochondria. Electrons that are passed through the electron transport chain ultimately generate free energy capable of driving the phosphorylation of ADP.

The processes that convert these by-products into energy occur primarily on the inner membrane, which is bent into folds known as cristae that house the protein components of the main energy-generating system of cells, the ETC. The ETC uses a series of oxidation-reduction reactions to move electrons from one protein component to the next, ultimately producing free energy that is harnessed to drive the phosphorylation of ADP (adenosine diphosphate) to ATP. This process, known as chemiosmotic coupling of oxidative phosphorylation, powers nearly all cellular activities, including those that generate muscle movement and fuel brain functions.

Role in disease

Mitochondrial DNA (mtDNA) is highly susceptible to mutations, largely because it does not possess the robust DNA repair mechanisms common to nuclear DNA. In addition, the mitochondrion is a major site for the production of reactive oxygen species (ROS; or free radicals) due to the high propensity for aberrant release of free electrons. While several different antioxidant proteins within the mitochondria scavenge and neutralize these molecules, some ROS may inflict damage to mtDNA. In addition, certain chemicals and infectious agents, as well as alcohol abuse, can damage mtDNA. In the latter instance, excessive ethanol intake saturates detoxification enzymes, causing highly reactive electrons to leak from the inner membrane into the cytoplasm or into the mitochondrial matrix, where they combine with other molecules to form numerous radicals.

Some diseases and disorders associated with mitochondrial dysfunction are caused by mutations in mtDNA. Disorders resulting from mutations in mtDNA demonstrate an alternative form of non-Mendelian inheritance, known as maternal inheritance, in which the mutation and disorder are passed from mothers to all of their children. The mutations generally affect the function of the mitochondrion, compromising, among other processes, the production of cellular ATP. Severity can vary widely for disorders resulting from mutations in mtDNA, a phenomenon generally thought to reflect the combined effects of heteroplasmy (i.e., mixed populations of both normal and mutant mitochondrial DNA in a single cell) and other confounding genetic or environmental factors. Although mtDNA mutations play a role in some mitochondrial diseases, the majority of the conditions actually are the result of mutations in genes in the nuclear genome, which encodes a number of proteins that are exported and transported to mitochondria in the cell.

There are numerous inherited and acquired mitochondrial diseases, many of which can emerge at any age and are enormously diverse in their clinical and molecular features. They range in severity from relatively mild disease that affects just a single organ to debilitating and sometimes fatal illness that affects multiple organs. Both inherited and acquired mitochondrial dysfunction is implicated in several diseases, including Alzheimer disease and Parkinson disease. The accumulation of mtDNA mutations throughout an organism’s life span is suspected to play an important role in aging, as well as in the development of certain cancers and other diseases. Because mitochondria also are a central component of apoptosis (programmed cell death), which is routinely used to rid the body of cells that are no longer useful or functioning properly, mitochondrial dysfunction that inhibits cell death can contribute to the development of cancer.

Research on human evolution

The maternal inheritance of mtDNA has proved vital to research on human evolution and migration. Maternal transmission allows similarities inherited in generations of offspring to be traced down a single line of ancestors for many generations. Research has shown that fragments of the mitochondrial genome carried by all humans alive today can be traced to a single woman ancestor living an estimated 150,000 to 200,000 years ago. Scientists suspect that this woman lived among other women but that the process of genetic drift (chance fluctuations in gene frequency that affect the genetic constitution of small populations) caused her mtDNA to randomly supersede that of other women as the population evolved.
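The drift process described above can be illustrated with a toy simulation (a sketch only; the population size, seed, and the Wright-Fisher-style model are illustrative assumptions, not part of the source): each child inherits its mtDNA lineage from a randomly chosen mother, and by chance alone one lineage eventually replaces all the others.

```python
import random

def generations_to_fixation(n_women: int, seed: int = 0) -> int:
    """Simulate neutral drift of mtDNA lineages until one lineage fixes.

    Each woman starts with a distinct lineage label; every generation,
    each of n_women children copies the lineage of a randomly chosen
    mother. Returns the number of generations until only one lineage
    remains (the simulated 'mitochondrial Eve' effect).
    """
    random.seed(seed)
    lineages = list(range(n_women))
    gens = 0
    while len(set(lineages)) > 1:
        # Every child picks a mother uniformly at random.
        lineages = [random.choice(lineages) for _ in range(n_women)]
        gens += 1
    return gens

if __name__ == "__main__":
    print(generations_to_fixation(50))
```

Even though no lineage has any survival advantage in this model, fixation of a single lineage is certain given enough generations, which is the essence of the drift argument above.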

Variations in mtDNA inherited by subsequent generations of humans have helped researchers decipher the geographical origins, as well as the chronological migrations, of different human populations. For example, studies of the mitochondrial genome indicate that humans migrating from Asia to the Americas 30,000 years ago may have been stranded on Beringia, a vast area that included a land bridge in the present-day Bering Strait, for as long as 15,000 years before arriving in the Americas.




#1348 2022-04-14 01:20:03

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,384

Re: Miscellany

1322) Sydney Opera House

Summary

The Sydney Opera House is a multi-venue performing arts centre in Sydney. Located on the banks of Sydney Harbour, it is widely regarded as one of the world's most famous and distinctive buildings and a masterpiece of 20th century architecture.

Designed by Danish architect Jørn Utzon, but completed by an Australian architectural team headed by Peter Hall, the building was formally opened on 20 October 1973 after a gestation beginning with Utzon's 1957 selection as winner of an international design competition. The Government of New South Wales, led by the premier, Joseph Cahill, authorised work to begin in 1958 with Utzon directing construction. The government's decision to build Utzon's design is often overshadowed by circumstances that followed, including cost and scheduling overruns as well as the architect's ultimate resignation.

The building and its surrounds occupy the whole of Bennelong Point on Sydney Harbour, between Sydney Cove and Farm Cove, adjacent to the Sydney central business district and the Royal Botanic Gardens, and near to the Sydney Harbour Bridge.

The building comprises multiple performance venues, which together host well over 1,500 performances annually, attended by more than 1.2 million people. Performances are presented by numerous performing artists, including three resident companies: Opera Australia, the Sydney Theatre Company and the Sydney Symphony Orchestra. As one of the most popular visitor attractions in Australia, the site is visited by more than eight million people annually, and approximately 350,000 visitors take a guided tour of the building each year. The building is managed by the Sydney Opera House Trust, an agency of the New South Wales State Government.

On 28 June 2007, the Sydney Opera House became a UNESCO World Heritage Site, having been listed on the (now defunct) Register of the National Estate since 1980, the National Trust of Australia register since 1983, the City of Sydney Heritage Inventory since 2000, the New South Wales State Heritage Register since 2003, and the Australian National Heritage List since 2005. The Opera House was also a finalist in the New7Wonders of the World campaign.

Details

Sydney Opera House is an opera house located on Port Jackson (Sydney Harbour), New South Wales, Australia. Its unique use of a series of gleaming white sail-shaped shells as its roof structure makes it one of the most-photographed buildings in the world.

The Sydney Opera House is situated on Bennelong Point (originally called Cattle Point), a promontory on the south side of the harbour just east of the Sydney Harbour Bridge. It was named for Bennelong, one of two Aboriginal people (the other man was named Colebee) who served as liaisons between Australia’s first British settlers and the local population. The small building where Bennelong lived once occupied the site. In 1821 Fort Macquarie was built there (razed 1902). In 1947 the resident conductor of the Sydney Symphony Orchestra, Eugene Goossens, identified the need of Australia’s leading city for a musical facility that would be a home not only to the symphony orchestra but also to opera and chamber music groups. The New South Wales government, agreeing that the city should aspire to recognition as a world cultural capital, gave official approval and in 1954 convened an advisory group, the Opera House Committee, to choose a site. Early the following year the committee recommended Bennelong Point.

In 1956 the state government sponsored an international competition for a design that was to include a building with two halls—one primarily for concerts and other large musical and dance productions and the other for dramatic presentations and smaller musical events. Architects from some 30 countries submitted 233 entries. In January 1957 the judging committee announced the winning entry, that of Danish architect Jørn Utzon, who won with a dramatic design showing a complex of two main halls side by side facing out to the harbour on a large podium. Each hall was topped with a row of sail-shaped interlocking panels that would serve as both roof and wall, to be made of precast concrete.

His winning entry brought Utzon international fame. Construction, however, which began in 1959, posed a variety of problems, many resulting from the innovative nature of the design. The opening of the Opera House was originally planned for Australia Day (January 26) in 1963, but cost overruns and structural engineering difficulties in executing the design troubled the course of the work, which faced many delays. The project grew controversial, and public opinion turned against it for a time. Amid continuing disagreements with the government authorities overseeing the project, Utzon resigned in 1966. Construction continued until September 1973 under the supervision of the structural engineering firm Ove Arup and Partners and three Sydney architects—Peter Hall, David Littlemore, and Lionel Todd.

In 1999 Utzon agreed to return as the building’s architect, overseeing an improvement project. He redesigned the former Reception Hall, and it was reopened in 2004 as the Utzon Room. It has an eastern view of Sydney Harbour and is used for receptions, seminars and other meetings, and chamber music performances. Two years later a new colonnade was completed, marking the first alteration to the Opera House’s exterior since 1973.

The Opera House is Sydney’s best-known landmark. It is a multipurpose performing arts facility whose largest venue, the 2,679-seat Concert Hall, is host to symphony concerts, choir performances, and popular music shows. Opera and dance performances, including ballet, take place in the Opera Theatre (renamed the Joan Sutherland Theatre in 2012 as a tribute to the celebrated Australian operatic soprano), which seats just over 1,500. There are also three theatres of different sizes and configurations for stage plays, film screenings, and smaller musical performances. The Forecourt, on the southeastern end of the complex, is used for outdoor performances. The building also houses restaurants and a professional recording studio. In 2007 the Opera House was designated a UNESCO World Heritage site.




#1349 2022-04-14 22:05:38

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,384

Re: Miscellany

1323) Vein

Summary

Vein, in human physiology, is any of the vessels that, with four exceptions, carry oxygen-depleted blood to the right upper chamber (atrium) of the heart. The four exceptions, the pulmonary veins, transport oxygenated blood from the lungs to the left upper chamber of the heart. The oxygen-depleted blood transported by most veins is collected from the networks of microscopic vessels called capillaries by thread-sized veins called venules.

As in the arteries, the walls of veins have three layers, or coats: an inner layer, or tunica intima; a middle layer, or tunica media; and an outer layer, or tunica adventitia. Each coat has a number of sublayers. The tunica intima differs from the inner layer of an artery: many veins, particularly in the arms and legs, have valves to prevent backflow of blood, and the elastic membrane lining the artery is absent in the vein, which consists primarily of endothelium and scant connective tissue. The tunica media, which in an artery is composed of muscle and elastic fibres, is thinner in a vein and contains less muscle and elastic tissue, and proportionately more collagen fibres (collagen, a fibrous protein, is the main supporting element in connective tissue). The outer layer (tunica adventitia) consists chiefly of connective tissue and is the thickest layer of the vein. As in arteries, there are tiny vessels called vasa vasorum that supply blood to the walls of the veins and other minute vessels that carry blood away. Veins are more numerous than arteries and have thinner walls owing to lower blood pressure. They tend to parallel the course of arteries.

Details

Veins are blood vessels in humans and most other animals that carry blood towards the heart. Most veins carry deoxygenated blood from the tissues back to the heart; exceptions are the pulmonary and umbilical veins, both of which carry oxygenated blood to the heart. In contrast to veins, arteries carry blood away from the heart.

Veins are less muscular than arteries and are often closer to the skin. There are valves (called pocket valves) in most veins to prevent backflow.

Structure

Veins are present throughout the body as tubes that carry blood back to the heart. Veins are classified in a number of ways, including superficial vs. deep, pulmonary vs. systemic, and large vs. small.

* Superficial veins are those closer to the surface of the body, and have no corresponding arteries.
* Deep veins are deeper in the body and have corresponding arteries.
* Perforator veins drain from the superficial to the deep veins. These are usually referred to in the lower limbs and feet.
* Communicating veins are veins that directly connect superficial veins to deep veins.
* Pulmonary veins are a set of veins that deliver oxygenated blood from the lungs to the heart.
* Systemic veins drain the tissues of the body and deliver deoxygenated blood to the heart.

Most veins are equipped with one-way valves, similar to a duckbill valve, to prevent blood flowing in the reverse direction.

Veins are translucent, so the color a vein appears from an organism's exterior is determined in large part by the color of venous blood, which is usually dark red as a result of its low oxygen content. Veins often appear blue through the skin, but this is largely an optical effect of how the skin absorbs and scatters light rather than the color of the blood itself. The apparent color of a vein is also affected by the characteristics of a person's skin, how much oxygen is being carried in the blood, and how big and deep the vessels are. When a vein is drained of blood and removed from an organism, it appears grey-white.

Venous system

The largest veins in the human body are the venae cavae. These are two large veins which enter the right atrium of the heart from above and below. The superior vena cava carries blood from the arms and head to the right atrium of the heart, while the inferior vena cava carries blood from the legs and abdomen to the heart. The inferior vena cava is retroperitoneal and runs to the right and roughly parallel to the abdominal aorta along the spine. Large veins feed into these two veins, and smaller veins into these. Together this forms the venous system.

Whilst the main veins hold a relatively constant position, the position of veins can vary considerably from person to person.

The pulmonary veins carry relatively oxygenated blood from the lungs to the heart. The superior and inferior venae cavae carry relatively deoxygenated blood from the upper and lower systemic circulations, respectively.

The portal venous system is a series of veins or venules that directly connect two capillary beds. Examples of such systems include the hepatic portal vein and hypophyseal portal system.

The peripheral veins carry blood from the limbs, including the hands and feet.

Microanatomy

Microscopically, veins have a thick outer layer made of connective tissue, called the tunica externa or tunica adventitia. During procedures requiring venous access such as venipuncture, one may notice a subtle "pop" as the needle penetrates this layer. The middle layer of bands of smooth muscle are called tunica media and are, in general, much thinner than those of arteries, as veins do not function primarily in a contractile manner and are not subject to the high pressures of systole, as arteries are. The interior is lined with endothelial cells called tunica intima. The precise location of veins varies much more from person to person than that of arteries.

Function

Veins serve to return blood from organs to the heart. Veins are also called "capacitance vessels" because most of the blood volume (60%) is contained within veins. In systemic circulation oxygenated blood is pumped by the left ventricle through the arteries to the muscles and organs of the body, where its nutrients and gases are exchanged at capillaries. After taking up cellular waste and carbon dioxide in capillaries, blood is channeled through vessels that converge with one another to form venules, which continue to converge and form the larger veins. The de-oxygenated blood is taken by veins to the right atrium of the heart, which transfers the blood to the right ventricle, where it is then pumped through the pulmonary arteries to the lungs. In pulmonary circulation the pulmonary veins return oxygenated blood from the lungs to the left atrium, which empties into the left ventricle, completing the cycle of blood circulation.

The return of blood to the heart is assisted by the action of the muscle pump and by the thoracic pump action of breathing during respiration. Standing or sitting for a prolonged period of time can cause low venous return from venous pooling (vascular shock). Fainting can occur, but usually baroreceptors within the aortic sinuses initiate a baroreflex, such that angiotensin II and norepinephrine stimulate vasoconstriction and heart rate increases to return blood flow. Neurogenic and hypovolaemic shock can also cause fainting. In these cases, the smooth muscles surrounding the veins become slack and the veins fill with the majority of the blood in the body, keeping blood away from the brain and causing unconsciousness. Jet pilots wear pressurized suits to help maintain their venous return and blood pressure.

The arteries are perceived as carrying oxygenated blood to the tissues, while veins carry deoxygenated blood back to the heart. This is true of the systemic circulation, by far the larger of the two circuits of blood in the body, which transports oxygen from the heart to the tissues of the body. However, in pulmonary circulation, the arteries carry deoxygenated blood from the heart to the lungs, and veins return blood from the lungs to the heart. The difference between veins and arteries is their direction of flow (out of the heart by arteries, returning to the heart for veins), not their oxygen content. In addition, deoxygenated blood that is carried from the tissues back to the heart for reoxygenation in the systemic circulation still carries some oxygen, though it is considerably less than that carried by the systemic arteries or pulmonary veins.

Although most veins take blood back to the heart, there is an exception. Portal veins carry blood between capillary beds. Capillary beds are networks of blood vessels that link venules to arterioles and allow for the exchange of materials across the membrane from the blood to the tissues, and vice versa. For example, the hepatic portal vein takes blood from the capillary beds in the digestive tract and spleen and transports it to the capillary beds in the liver. From there, the blood is drained by the hepatic veins and returned to the heart. Since this is an important function in mammals, damage to the hepatic portal vein can be dangerous. Blood clotting in the hepatic portal vein can cause portal hypertension, which results in a decrease in blood flow to the liver.

Cardiac veins

The vessels that remove the deoxygenated blood from the heart muscle are known as cardiac veins. These include the great cardiac vein, the middle cardiac vein, the small cardiac vein, the smallest cardiac veins, and the anterior cardiac veins. Coronary veins carry blood with a poor level of oxygen, from the myocardium to the right atrium. Most of the blood of the coronary veins returns through the coronary sinus. The anatomy of the veins of the heart is very variable, but generally it is formed by the following veins: heart veins that go into the coronary sinus: the great cardiac vein, the middle cardiac vein, the small cardiac vein, the posterior vein of the left ventricle, and the vein of Marshall. Heart veins that go directly to the right atrium: the anterior cardiac veins, the smallest cardiac veins (Thebesian veins).

Clinical significance

Diseases

Venous insufficiency

Venous insufficiency is the most common disorder of the venous system, and usually manifests as spider veins or varicose veins. Several varieties of treatment are used, depending on the patient's particular type and pattern of veins and on the physician's preferences. Treatment can include endovenous thermal ablation using radiofrequency or laser energy, vein stripping, ambulatory phlebectomy, foam sclerotherapy, lasers, or compression.

Postphlebitic syndrome is venous insufficiency that develops following deep vein thrombosis.

Deep vein thrombosis

Deep vein thrombosis is a condition in which a blood clot forms in a deep vein, usually in the veins of the legs, although it can also occur in the veins of the arms. Immobility, active cancer, obesity, traumatic damage and congenital disorders that make clots more likely are all risk factors for deep vein thrombosis. It can cause the affected limb to swell, and cause pain and an overlying skin rash. In the worst case, a deep vein thrombosis can extend, or a part of the clot can break off and lodge in the lungs, a condition called pulmonary embolism.

The decision to treat deep vein thrombosis depends on its size, a person's symptoms, and their risk factors. It generally involves anticoagulation to prevent clots from extending or to reduce the size of the clot.

Portal hypertension

The portal veins are found within the abdomen and carry blood to the liver. Portal hypertension is associated with cirrhosis or other disease of the liver, or with conditions such as an obstructing clot (Budd–Chiari syndrome) or compression from tumours or tuberculosis lesions. When the pressure increases in the portal veins, a collateral circulation develops, causing visible veins such as oesophageal varices.

Other

Thrombophlebitis is an inflammatory condition of the veins related to blood clots.

Imaging

Ultrasound, particularly duplex ultrasound, is a common way that veins can be seen.

Veins of clinical significance

The Batson venous plexus, or simply Batson's plexus, runs through the inner vertebral column, connecting the thoracic and pelvic veins. These veins are notable for being valveless, which is believed to be the reason certain cancers metastasize along this route.

The great saphenous vein is the most important superficial vein of the lower limb. First described by the Persian physician Avicenna, this vein derives its name from the word safina, meaning "hidden". This vein is "hidden" in its own fascial compartment in the thigh and exits the fascia only near the knee. Incompetence of this vein is an important cause of varicose veins of lower limbs.

The Thebesian veins within the myocardium of the heart are valveless veins that drain directly into the chambers of the heart. The coronary veins all empty into the coronary sinus which empties into the right atrium.

The dural venous sinuses within the dura mater surrounding the brain receive blood from the brain and also are a point of entry of cerebrospinal fluid from arachnoid villi absorption. Blood eventually enters the internal jugular vein.

Phlebology

Phlebology is the medical specialty devoted to the diagnosis and treatment of venous disorders. A medical specialist in phlebology is termed a phlebologist; an image of the veins is called a phlebograph.

The American Medical Association added phlebology to its list of self-designated practice specialties in 2005. In 2007 the American Board of Phlebology (ABPh), subsequently known as the American Board of Venous & Lymphatic Medicine (ABVLM), was established to improve the standards of phlebologists and the quality of their patient care by establishing a certification examination and requiring maintenance of certification. Although, as of 2017, not a Member Board of the American Board of Medical Specialties (ABMS), the American Board of Venous & Lymphatic Medicine uses a certification exam based on ABMS standards.

The American Vein and Lymphatic Society (AVLS), formerly the American College of Phlebology (ACP), is one of the largest medical societies in the world for physicians and allied health professionals working in the field of phlebology, with 2,000 members. The AVLS encourages education and training to improve the standards of medical practitioners and the quality of patient care.

The American Venous Forum (AVF) is a medical society for physicians and allied health professionals dedicated to improving the care of patients with venous and lymphatic disease. The majority of its members manage the entire spectrum of venous and lymphatic diseases – from varicose veins to congenital abnormalities to deep vein thrombosis to chronic venous diseases. Founded in 1987, the AVF encourages research, clinical innovation, hands-on education, data collection and patient outreach.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1350 2022-04-15 22:02:43

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,384

Re: Miscellany

1324) BAFTA: British Academy of Film and Television Arts

The British Academy of Film and Television Arts (BAFTA) is an independent charity that supports, develops, and promotes the art forms of the moving image (film, television and games) in the United Kingdom. In addition to its annual award ceremonies, BAFTA has an international programme of learning events and initiatives offering access to talent through workshops, masterclasses, scholarships, lectures, and mentoring schemes in the United Kingdom and the United States.

BAFTA's annual film awards ceremony, the British Academy Film Awards, has taken place since 1949, while their annual television awards ceremony, the British Academy Television Awards, has taken place since 1955.

Origins

BAFTA started out as the British Film Academy, founded in 1947 by David Lean, Alexander Korda, Roger Manvell, Laurence Olivier, Emeric Pressburger, Michael Powell, Michael Balcon, Carol Reed, and other major figures of the British film industry.

David Lean was the founding chairman of the academy. The first Film Awards ceremony took place in May 1949, honouring the films The Best Years of Our Lives, Odd Man Out and The World Is Rich.

The Guild of Television Producers and Directors was set up in 1953 with the first awards ceremony in October 1954, and in 1958 merged with the British Film Academy to form the Society of Film and Television Arts, whose inaugural meeting was held at Buckingham Palace and presided over by the Duke of Edinburgh.

In 1976, Queen Elizabeth, The Duke of Edinburgh, The Princess Royal and The Earl Mountbatten of Burma officially opened the organisation's headquarters at 195 Piccadilly, London, and in March the society became the British Academy of Film and Television Arts.

Abbreviation  :  BAFTA
Formation  :  1947; 75 years ago (as British Film Academy)
Type  :  Film, television show and games organisation
Purpose  :  Supports, promotes and develops the art forms of the moving image – film, television and video games – by identifying and rewarding excellence, inspiring practitioners and benefiting the public
Headquarters  :  195 Piccadilly, London, W1J, United Kingdom
Region served  :  United Kingdom, United States
Membership  :  Approximately 6,500
Official language  :  English

Charitable mission

BAFTA is a membership organisation comprising approximately 8,000 individuals worldwide: creatives and professionals working in, and contributing to, the film, television and games industries in the UK. In 2005, it placed an overall cap on worldwide voting membership, which stood at approximately 6,500 as of 2017.

BAFTA does not receive any funding from the government; it relies on income from membership subscriptions, individual donations, trusts, foundations and corporate partnerships to support its ongoing outreach work.

BAFTA has offices in Scotland and Wales in the UK, in Los Angeles and New York in the United States and runs events in Hong Kong and mainland China.

Learning events and initiatives

In addition to its high-profile awards ceremonies, BAFTA manages a year-round programme of educational events and initiatives including film screenings and Q&As, tribute evenings, interviews, lectures, and debates with major industry figures. With over 250 events a year, BAFTA's stated aim is to inspire and inform the next generation of talent by providing a platform for some of the world's most talented practitioners to pass on their knowledge and experience.

Many of these events are free to watch online at BAFTA Guru and via its official channel on YouTube.

Scholarships

BAFTA runs a number of scholarship programmes across the UK, US and Asia.

Launched in 2012, the UK programme enables talented British citizens in need of financial support to take an industry-recognised course in film, television or games in the UK. Each BAFTA Scholar receives up to £12,000 towards their annual course fees, mentoring support from a BAFTA member, and free access to BAFTA events around the UK. Since 2013, three students every year have received one of the Prince William Scholarships in Film, Television and Games, supported by BAFTA and Warner Bros. These scholarships are awarded in the name of Prince William, Duke of Cambridge, in his role as president of BAFTA.

In the US, BAFTA Los Angeles offers financial support and mentorship to British graduate students studying in the US, as well as scholarships to provide financial aid to local LA students from the inner city. BAFTA New York's Media Studies Scholarship Program, set up in 2012, supports students pursuing media studies at undergraduate and graduate level institutions within the New York City area and includes financial aid and mentoring opportunities.

Since 2015, BAFTA has been offering scholarships for British citizens to study in China, and vice versa.

Awards

BAFTA presents awards for film, television and games, including children's entertainment, at a number of annual ceremonies across the UK and in Los Angeles, USA.

BAFTA awards

The BAFTA award trophy is a mask, designed by American sculptor Mitzi Cunliffe. When the Guild merged with the British Film Academy to become the Society of Film and Television Arts, later the British Academy of Film and Television Arts, the first "BAFTA award" was presented to Sir Charles Chaplin on the occasion of his Academy Fellowship that year.

A BAFTA award – including the bronze mask and marble base – weighs 3.7 kg (8.2 lb) and measures 27 cm (11 in) high x 14 cm (5.5 in) wide x 8 cm (3.1 in) deep; the mask itself measures 16 cm (6.3 in) high x 14 cm (5.5 in) wide. They are made of phosphor bronze and cast in a Middlesex foundry.

In 2017, the British Academy of Film and Television Arts introduced new entry rules for British films starting from the 2018/19 season to foster diversity.

Awards ceremonies

Film Awards

BAFTA's annual film awards ceremony is known as the British Academy Film Awards, or "the BAFTAs", and rewards the best work of any nationality seen on British cinema screens during the preceding year. In 1949 the British Film Academy, as it was then known, presented the first awards for films made in 1947 and 1948. From 2000 the ceremony was held at the Odeon cinema in Leicester Square, and from 2008 to 2016 at the Royal Opera House in London's Covent Garden.

Since 2017, the ceremony has been held at the Royal Albert Hall. It was formerly held during April or May of each year, but since 2002 it has been held in February so as to precede the Academy of Motion Picture Arts and Sciences' (AMPAS) Academy Awards, or Oscars, even though it is not a formal precursor to the Academy Awards.

For a film to be considered for a BAFTA nomination, its first public exhibition must be in a cinema, and it must have a UK theatrical release of no fewer than seven days within the calendar year that corresponds to the upcoming awards. A film must be of feature length, and films from all countries are eligible in all categories, with the exception of the Alexander Korda Award for Outstanding British Film and Outstanding Debut, which are for British films or individuals only.

Television Awards and Television Craft Awards

The British Academy Television Awards ceremony usually takes place during April or May, with its sister ceremony, the British Academy Television Craft Awards, usually occurring within a few weeks of it.

The Television Awards, celebrating the best TV programmes and performances of the past year, are also often referred to simply as "the BAFTAs" or, to differentiate them from the movie awards, the "BAFTA Television Awards". They have been awarded annually since 1954. The first ever ceremony consisted of six categories. Until 1958, they were awarded by the Guild of Television Producers and Directors.

From 1968 until 1997, BAFTA's Film and Television Awards were presented together, but from 1998 onwards they were presented at two separate ceremonies.

The Television Craft Awards celebrate the talent behind the programmes, such as individuals working in visual effects, production, and costume design.

Only British programmes are eligible – with the potential exception of the publicly voted Audience Award – but any cable, satellite, terrestrial or digital television stations broadcasting in the UK are eligible to submit entries, as are independent production companies who have produced programming for the channels. Individual performances can either be entered by the performers themselves or by the broadcasters. The programmes being entered must have been broadcast on or between 1 January and 31 December of the year preceding the awards ceremony.

Since 2014 the "BAFTA Television Awards" have been open to TV programmes which are only broadcast online.

Games Awards

The British Academy Games Awards ceremony traditionally takes place in March, shortly after the Film Awards ceremony in February.

BAFTA first recognised video games and other interactive media at its inaugural BAFTA Interactive Entertainment Awards ceremony during 1998, the first major change of its rules since the admittance of television thirty years earlier. Among the first winning games were GoldenEye 007, Gran Turismo and interactive comedy MindGym, sharing the spotlight with the BBC News Online website which won the news category four years consecutively. These awards allowed the academy to recognise new forms of entertainment that were engaging new audiences and challenging traditional expressions of creativity.

During 2003, the sheer ubiquity of interactive forms of entertainment and the breadth of genres and types of video games outgrew the combined ceremony, and the event was divided into the BAFTA Video Games Awards and the BAFTA Interactive Awards. Despite making headlines with high-profile winners like Halo 2 and Half-Life 2, the interactive division was discontinued and disappeared from BAFTA's publicity material after only two ceremonies.

During 2006, BAFTA announced its decision "to give video games equal status with film and television", and the academy now advertises video games as its third major topic in recognition of their importance as an art form of the moving image. The same year, the ceremony was held at the Roundhouse on Chalk Farm Road in North London on 5 October, and was televised for the first time on 17 October on the digital channel E4.

Between 2009 and 2019, the ceremonies were held at the London Hilton Park Lane and Tobacco Dock, London, and were hosted by Dara Ó Briain and Rufus Hound. In 2020, as a result of the COVID-19 pandemic, it was announced that the ceremony was changing format from a live red-carpet ceremony at the Queen Elizabeth Hall in London to an online show. The online show was presented by Dara Ó Briain from his home and was watched by 720,000 viewers globally. In 2021 the 17th British Academy Games Awards was hosted by arts and entertainment presenter Elle Osili-Wood and was watched by a global audience of 1.5 million.

Children's Awards

The British Academy Children's Awards are presented annually during November to reward excellence in the art forms of the moving image intended for children. They have been awarded annually since 1969.

The academy has a long history of recognising and rewarding children's programming, presenting two awards at the 1969 ceremony – The Flame of Knowledge Award for Schools Programmes and the Harlequin Award for Children's Programmes.

As of 2010 the Awards ceremony includes 19 categories across movies, television, video games and online content.

Since 2007 the Children's Awards have included a Kids Vote award, voted for by children between seven and 14. BAFTA Kids Vote is the annual competition for children aged between seven and 12, now part of BAFTA's year-round BAFTA Kids programme of activity helping children "discover, explore and find out more about the worlds of films, television and games by providing content, information and experiences". The CBBC Me and My Movie award, a children's filmmaking initiative to inspire and enable children to make their own movies and tell their own stories, has been discontinued.

BAFTA Student Film Awards

BAFTA also hosts the annual BAFTA Student Film Awards as a showcase for rising industry talent. The animation award was sponsored in 2017 and 2018 by the animation studio Laika.



