1604) Crown glass (Optics)
Summary
Crown glass is a type of optical glass used in lenses and other optical components. It has relatively low refractive index (≈1.52) and low dispersion (with Abbe numbers around 60). Crown glass is produced from alkali-lime silicates containing approximately 10% potassium oxide and is one of the earliest low dispersion glasses.
As well as the specific material named crown glass, there are other optical glasses with similar properties that are also called crown glasses. Generally, this is any glass with an Abbe number in the range 50 to 85. For example, the borosilicate glass Schott BK7 (Schott glass code 517642: the first three digits give its refractive index, 1.517, and the last three its Abbe number, 64.2) is an extremely common crown glass, used in precision lenses. Borosilicates contain about 10% boric oxide, have good optical and mechanical characteristics, and are resistant to chemical and environmental damage. Other additives used in crown glasses include zinc oxide, phosphorus pentoxide, barium oxide, fluorite and lanthanum oxide.
BAK-4 barium crown glass (Schott glass code 569560, indicating a refractive index of 1.569 and an Abbe number of 56.0) has a higher index of refraction than BK7 and is used for prisms in high-end binoculars. In that application, it gives better image quality and a round exit pupil.
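The six-digit code convention above lends itself to a quick check. The following minimal Python sketch decodes such a code into a refractive index and an Abbe number; the helper name is invented here, and the decoding rule (index = 1 + first three digits / 1000, Abbe number = last three digits / 10) is simply inferred from the two examples quoted above rather than taken from any Schott documentation.

```python
def decode_glass_code(code: str):
    """Decode a six-digit Schott-style glass code into (refractive index, Abbe number).

    Rule inferred from the examples in the text:
    "517642" -> n = 1.517, V = 64.2 (BK7); "569560" -> n = 1.569, V = 56.0 (BaK-4).
    """
    if len(code) != 6 or not code.isdigit():
        raise ValueError("expected a six-digit code such as '517642'")
    n = 1 + int(code[:3]) / 1000   # first three digits encode the refractive index
    v = int(code[3:]) / 10         # last three digits encode the Abbe number
    return n, v

print(decode_glass_code("517642"))  # (1.517, 64.2) -> a crown glass (Abbe number > 50)
print(decode_glass_code("569560"))  # (1.569, 56.0)
```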
A concave lens of flint glass is commonly combined with a convex lens of crown glass to produce an achromatic doublet. The dispersions of the glasses partially compensate for each other, producing reduced chromatic aberration compared to a singlet lens with the same focal length.
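To see how the compensation works quantitatively, a thin-lens achromatic doublet is commonly designed so that the element powers satisfy phi1 + phi2 = phi (the desired total power) and phi1/V1 + phi2/V2 = 0, where V1 and V2 are the Abbe numbers; the second relation cancels the first-order chromatic focal shift. The sketch below solves these two relations for a crown/flint pair. The crown Abbe number matches the BK7 value quoted above, while the flint value (about 36) is an illustrative assumption, not a figure from the text.

```python
def achromat_powers(total_focal_length_mm: float, v_crown: float, v_flint: float):
    """Thin-lens achromatic doublet: split the total power phi between a crown and a
    flint element so that phi_c/V_c + phi_f/V_f = 0 (chromatic focal shift cancels)."""
    phi = 1.0 / total_focal_length_mm
    phi_crown = phi * v_crown / (v_crown - v_flint)
    phi_flint = -phi * v_flint / (v_crown - v_flint)
    return 1.0 / phi_crown, 1.0 / phi_flint   # element focal lengths in mm

# Illustrative values: BK7-like crown (V ~ 64.2) paired with a flint of V ~ 36.4,
# targeting a 100 mm doublet. The crown element comes out positive (convex) and the
# flint element negative (concave), as described above.
f_crown, f_flint = achromat_powers(100.0, 64.2, 36.4)
print(round(f_crown, 1), round(f_flint, 1))  # about 43.3 and -76.4
```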
Details
Crown glass is handmade glass of soda-lime composition, made for domestic glazing or optical uses. The technique of making crown glass remained essentially unchanged from the earliest times: a bubble of glass, blown into a pear shape and flattened, was transferred to the glassmaker’s pontil (a solid iron rod), reheated and rotated at speed until centrifugal force formed a large circular plate of up to 60 inches in diameter. The finished “table” of glass was thin, lustrous, highly polished (by “fire-polish”), and had concentric ripple lines, the result of spinning; crown glass was slightly convex, and in the centre of the crown was the bull’s eye, a thickened part where the pontil was attached. This was often cut out as a defect, but later it came to be prized as evidence of antiquity. Despite the availability of cheaper cylinder glass (cast and rolled glass had been invented in the 17th century), crown glass remained particularly popular for its superior quality and clarity. The crown process, which may have been Syrian in origin, had been in use in Europe since at least the 14th century, when the industry was centred in Normandy, where a few families of glassblowers monopolized the trade and enjoyed a kind of aristocratic status. From about the mid-17th century the crown glass process was gradually replaced by easier methods of manufacturing larger glass sheets. Window glass of note, however, was made by this method in the U.S. by the Boston Crown Glass Company from 1793 to about 1827.
Crown glass has optical properties that complement those of the denser flint glass when the two kinds are used together to form lenses corrected for chromatic aberration. Special ingredients may be added to crown glass to achieve particular optical qualities.
1605) Ferry
Summary
A ferry is a ship, watercraft or amphibious vehicle used to carry passengers, and sometimes vehicles and cargo, across a body of water. A small passenger ferry with many stops, such as in Venice, Italy, is sometimes called a water bus or water taxi.
Ferries form a part of the public transport systems of many waterside cities and islands, allowing direct transit between points at a capital cost much lower than bridges or tunnels. Ship connections over much longer distances (such as across large bodies of water like the Mediterranean Sea) may also be called ferry services, and many carry vehicles.
Details
A ferry is a place where passengers, freight, or vehicles are carried by boat across a river, lake, arm of the sea, or other body of water. The term applies both to the place where the crossing is made and to the boat used for the purpose. By extension of the original meaning, ferry also denotes a short overwater flight by an airplane carrying passengers or freight or the flying of planes from one point to another as a means of delivering them.
Perhaps the most prominent early use of the term appears in Greek mythology, where Charon the ferryman carried the souls of the dead across the River Styx. Ferries were of great importance in ancient and medieval history, and their importance has persisted into the modern era. Before engineers learned to build permanent bridges over large bodies of water or construct tunnels under them, ferries offered the only means of crossing. Ferries include a wide variety of vessels, from the simplest canoes or rafts to large motor-driven ferries capable of carrying trucks and railway cars across vast expanses of water. The term is frequently used in combination with other words, as in the expressions train ferry, car ferry, and channel ferry.
In the early history of the United States, the colonists found that the coasts of the New World were broken by great bays and inlets and that the interior of the continent was divided by rivers that defied bridging for many generations. Crossing these rivers and bays was a necessity, however. At first, small boats propelled by oars or poles were the most common form of ferry. They were replaced later by large flatboats propelled by a form of long oar called a sweep. Sails were used when conditions were favourable and in some rivers the current itself provided the means of propulsion.
Horses were used on some ferries to walk a treadmill geared to paddle wheels; in others, horses were driven in a circle around a capstan that hauled in ropes and towed the ferry along its route. The first steam ferryboat in the United States was operated by John Fitch on the Delaware River in 1790, but it was not financially successful. The advent of steam power greatly improved ferryboats; they became larger, faster, and more reliable and began to take on a design different from other steamers. At cities divided by a river and where hundreds of people and many horse-drawn wagons had to cross the river daily, the typical U.S. ferryboat took shape. It was a double-ended vessel with side paddle wheels and a rudder and pilothouse on both ends. The pilothouses were on an upper deck, and the lower deck was arranged to hold as many vehicles as possible. A narrow passageway ran along each side of the lower deck with stairways to give passengers access to the upper deck. The engine was of the walking beam type with the beam mounted on a pedestal so high that it was visible above the upper deck.
Terminals to accommodate such ferries were built at each end of their routes. In order to dock promptly and permit wheeled vehicles to move on and off quickly, a platform with one end supported by a pivot on land and the other end supported by floats in the water was sometimes provided. As roads improved and the use of automobiles and large motor trucks increased, ferries became larger and faster, but the hull arrangement remained the same. High-speed steam engines with propellers on both ends of the ferry were used. Steam engines gave way to diesel engines, diesel-electric drives, and, in some cases, hovercraft. Several states organized commissions that took over ferries from private ownership and operated them for the public; these commissions frequently also operated bridges, public roads, and vehicular tunnels. The increase in the use of motor vehicles so overtaxed many ferries that they could not handle the load. As a result, more bridges and tunnels were built, and ferries began to disappear, but their use on some inland rivers and lakes still continues. Commuter ferries remained popular in densely populated coastal communities.
1606) Scholarship
Summary
Scholarship:
1. learning; knowledge acquired by study; the academic attainments of a scholar.
2. a sum of money or other aid granted to a student, because of merit, need, etc., to pursue his or her studies.
3. the position or status of such a student.
4. a foundation to provide financial assistance to students.
Details
A scholarship is a form of financial aid awarded to students for further education. Generally, scholarships are awarded based on a set of criteria such as academic merit, diversity and inclusion, athletic skill, and financial need.
Scholarship criteria usually reflect the values and goals of the donor of the award. While scholarship recipients are not required to repay scholarships, the awards may require that the recipient continue to meet certain requirements during their period of support, such as maintaining a minimum grade point average or engaging in a certain activity (e.g., playing on a school sports team for athletic scholarship holders).
Scholarships also range in generosity, from covering partial tuition all the way to a "full ride" covering tuition, accommodation, and other expenses.
Some prestigious, highly competitive scholarships are well known even outside the academic community, such as the Fulbright Scholarship and the Rhodes Scholarship at the graduate level, and the Robertson, Morehead-Cain and Jefferson Scholarships at the undergraduate level.
Scholarships vs. grants
While the terms scholarship and grant are frequently used interchangeably, they are distinctly different. Where grants are offered based exclusively on financial need, scholarships may have a financial need component but rely on other criteria as well.
* Academic scholarships typically use a minimum grade-point average or standardized test score such as the ACT or SAT to narrow down awardees.
* Athletic scholarships are generally based on athletic performance of a student and used as a tool to recruit high-performing athletes for their school's athletic teams.
* Merit scholarships can be based on a number of criteria, including performance in a particular school subject or club participation or community service.
A federal Pell Grant can be awarded to someone planning to receive their undergraduate degree and is based solely on their financial need.
Types
[Image: A Navy Rear Admiral presents a Midshipman with a ceremonial cheque symbolizing her $180,000 Navy Reserve Officers Training Corps scholarship.]
The most common scholarships may be classified as:
* Merit-based: These awards are based on a student's academic, artistic, athletic, or other abilities, and often factor in an applicant's extracurricular activities and community service record. Most such merit-based scholarships are paid directly by the institution the student attends, rather than issued directly to the student.
* Need-based: Some private need-based awards are confusingly called scholarships and require the results of a FAFSA (which determines the family's expected family contribution). However, scholarships are often merit-based, while grants tend to be need-based.
* Student-specific: These are scholarships for which applicants must initially qualify based upon gender, race, religion, family, and medical history, or many other student-specific factors. Minority scholarships are the most common awards in this category. For example, students in Canada may qualify for a number of Indigenous scholarships, whether they study at home or abroad. The Gates Millennium Scholars Program is another minority scholarship funded by Bill and Melinda Gates for excellent African American, American Indian, Asian Pacific Islander American, and Latino students who enroll in college.
* Career-specific: These are scholarships a college or university awards to students who plan to pursue a specific field of study. Often, the most generous awards go to students who pursue careers in high-need areas, such as education or nursing. Many schools in the United States give future nurses full scholarships to enter the field, especially if the student intends to work in a high-need community.
* College-specific: College-specific scholarships are offered by individual colleges and universities to highly qualified applicants. These scholarships are given on the basis of academic and personal achievement. Some scholarships have a "bond" requirement. Recipients may be required to work for a particular employer for a specified period of time or to work in rural or remote areas; otherwise, they may be required to repay the value of the support they received from the scholarship. This is particularly the case with education and nursing scholarships for people prepared to work in rural and remote areas. The programs offered by the uniformed services of the United States (Army, Navy, Marine Corps, Air Force, Coast Guard, National Oceanic and Atmospheric Administration Commissioned Officer Corps, and Public Health Service Commissioned Corps) sometimes resemble such scholarships.
* Athletic: Awarded to students with exceptional skill in a sport. Often this is so that the student will be available to attend the school or college and play the sport on their team, although in some countries government funded sports scholarships are available, allowing scholarship holders to train for international representation. School-based athletics scholarships can be controversial, as some believe that awarding scholarship money for athletic rather than academic or intellectual purposes is not in the institution's best interest.
* Brand: These scholarships are sponsored by a corporation seeking to draw attention to its brand or to a cause. Sometimes these scholarships are referred to as branded scholarships. The Miss America beauty pageant is a famous example of a brand scholarship.
* Creative contest: These scholarships are awarded to students based on a creative submission. Contest scholarships are also called mini project-based scholarships, where students can submit entries based on unique and innovative ideas.
* "Last dollar": can be provided by private and government-based institutions, and are intended to cover the remaining fees charged to a student after the various grants are taken into account. To prohibit institutions from taking last dollar scholarships into account, and thereby removing other sources of funding, these scholarships are not offered until after financial aid has been offered in the form of a letter. Furthermore, last dollar scholarships may require families to have filed taxes for the most recent year, received their other sources of financial aid, and not yet received loans.
1607) Sepsis
Summary
Sepsis, formerly known as septicemia (septicaemia in British English) or blood poisoning, is a life-threatening condition that arises when the body's response to infection causes injury to its own tissues and organs.
This initial stage of sepsis is followed by suppression of the immune system. Common signs and symptoms include fever, increased heart rate, increased breathing rate, and confusion. There may also be symptoms related to a specific infection, such as a cough with pneumonia, or painful urination with a kidney infection. The very young, old, and people with a weakened immune system may have no symptoms of a specific infection, and the body temperature may be low or normal instead of having a fever. Severe sepsis causes poor organ function or blood flow. The presence of low blood pressure, high blood lactate, or low urine output may suggest poor blood flow. Septic shock is low blood pressure due to sepsis that does not improve after fluid replacement.
Sepsis is caused by many organisms including bacteria, viruses and fungi. Common locations for the primary infection include the lungs, brain, urinary tract, skin, and abdominal organs. Risk factors include being very young or old, a weakened immune system from conditions such as cancer or diabetes, major trauma, and burns. Previously, a sepsis diagnosis required the presence of at least two systemic inflammatory response syndrome (SIRS) criteria in the setting of presumed infection. In 2016, a shortened sequential organ failure assessment score (SOFA score), known as the quick SOFA score (qSOFA), replaced the SIRS system of diagnosis. qSOFA criteria for sepsis include at least two of the following three: increased breathing rate, change in the level of consciousness, and low blood pressure. Sepsis guidelines recommend obtaining blood cultures before starting antibiotics; however, the diagnosis does not require the blood to be infected. Medical imaging is helpful when looking for the possible location of the infection. Other potential causes of similar signs and symptoms include anaphylaxis, adrenal insufficiency, low blood volume, heart failure, and pulmonary embolism.
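As a rough illustration of the qSOFA rule described above, here is a minimal Python sketch of the scoring logic. The numeric cut-offs used (respiratory rate of at least 22 breaths per minute, systolic blood pressure of 100 mmHg or less, any altered mentation) are the standard published thresholds and are an assumption here, since the text above names the criteria but not the numbers; this is purely illustrative, not a diagnostic tool.

```python
def qsofa_score(resp_rate_per_min: float, systolic_bp_mmhg: float, altered_mentation: bool) -> int:
    """Quick SOFA (qSOFA) screen; a score of 2 or more flags possible sepsis.

    Thresholds below are the standard published cut-offs (an assumption, not stated
    in the text above): RR >= 22/min, SBP <= 100 mmHg, altered level of consciousness.
    """
    score = 0
    if resp_rate_per_min >= 22:    # increased breathing rate
        score += 1
    if systolic_bp_mmhg <= 100:    # low blood pressure
        score += 1
    if altered_mentation:          # change in the level of consciousness
        score += 1
    return score

# Example: RR 24/min, SBP 95 mmHg, confused -> score 3, meets the "at least two" rule.
print(qsofa_score(24, 95, True) >= 2)  # True
```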
Sepsis requires immediate treatment with intravenous fluids and antimicrobials. Ongoing care often continues in an intensive care unit. If an adequate trial of fluid replacement is not enough to maintain blood pressure, then the use of medications that raise blood pressure becomes necessary. Mechanical ventilation and dialysis may be needed to support the function of the lungs and kidneys, respectively. A central venous catheter and an arterial catheter may be placed for access to the bloodstream and to guide treatment. Other helpful measurements include cardiac output and superior vena cava oxygen saturation. People with sepsis need preventive measures for deep vein thrombosis, stress ulcers, and pressure ulcers unless other conditions prevent such interventions. Some people might benefit from tight control of blood sugar levels with insulin. The use of corticosteroids is controversial, with some reviews finding benefit, and others not.
Disease severity partly determines the outcome. The risk of death from sepsis is as high as 30%, while for severe sepsis it is as high as 50%, and for septic shock 80%. Sepsis affected about 49 million people in 2017, with 11 million deaths (1 in 5 deaths worldwide). In the developed world, approximately 0.2 to 3 people per 1000 are affected by sepsis yearly, resulting in about a million cases per year in the United States. Rates of disease have been increasing. Some data indicate that sepsis is more common among males than females; however, other data show a greater prevalence of the disease among women. Descriptions of sepsis date back to the time of Hippocrates.
Details
What is sepsis?
Anyone can get an infection, and almost any infection, including COVID-19, can lead to sepsis. In a typical year:
* At least 1.7 million adults in America develop sepsis.
* At least 350,000 adults who develop sepsis die during their hospitalization or are discharged to hospice.
* 1 in 3 people who die in a hospital had sepsis during that hospitalization.
* Sepsis, or the infection causing sepsis, starts before a patient goes to the hospital in nearly 87% of cases.
Sepsis is the body’s extreme response to an infection. It is a life-threatening medical emergency. Sepsis happens when an infection you already have triggers a chain reaction throughout your body. Infections that lead to sepsis most often start in the lung, urinary tract, skin, or gastrointestinal tract. Without timely treatment, sepsis can rapidly lead to tissue damage, organ failure, and death.
Is sepsis contagious?
You can’t spread sepsis to other people. However, an infection can lead to sepsis, and you can spread some infections to other people.
What causes sepsis?
Infections can put you or your loved one at risk for sepsis. When germs get into a person’s body, they can cause an infection. If you don’t stop that infection, it can cause sepsis. Bacterial infections cause most cases of sepsis. Sepsis can also be a result of other infections, including viral infections, such as COVID-19 or influenza, or fungal infections.
Who is at risk?
Anyone can develop sepsis, but some people are at higher risk for sepsis:
* Adults 65 or older
* People with weakened immune systems
* People with chronic medical conditions, such as diabetes, lung disease, cancer, and kidney disease
* People with recent severe illness or hospitalization
What are the signs & symptoms?
A person with sepsis might have one or more of the following signs or symptoms:
* High heart rate or weak pulse
* Confusion or disorientation
* Extreme pain or discomfort
* Fever, shivering, or feeling very cold
* Shortness of breath
* Clammy or sweaty skin
A medical assessment by a healthcare professional is needed to confirm sepsis.
What should I do if I think I might have sepsis?
Sepsis is a medical emergency. If you or your loved one has an infection that’s not getting better or is getting worse, ACT FAST.
Get medical care IMMEDIATELY. Ask your healthcare professional, “Could this infection be leading to sepsis?” and if you should go to the emergency room.
If you have a medical emergency, call 911. If you have or think you have sepsis, tell the operator. If you have or think you have COVID-19, tell the operator this as well. If possible, put on a mask before medical help arrives.
With fast recognition and treatment, most people survive. Treatment requires urgent medical care, usually in an intensive care unit in a hospital, and includes careful monitoring of vital signs and often antibiotics.
Additional Information
Sepsis is a systemic inflammatory condition that occurs as a complication of infection and in severe cases may be associated with acute and life-threatening organ dysfunction. Worldwide, sepsis has long been a common cause of illness and mortality in hospitals, intensive care units, and emergency departments. In 2017 alone, an estimated 11 million people worldwide died from sepsis, accounting for nearly one-fifth of all deaths globally that year. Nonetheless, this number marked a decrease in sepsis death rates from the last part of the 20th century. Improvements in health care, including better sanitation and the development of more effective treatments, were thought to have contributed to the decline.
Populations most susceptible to sepsis include the elderly and persons who are severely ill and hospitalized. In the early 21st century, other factors, including increased life expectancy for persons with immunodeficiency disorders (e.g., HIV/AIDS), increased incidence of antibiotic resistance, and increased use of anticancer chemotherapy and immunosuppressive drugs (e.g., for organ transplantation), have emerged as important risk factors of sepsis.
Risk factors, symptoms, and diagnosis
In addition to the elderly and to persons with weak immune systems, newborns, pregnant women, and individuals affected by chronic diseases such as diabetes mellitus are also highly susceptible to sepsis. Other risk factors include hospitalization and the introduction of medical devices (e.g., surgical instruments) into the body. Early symptoms of sepsis include increased heart rate, increased respiratory rate, suspected or confirmed infection, and increased or decreased body temperature (i.e., greater than 101.3 °F [38.5 °C] or lower than 95 °F [35 °C]). Diagnosis is based on the presence of at least two of these symptoms. In many instances, however, the condition is not diagnosed until it has progressed to severe sepsis, which is characterized by symptoms of organ dysfunction, including irregular heartbeat, laboured breathing, confusion, dizziness, decreased urinary output, and skin discoloration. The condition may then progress to septic shock, which occurs when the above symptoms are accompanied by a marked drop in blood pressure. Severe sepsis and septic shock may also involve the failure of two or more organ systems, at which point the condition may be described as multiple organ dysfunction syndrome (MODS). The condition may progress through these stages in a matter of hours, days, or weeks, depending on treatment and other factors.
Treatment and complications
Prompt treatment is required in order to decrease the risk of progression to septic shock or MODS. Initial treatment includes the emergency intravenous administration of fluids and antibiotics. Vasoconstrictor drugs also may be given intravenously to raise blood pressure, and patients who experience breathing difficulties sometimes require mechanical ventilation. Dialysis, which helps clear the blood of infectious agents, is initiated when kidney failure is evident, and surgery may be used to drain an infection.
Many patients experience a decrease in quality of life following sepsis, particularly if the patient is older or the attack severe. Acute lung injury and neuronal injury resulting from sepsis, for example, have been associated with long-term cognitive impairment. Older persons who suffer from such complications may not be able to live independently following their recovery from sepsis and often require long-term treatment with medication.
Pathophysiology
At the cellular level, sepsis is characterized by changes in the function of endothelial tissue (the endothelium forms the inner surface of blood vessels), in the coagulation (blood clotting) process, and in blood flow. These changes appear to be initiated by the cellular release of pro-inflammatory substances in response to the presence of infectious microorganisms. The substances, which include short-lived regulatory proteins known as cytokines, in turn interact with endothelial cells and thereby cause injury to the endothelium and possibly the death (apoptosis) of endothelial cells. These interactions lead to the activation of coagulation factors. In very small blood vessels (microvessels), the coagulation response, in combination with endothelial damage, may impede blood flow and cause the vessels to become leaky. As fluid and microorganisms escape into the surrounding tissues, the tissues begin to swell (edema); in the lungs this leads to pulmonary edema, which manifests as shortness of breath. If the supply of coagulation proteins becomes exhausted, bleeding may ensue. Cytokines also cause blood vessels to dilate (widen), producing a decrease in blood pressure. The damage incited by the inflammatory response is widespread and has been described as a “pan-endothelial” effect because of the distribution of endothelial tissue in blood vessels throughout the body; this effect appears to explain the systemic nature of sepsis.
The existence of multiple conditions that are characterized by similar symptoms complicates the clinical picture of sepsis. For example, sepsis is closely related to bacteremia, which is the infection of blood with bacteria, and septicemia, which is a systemic inflammatory condition caused specifically by bacteria and typically associated with bacteremia. Sepsis differs from these conditions in that it may arise in response to infection with any of a variety of microorganisms, including bacteria, viruses, protozoans, and fungi. However, the occasional progression of septicemia to more-advanced stages of sepsis and the frequent involvement of bacterial infection in sepsis preclude clear clinical distinction between these conditions. Sepsis is also distinguished from systemic inflammatory response syndrome (SIRS), a condition that can arise independent of infection (e.g., from factors such as burns or trauma).
Sepsis through history
One of the first medical descriptions of putrefaction and a sepsislike condition was provided in the 5th and 4th centuries BCE in works attributed to the ancient Greek physician Hippocrates (the Greek word sepsis means “putrefaction”). With no knowledge of infectious microorganisms, the ancient Greeks and the physicians who came after them variably associated the condition with digestive illness, miasma (infection by bad air), and spontaneous generation. These apocryphal associations persisted until the 19th century, when infection finally was discovered to be the underlying cause of sepsis, a realization that emerged from the work of British surgeon and medical scientist Sir Joseph Lister and French chemist and microbiologist Louis Pasteur.
1608) Defense Mechanism
Defense mechanism, in psychoanalytic theory, is any of a group of mental processes that enables the mind to reach compromise solutions to conflicts that it is unable to resolve. The process is usually unconscious, and the compromise generally involves concealing from oneself internal drives or feelings that threaten to lower self-esteem or provoke anxiety. The concept derives from the psychoanalytic hypothesis that there are forces in the mind that oppose and battle against each other. The term was first used in Sigmund Freud’s paper “The Neuro-Psychoses of Defence” (1894).
Some of the major defense mechanisms described by psychoanalysts are the following:
1. Repression is the withdrawal from consciousness of an unwanted idea, affect, or desire by pushing it down, or repressing it, into the unconscious part of the mind. An example may be found in a case of hysterical amnesia, in which the victim has performed or witnessed some disturbing act and then completely forgotten the act itself and the circumstances surrounding it.
2. Reaction formation is the fixation in consciousness of an idea, affect, or desire that is opposite to a feared unconscious impulse. A mother who bears an unwanted child, for example, may react to her feelings of guilt for not wanting the child by becoming extremely solicitous and overprotective to convince both the child and herself that she is a good mother.
3. Projection is a form of defense in which unwanted feelings are displaced onto another person, where they then appear as a threat from the external world. A common form of projection occurs when an individual, threatened by his own angry feelings, accuses another of harbouring hostile thoughts.
4. Regression is a return to earlier stages of development and abandoned forms of gratification belonging to them, prompted by dangers or conflicts arising at one of the later stages. A young wife, for example, might retreat to the security of her parents’ home after her first quarrel with her husband.
5. Sublimation is the diversion or deflection of instinctual drives, usually sexual ones, into noninstinctual channels. Psychoanalytic theory holds that the energy invested in sexual impulses can be shifted to the pursuit of more acceptable and even socially valuable achievements, such as artistic or scientific endeavours.
6. Denial is the conscious refusal to perceive that painful facts exist. In denying latent feelings of homosexuality or hostility, or mental defects in one’s child, an individual can escape intolerable thoughts, feelings, or events.
7. Rationalization is the substitution of a safe and reasonable explanation for the true (but threatening) cause of behaviour.
Psychoanalysts emphasize that the use of a defense mechanism is a normal part of personality function and not in and of itself a sign of psychological disorder. Various psychological disorders, however, can be characterized by an excessive or rigid use of these defenses.
Additional Information
In psychoanalytic theory, a defence mechanism (American English: defense mechanism) is an unconscious psychological operation that functions to protect a person from anxiety-producing thoughts and feelings related to internal conflicts and outer stressors.
The idea of defence mechanisms comes from psychoanalytic theory, a psychological perspective of personality that sees personality as the interaction between three components: id, ego, and super-ego. These psychological strategies may help people put distance between themselves and threats or unwanted feelings, such as guilt or shame.
Defence mechanisms may result in healthy or unhealthy consequences depending on the circumstances and frequency with which the mechanism is used. Defence mechanisms (German: Abwehrmechanismen) are psychological strategies brought into play by the unconscious mind to manipulate, deny, or distort reality in order to defend against feelings of anxiety and unacceptable impulses and to maintain one's self-schema or other schemas. These processes that manipulate, deny, or distort reality may include the following: repression, or the burying of a painful feeling or thought from one's awareness even though it may resurface in a symbolic form; identification, incorporating an object or thought into oneself; and rationalization, the justification of one's behaviour and motivations by substituting "good" acceptable reasons for the actual motivations. In psychoanalytic theory, repression is considered the basis for other defence mechanisms.
According to this theory, healthy people normally use different defence mechanisms throughout life. A defence mechanism becomes pathological only when its persistent use leads to maladaptive behaviour such that the physical or mental health of the individual is adversely affected. Among the purposes of ego defence mechanisms is to protect the mind/self/ego from anxiety or social sanctions or to provide a refuge from a situation with which one cannot currently cope.
One resource used to evaluate these mechanisms is the Defense Style Questionnaire (DSQ-40).
1609) Electricity
Gist
Electricity is the flow of electrical power or charge. Electricity is both a basic part of nature and one of the most widely used forms of energy.
Summary
Electricity is the set of physical phenomena associated with the presence and motion of matter that has a property of electric charge. Electricity is related to magnetism, both being part of the phenomenon of electromagnetism, as described by Maxwell's equations. Various common phenomena are related to electricity, including lightning, static electricity, electric heating, electric discharges and many others.
The presence of either a positive or negative electric charge produces an electric field. The movement of electric charges is an electric current and produces a magnetic field. In most applications, a force acts on a charge with a magnitude given by Coulomb's law. Electric potential is typically measured in volts.
Electricity is at the heart of many modern technologies, being used for:
* Electric power, where electric current is used to energise equipment;
* Electronics, which deals with electrical circuits that involve active electrical components such as vacuum tubes, transistors, diodes and integrated circuits, and associated passive interconnection technologies.
Electrical phenomena have been studied since antiquity, though progress in theoretical understanding remained slow until the 17th and 18th centuries. The theory of electromagnetism was developed in the 19th century, and by the end of that century electricity was being put to industrial and residential use by electrical engineers. The rapid expansion in electrical technology at this time transformed industry and society, becoming a driving force for the Second Industrial Revolution. Electricity's extraordinary versatility means it can be put to an almost limitless set of applications which include transport, heating, lighting, communications, and computation. Electrical power is now the backbone of modern industrial society.
Details
Electricity is a phenomenon associated with stationary or moving electric charges. Electric charge is a fundamental property of matter and is borne by elementary particles. In electricity the particle involved is the electron, which carries a charge designated, by convention, as negative. Thus, the various manifestations of electricity are the result of the accumulation or motion of numbers of electrons.
Electrostatics
Electrostatics is the study of electromagnetic phenomena that occur when there are no moving charges—i.e., after a static equilibrium has been established. Charges reach their equilibrium positions rapidly because the electric force is extremely strong. The mathematical methods of electrostatics make it possible to calculate the distributions of the electric field and of the electric potential from a known configuration of charges, conductors, and insulators. Conversely, given a set of conductors with known potentials, it is possible to calculate electric fields in regions between the conductors and to determine the charge distribution on the surface of the conductors. The electric energy of a set of charges at rest can be viewed from the standpoint of the work required to assemble the charges; alternatively, the energy also can be considered to reside in the electric field produced by this assembly of charges. Finally, energy can be stored in a capacitor; the energy required to charge such a device is stored in it as electrostatic energy of the electric field.
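As a small numerical illustration of that last point, the energy stored in a charged capacitor is U = (1/2)CV^2 joules (equivalently Q^2/2C). The component values in the sketch below are arbitrary illustrative numbers, not figures from the text.

```python
def capacitor_energy(capacitance_farads: float, voltage_volts: float) -> float:
    """Electrostatic energy stored in a charged capacitor: U = 1/2 * C * V^2 (joules)."""
    return 0.5 * capacitance_farads * voltage_volts ** 2

# Illustrative values: a 100 microfarad capacitor charged to 12 V stores about 7.2 mJ.
print(capacitor_energy(100e-6, 12.0))  # about 0.0072 J
```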
Coulomb’s law
Static electricity is a familiar electric phenomenon in which charged particles are transferred from one body to another. For example, if two objects are rubbed together, especially if the objects are insulators and the surrounding air is dry, the objects acquire equal and opposite charges and an attractive force develops between them. The object that loses electrons becomes positively charged, and the other becomes negatively charged. The force is simply the attraction between charges of opposite sign. The properties of this force were described above; they are incorporated in the mathematical relationship known as Coulomb’s law.
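For the force itself, Coulomb's law gives F = k q1 q2 / r^2, with k approximately 8.99 x 10^9 N·m^2/C^2. A minimal sketch follows, using arbitrary illustrative charges and separation:

```python
COULOMB_CONSTANT = 8.9875e9  # k in N*m^2/C^2

def coulomb_force(q1_coulombs: float, q2_coulombs: float, distance_m: float) -> float:
    """Electrostatic force between two point charges (newtons): F = k*q1*q2/r^2.

    The result is positive for like charges (repulsion) and negative for charges of
    opposite sign (attraction), following the sign of q1*q2.
    """
    return COULOMB_CONSTANT * q1_coulombs * q2_coulombs / distance_m ** 2

# Illustrative values: two +1 microcoulomb charges 10 cm apart repel with about 0.9 N.
print(coulomb_force(1e-6, 1e-6, 0.1))
```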
Additional Information
Electricity is central to many parts of life in modern societies and will become even more so as its role in transport and heating expands through technologies such as electric vehicles and heat pumps. Power generation is currently the largest source of carbon dioxide (CO2) emissions globally, but it is also the sector that is leading the transition to net zero emissions through the rapid ramping up of renewables such as solar and wind. At the same time, the current global energy crisis has placed electricity security and affordability high on the political agenda in many countries.
Electricity consumption in the European Union recorded a sharp 3.5% decline year-on-year (y-o-y) in 2022 as the region was particularly hard hit by high energy prices, which led to significant demand destruction among industrial consumers. Electricity demand in India and the United States rose, while Covid restrictions affected China’s growth.
1610) Occupational Therapy
Gist
Occupational therapy (OT) is a branch of health care that helps people of all ages who have physical, sensory, or cognitive problems. OT can help them regain independence in all areas of their lives. Occupational therapists help with barriers that affect a person's emotional, social, and physical needs.
Summary
Occupational therapy (OT) is a healthcare profession. It involves the use of assessment and intervention to develop, recover, or maintain the meaningful activities, or occupations, of individuals, groups, or communities. The field of OT consists of health care practitioners trained and educated to improve mental and physical performance. Occupational therapists specialize in teaching, educating, and supporting participation in any activity that occupies an individual's time. It is an independent health profession sometimes categorized as an allied health profession and consists of occupational therapists (OTs) and occupational therapy assistants (OTAs). While OTs and OTAs have different roles, they both work with people who want to improve their mental and/or physical health, disabilities, injuries, or impairments.
The American Occupational Therapy Association defines an occupational therapist as someone who "helps people across their lifespan participate in the things they want and/or need to do through the therapeutic use of everyday activities (occupations)". Definitions by professional occupational therapy organizations outside North America are similar in content.
Common interventions include:
* Helping children with disabilities to participate in school and social situations (independent mobility is often a central concern)
* Training in assistive device technology, meaningful and purposeful activities, and life skills.
* Physical injury rehabilitation
* Mental dysfunction rehabilitation
* Support of individuals across the age spectrum experiencing physical and cognitive changes
* Assessing ergonomics and assistive seating options to maximize independent function, while alleviating the risk of pressure injury
* Education in the disease and rehabilitation process
* Advocating for patient health
Typically, occupational therapists are university-educated professionals and must pass a licensing exam to practice.
Currently, entry-level occupational therapists must have a master's degree, while certified occupational therapy assistants require a two-year associate's degree to practice in the United States. Individuals must pass a national board certification and apply for a state license in most states. Occupational therapists often work closely with professionals in physical therapy, speech–language pathology, audiology, nursing, nutrition, social work, psychology, medicine, and assistive technology.
Details
Occupational therapy aims to improve your ability to do everyday tasks if you're having difficulties.
How to get occupational therapy
You can get occupational therapy free through the NHS or social services, depending on your situation.
You can:
* speak to a GP about a referral
* search for your local council to ask if you can get occupational therapy
You can also pay for it yourself. The Royal College of Occupational Therapists lists qualified and registered occupational therapists.
Find an occupational therapist
You can check an occupational therapist is qualified and registered with the Health & Care Professions Council (HCPC) using its online register of health and care professionals.
How occupational therapy can help you
Occupational therapy can help you with practical tasks if you:
* are physically disabled
* are recovering from an illness or operation
* have learning disabilities
* have mental health problems
* are getting older
Occupational therapists work with people of all ages and can look at all aspects of daily life in your home, school or workplace.
They look at activities you find difficult and see if there's another way you can do them.
The Royal College of Occupational Therapists has more information about what occupational therapy is.
Additional Information
Occupational therapy is use of self-care and work and play activities to promote and maintain health, prevent disability, increase independent function, and enhance development. Occupation includes all the activities or tasks that a person performs each day. For example, getting dressed, playing a sport, taking a class, cooking a meal, getting together with friends, and working at a job are considered occupations. Participation in occupations serves many purposes, from taking care of oneself and interacting with others to earning a living, developing skills, and contributing to society.
An occupational therapist works with persons who are unable to carry out the various activities that they want, need, or are expected to perform. Therapists are skilled in analyzing daily activities and tasks, and they work to construct therapy programs that enable persons to participate more satisfactorily in daily occupations. Occupational therapy intervention and the organization of specific therapy programs are coordinated with the work of other professional and health care personnel.
History
The discipline of occupational therapy evolved from the recognition many years ago that participation in work and other restorative activities improved the health of persons affected by mental or physical illness. In fact, patients have long been employed in the utility services of psychiatric hospitals. In the 19th century the moral treatment approach proposed the use of daily activities to improve the lives of people who were institutionalized for mental illness. By the early 20th century, experiments were being made in the use of arts and craft activities to occupy persons with serious mental disorders. This practice gave rise to the first occupational therapy workshops and later to schools for the training of occupational therapists.
The goal of early occupational therapy was to improve health through structured activities. World War I emphasized the need for occupational therapy, since the physical rehabilitation of veterans provided them an opportunity to return to productive work. In 1917, coincident with the increase in demand to aid veterans in the United States, the National Society for the Promotion of Occupational Therapy (later the American Occupational Therapy Association) was founded. Subsequent advancements in occupational therapy included the development of techniques used to analyze activities and the prescription of specific crafts and occupations for patients, particularly for young people and for patients within hospitals. In 1952 the World Federation of Occupational Therapists was formed, and in 1954 the first international congress of occupational therapists was held at Edinburgh.
In the latter part of the 20th century and early part of the 21st century, the development and refinement of theoretical models to guide occupational therapy assessment and intervention further advanced the practice of occupational therapy. These theories focus on the complex relationships between the motivations and skills of patients, the occupations that bring meaning to their lives, and the environments in which they live. Occupational science was developed to support the study of occupation and its complexity in everyday life. As a result, research in occupational therapy has grown substantially and has played an important role in providing scientific evidence to support many occupational therapy interventions.
Modern occupational therapy
Occupational therapists work with individuals of all ages and with various organizations, including companies and governments. The practice of occupational therapy focuses on maintenance of health, prevention of disability, and improvement of participation in occupations after illness, accident, or disability. Thus, therapists typically work with persons who have physical challenges in occupations because of illness, injury, or disability. They also work with persons who are at risk for decreased participation in their occupations. For example, programs for older adults that adapt their living environments to minimize the risk for a fall help them to continue to live in the community.
Establishing therapist-patient partnerships is an important part of a successful therapy program. Initial assessments enable patients to identify the occupations that are most meaningful to them but that they have difficulty performing. This helps therapists tailor programs to each patient’s needs and goals. Modern occupational therapy also focuses on the analysis, adaptation, and use of daily occupations to enable persons to live fully within their community. Each person’s day is filled with a variety of different activities and tasks, such as getting dressed, taking a bus, making a phone call, writing a report, loading equipment at work, or playing a game. Occupational therapists are trained to analyze these activities and tasks to determine what skills and abilities are required to complete them. If a person has difficulty engaging fully in day-to-day occupations, a therapist works with that person to assess why he or she cannot perform the specific activities and tasks that make up an occupation. Factors within the activity, the person, and the environment are examined to determine reasons for difficulties in performance. The occupational therapist and the person then develop a plan to improve performance through active participation in the occupation. Therapy may focus on improving a person’s skills through participation in the activity, adapting the activity to make it easier, or changing the environment to improve performance.
Examples of applied occupational therapy
The approaches that occupational therapists use to maintain and improve participation in the daily activities and tasks of patients can be illustrated by specific cases. The following examples explore several different situations that may be encountered by therapists.
In the first example, a young child who has cerebral palsy has difficulty learning to dress himself because of limitations in movement and coordination. With his parents, an occupational therapist plans a program to teach the most efficient methods for dressing. Changes to clothing, such as the addition of velcro closures or elastic shoelaces, may be used to adapt the activity. Methods of practice are taught to the parents. Specific activities are practiced throughout his day to help him improve his motor skills. At preschool, the therapist consults with the teacher to provide information about the child’s abilities and how to change the classroom environment to enhance his functioning.
In a second example, an older adult who had a mild stroke is experiencing depression and is uncertain whether she can continue to live in her apartment. A community occupational therapist assesses the woman’s interests and required daily activities and develops a plan for engagement in activities in her apartment and in the community. As she participates in these activities, she gains confidence and improves her ability to live independently. The therapist also makes adaptations to the woman’s kitchen so that she can reach utensils and make her meals easily and safely.
In a third example, after a motor vehicle accident, a 45-year-old woman is unable to return to work as an administrative assistant because of a neck injury. An occupational therapist analyzes the demands of the woman’s job and her ability to complete work-oriented tasks. The therapist makes changes to the woman’s work area to minimize pain and fatigue. The therapist also creates a paced return-to-work schedule, allowing the woman to improve her endurance gradually in order to achieve a successful return to her workplace.
In a fourth example, knowing that about 15 percent of people living in their country have a disability, the leaders of a community of 60,000 citizens decide to improve access to local recreation and leisure programs. An occupational therapist is hired to conduct an accessibility audit of the programs and their physical locations. Recommendations are provided to decrease physical, attitudinal, and policy barriers that may limit full participation.
In these examples, occupational therapy enhanced a person’s ability to participate by improving his or her skills or by adapting the activity or changing the environment. The continued advance of occupational science, which enables new research findings to be considered alongside assessment of the client’s needs, forms an important part of the success of the therapeutic approaches described above.
Education of occupational therapists
Occupational therapists worldwide are educated at colleges or universities. Across countries, there is a range in the qualifications required for an occupational therapist to enter into practice. Many countries require a baccalaureate, or bachelor’s degree. In Canada and the United States, the minimum qualification is a master’s degree in occupational therapy. The United States also has entry-level clinical doctorate degrees. Europe supports bachelor’s degrees as well as advanced degrees (master’s and doctorate) after entry-level practice has been achieved. In Australia, entry-level qualifications can be obtained from a bachelor’s or master’s degree. There is an increase in the number of therapists globally who return to university to obtain advanced master’s or doctoral degrees in order to teach or to conduct research.
Occupational therapy education focuses on the theoretical concepts of occupation and the skills and abilities to practice as an occupational therapist. Students also must have adequate knowledge of anatomy, physiology, medicine, surgery, psychiatry, and psychology, since this knowledge is part of the foundation of occupational therapy assessment and intervention. Every occupational therapy education program includes periods of supervised clinical experience.
1611) Jurisprudence
Summary
Jurisprudence is the science or philosophy of law. Jurisprudence may be divided into three branches: analytical, sociological, and theoretical. The analytical branch articulates axioms, defines terms, and prescribes the methods that best enable one to view the legal order as an internally consistent, logical system. The sociological branch examines the actual effects of the law within society and the influence of social phenomena on the substantive and procedural aspects of law. The theoretical branch evaluates and criticizes law in terms of the ideals or goals postulated for it.
Details
The term Jurisprudence (when it does not refer to authoritative legal decision-making, as in "the jurisprudence of the Supreme Court") is almost synonymous with legal theory and legal philosophy (or philosophy of law). Jurisprudence as scholarship is principally concerned with what, in general, law is and ought to be. That includes questions of how persons and social relations are understood in legal terms, and of the values in and of law. Work that is counted as jurisprudence is mostly philosophical, but it includes work that also belongs to other disciplines, such as sociology, history, politics and economics.
Modern jurisprudence began in the 18th century and it was based on the first principles of natural law, civil law, and the law of nations. General jurisprudence can be divided into categories both by the type of question scholars seek to answer and by the theories of jurisprudence, or schools of thought, regarding how those questions are best answered. Contemporary philosophy of law, which deals with general jurisprudence, addresses problems internal to law and legal systems and problems of law as a social institution that relates to the larger political and social context in which it exists.
This article addresses three distinct branches of thought in general jurisprudence. Ancient natural law is the idea that there are rational objective limits to the power of legislative rulers. The foundations of law are accessible through reason, and it is from these laws of nature that human laws gain whatever force they have. Analytic jurisprudence (clarificatory jurisprudence) rejects natural law's fusing of what law is and what it ought to be. It espouses the use of a neutral point of view and descriptive language when referring to aspects of legal systems. It encompasses such theories of jurisprudence as legal positivism, which holds that there is no necessary connection between law and morality and that the force of law comes from basic social facts, and "legal realism", which argues that the real-world practice of law determines what law is, the law having the force that it does because of what legislators, lawyers, and judges do with it. Unlike experimental jurisprudence, which seeks to investigate the content of folk legal concepts using the methods of social science, the traditional method of both natural law and analytic jurisprudence is philosophical analysis. Normative jurisprudence is concerned with "evaluative" theories of law. It deals with what the goal or purpose of law is, or what moral or political theories provide a foundation for the law. It not only addresses the question "What is law?", but also tries to determine what the proper function of law should be, or what sorts of acts should be subject to legal sanctions, and what sorts of punishment should be permitted.
Etymology
The English word is derived from the Latin iurisprudentia. Iuris is the genitive form of ius, meaning law, and prudentia means prudence (also: discretion, foresight, forethought, circumspection). The term refers to the exercise of good judgment, common sense, and caution, especially in the conduct of practical matters. The word first appeared in written English in 1628, at a time when the word prudence meant knowledge of, or skill in, a matter. It may have entered English via the French jurisprudence, which appeared earlier.
History
Ancient Indian jurisprudence is mentioned in various Dharmaśāstra texts, starting with the Dharmasutra of Baudhāyana.
In Ancient China, the Daoists, Confucians, and Legalists all had competing theories of jurisprudence.
Jurisprudence in Ancient Rome had its origins with the periti, experts in the jus mos maiorum (traditional law), a body of oral laws and customs.
Praetors established a working body of laws by judging whether or not singular cases were capable of being prosecuted either by the edicta, the annual pronouncement of prosecutable offenses, or, in extraordinary situations, by additions made to the edicta. An iudex would then prescribe a remedy according to the facts of the case.
The sentences of the iudex were supposed to be simple interpretations of the traditional customs, but—apart from considering what traditional customs applied in each case—soon developed a more equitable interpretation, coherently adapting the law to newer social exigencies. The law was then adjusted with evolving institutiones (legal concepts), while remaining in the traditional mode. Praetors were replaced in the 3rd century BC by a laical body of prudentes. Admission to this body was conditional upon proof of competence or experience.
Under the Roman Empire, schools of law were created, and practice of the law became more academic. From the early Roman Empire to the 3rd century, a relevant body of literature was produced by groups of scholars, including the Proculians and Sabinians. The scientific nature of the studies was unprecedented in ancient times.
After the 3rd century, juris prudentia became a more bureaucratic activity, with few notable authors. It was during the Eastern Roman Empire (5th century) that legal studies were once again undertaken in depth, and it is from this cultural movement that Justinian's Corpus Juris Civilis was born.
Natural law
In its general sense, natural law theory may be compared to both state-of-nature law and general law understood on the basis of being analogous to the laws of physical science. Natural law is often contrasted to positive law which asserts law as the product of human activity and human volition.
Another approach to natural-law jurisprudence generally asserts that human law must be in response to compelling reasons for action. There are two readings of the natural-law jurisprudential stance.
The strong natural law thesis holds that if a human law fails to be in response to compelling reasons, then it is not properly a "law" at all. This is captured, imperfectly, in the famous maxim: lex iniusta non est lex (an unjust law is no law at all).
The weak natural law thesis holds that if a human law fails to be in response to compelling reasons, then it can still be called a "law", but it must be recognised as a defective law.
Notions of an objective moral order, external to human legal systems, underlie natural law. What is right or wrong can vary according to the interests one is focused on. John Finnis, one of the most important of modern natural lawyers, has argued that the maxim "an unjust law is no law at all" is a poor guide to the classical Thomist position.
Strongly related to theories of natural law are classical theories of justice, beginning in the West with Plato's Republic.
Aristotle
Aristotle is often said to be the father of natural law. Like his philosophical forefathers Socrates and Plato, Aristotle posited the existence of natural justice or natural right. His association with natural law is largely due to how he was interpreted by Thomas Aquinas. This was based on Aquinas' conflation of natural law and natural right, the latter of which Aristotle posits in Book V of the Nicomachean Ethics (Book IV of the Eudemian Ethics). Aquinas's influence was such as to affect a number of early translations of these passages, though more recent translations render them more literally.
Aristotle's theory of justice is bound up in his idea of the golden mean. Indeed, his treatment of what he calls "political justice" derives from his discussion of "the just" as a moral virtue derived as the mean between opposing vices, just like every other virtue he describes. His longest discussion of his theory of justice occurs in Nicomachean Ethics and begins by asking what sort of mean a just act is. He argues that the term "justice" actually refers to two different but related ideas: general justice and particular justice. When a person's actions toward others are completely virtuous in all matters, Aristotle calls them "just" in the sense of "general justice"; as such, this idea of justice is more or less coextensive with virtue. "Particular" or "partial justice", by contrast, is the part of "general justice" or the individual virtue that is concerned with treating others equitably.
Aristotle moves from this unqualified discussion of justice to a qualified view of political justice, by which he means something close to the subject of modern jurisprudence. Of political justice, Aristotle argues that it is partly derived from nature and partly a matter of convention. This can be taken as a statement that is similar to the views of modern natural law theorists. But it must also be remembered that Aristotle is describing a view of morality, not a system of law, and therefore his remarks as to nature are about the grounding of the morality enacted as law, not the laws themselves.
The best evidence of Aristotle's having thought there was a natural law comes from the Rhetoric, where Aristotle notes that, aside from the "particular" laws that each people has set up for itself, there is a "common" law that is according to nature. The context of this remark, however, suggests only that Aristotle thought that it could be rhetorically advantageous to appeal to such a law, especially when the "particular" law of one's own city was adverse to the case being made, not that there actually was such a law. Aristotle, moreover, considered certain candidates for a universally valid, natural law to be wrong. Aristotle's theoretical paternity of the natural law tradition is consequently disputed.
Thomas Aquinas
Thomas Aquinas was the most influential Western medieval legal scholar.
Thomas Aquinas is the foremost classical proponent of natural theology, and the father of the Thomistic school of philosophy, for a long time the primary philosophical approach of the Roman Catholic Church. The work for which he is best known is the Summa Theologiae. One of the thirty-five Doctors of the Church, he is considered by many Catholics to be the Church's greatest theologian. Consequently, many institutions of learning have been named after him.
Aquinas distinguished four kinds of law: eternal, natural, divine, and human:
* Eternal law refers to divine reason, known only to God. It is God's plan for the universe. Man needs this plan, for without it he would totally lack direction.
* Natural law is the "participation" in the eternal law by rational human creatures, and is discovered by reason
* Divine law is revealed in the scriptures and is God's positive law for mankind
* Human law is supported by reason and enacted for the common good.
Natural law is based on "first principles":
... this is the first precept of the law, that good is to be done and promoted, and evil is to be avoided. All other precepts of the natural law are based on this ...
The desires to live and to procreate are counted by Aquinas among those basic (natural) human values on which all other human values are based.
School of Salamanca
Francisco de Vitoria was perhaps the first to develop a theory of ius gentium (the rights of peoples), and thus is an important figure in the transition to modernity. He extrapolated his ideas of legitimate sovereign power to international affairs, concluding that such affairs ought to be determined by forms respectful of the rights of all and that the common good of the world should take precedence over the good of any single state. This meant that relations between states ought to pass from being justified by force to being justified by law and justice. Some scholars have upset the standard account of the origins of international law, which emphasises the seminal text De iure belli ac pacis by Hugo Grotius, and argued for Vitoria's and, later, Suárez's importance as forerunners and, potentially, founders of the field. Others, such as Koskenniemi, have argued that none of these humanist and scholastic thinkers can be understood to have founded international law in the modern sense, instead placing its origins in the post-1870 period.
Francisco Suárez, regarded as among the greatest scholastics after Aquinas, subdivided the concept of ius gentium. Working with already well-formed categories, he carefully distinguished ius inter gentes from ius intra gentes. Ius inter gentes (which corresponds to modern international law) was something common to the majority of countries, although, being positive law, not natural law, it was not necessarily universal. On the other hand, ius intra gentes, or civil law, is specific to each nation.
1612) Excursion
Gist
A short journey or trip that a group of people make for pleasure.
Summary
An excursion is a trip by a group of people, usually made for leisure, education, or physical purposes. It is often an adjunct to a longer journey or visit to a place, sometimes for other (typically work-related) purposes.
Public transportation companies issue reduced price excursion tickets to attract business of this type. Often these tickets are restricted to off-peak days or times for the destination concerned.
Short excursions for education or for observations of natural phenomena are called field trips. One-day educational field studies are often made by classes as extracurricular exercises, e.g. to visit a natural or geographical feature.
The term is also used for short military movements into foreign territory, without a formal announcement of war.
Details
1. A usually short journey made for pleasure; an outing.
2. A roundtrip in a passenger vehicle at a special low fare.
3. A group taking a short pleasure trip together.
4. A diversion or deviation from a main topic; a digression.
5. Physics
a. A movement from and back to a mean position or axis in an oscillating or alternating motion.
b. The distance traversed in such a movement.
In short:
a) A military sortie; raid.
b) A short trip taken with the intention of returning to the point of departure; short journey, as for pleasure; jaunt.
c) A roundtrip in a passenger vehicle at a special low fare.
d) A round trip (on a train, bus, ship, etc.) at reduced rates, usually with limits set on the dates of departure and return.
e) A group taking such a trip.
1613) Fermentation
Gist
Fermentation is a process in which sugars are used to generate energy for living cells. This energy is obtained without the need for O2, since fermentation uses an anaerobic pathway; it thus represents an alternative way to obtain energy. Fermenting microorganisms and their by-products define the fermentation type.
Summary
Fermentation is a metabolic process that produces chemical changes in organic substances through the action of enzymes. In biochemistry, it is narrowly defined as the extraction of energy from carbohydrates in the absence of oxygen. In food production, it may more broadly refer to any process in which the activity of microorganisms brings about a desirable change to a foodstuff or beverage. The science of fermentation is known as zymology.
In microorganisms, fermentation is the primary means of producing adenosine triphosphate (ATP) by the degradation of organic nutrients anaerobically.
Humans have used fermentation to produce foodstuffs and beverages since the Neolithic age. For example, fermentation is used for preservation in a process that produces lactic acid found in such sour foods as pickled cucumbers, kombucha, kimchi, and yogurt, as well as for producing alcoholic beverages such as wine and beer. Fermentation also occurs within the gastrointestinal tracts of all animals, including humans.
Industrial fermentation is a broader term used for the process of applying microbes for the large-scale production of chemicals, biofuels, enzymes, proteins and pharmaceuticals.
Definitions and etymology
Below are some definitions of fermentation ranging from informal, general usages to more scientific definitions.
i) Preservation methods for food via microorganisms (general use).
ii) Any large-scale microbial process occurring with or without air (common definition used in industry, also known as industrial fermentation).
iii) Any process that produces alcoholic beverages or acidic dairy products (general use).
iv) Any energy-releasing metabolic process that takes place only under anaerobic conditions (somewhat scientific).
v) Any metabolic process that releases energy from a sugar or other organic molecule, does not require oxygen or an electron transport system, and uses an organic molecule as the final electron acceptor (most scientific).
The word "ferment" is derived from the Latin verb fervere, which means to boil. It is thought to have been first used in the late 14th century in alchemy, but only in a broad sense. It was not used in the modern scientific sense until around 1600.
Details
Fermentation is a chemical process by which molecules such as glucose are broken down anaerobically. More broadly, fermentation is the foaming that occurs during the manufacture of wine and beer, a process at least 10,000 years old. The frothing results from the evolution of carbon dioxide gas, though this was not recognized until the 17th century. French chemist and microbiologist Louis Pasteur in the 19th century used the term fermentation in a narrow sense to describe the changes brought about by yeasts and other microorganisms growing in the absence of air (anaerobically); he also recognized that ethyl alcohol and carbon dioxide are not the only products of fermentation.
Anaerobic breakdown of molecules
In the 1920s it was discovered that, in the absence of air, extracts of muscle catalyze the formation of lactate from glucose and that the same intermediate compounds formed in the fermentation of grain are produced by muscle. An important generalization thus emerged: that fermentation reactions are not peculiar to the action of yeast but also occur in many other instances of glucose utilization.
Glycolysis, the breakdown of sugar, was originally defined about 1930 as the metabolism of sugar into lactate. It can be further defined as that form of fermentation, characteristic of cells in general, in which the six-carbon sugar glucose is broken down into two molecules of the three-carbon organic acid, pyruvic acid (the nonionized form of pyruvate), coupled with the transfer of chemical energy to the synthesis of adenosine triphosphate (ATP). The pyruvate may then be oxidized, in the presence of oxygen, through the tricarboxylic acid cycle, or in the absence of oxygen, be reduced to lactic acid, alcohol, or other products. The sequence from glucose to pyruvate is often called the Embden–Meyerhof pathway, named after two German biochemists who in the late 1920s and ’30s postulated and analyzed experimentally the critical steps in that series of reactions.
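As a concrete summary of the pathway just described, the net reactions can be written as follows. These are standard textbook equations added here for illustration; they do not appear in the original text.
Glycolysis (net): \mathrm{C_6H_{12}O_6} + 2\,\mathrm{NAD^+} + 2\,\mathrm{ADP} + 2\,\mathrm{P_i} \longrightarrow 2\,\mathrm{pyruvate} + 2\,\mathrm{NADH} + 2\,\mathrm{H^+} + 2\,\mathrm{ATP} + 2\,\mathrm{H_2O}
Lactic acid fermentation (overall): \mathrm{C_6H_{12}O_6} \longrightarrow 2\,\mathrm{CH_3CH(OH)COOH}
Alcoholic fermentation (overall): \mathrm{C_6H_{12}O_6} \longrightarrow 2\,\mathrm{C_2H_5OH} + 2\,\mathrm{CO_2}
In both fermentations the NADH generated in glycolysis is reoxidized to NAD+ by reducing pyruvate, which is what allows the pathway to keep running in the absence of oxygen.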
The term fermentation now denotes the enzyme-catalyzed, energy-yielding pathway in cells involving the anaerobic breakdown of molecules such as glucose. In most cells the enzymes occur in the soluble portion of the cytoplasm. The reactions leading to the formation of ATP and pyruvate thus are common to sugar transformation in muscle, yeasts, some bacteria, and plants.
Industrial fermentation
Industrial fermentation processes begin with suitable microorganisms and specified conditions, such as careful adjustment of nutrient concentration. The products are of many types: alcohol, glycerol, and carbon dioxide from yeast fermentation of various sugars; butyl alcohol, acetone, lactic acid, monosodium glutamate, and acetic acid from various bacteria; and citric acid, gluconic acid, and small amounts of antibiotics, vitamin B12, and riboflavin (vitamin B2) from mold fermentation. Ethyl alcohol produced via the fermentation of starch or sugar is an important source of liquid biofuel.
Additional Information
Fermentation Definition:
What is fermentation? Fermentation is the breaking down of sugar molecules into simpler compounds to produce substances that can be used in making chemical energy. Chemical energy, typically in the form of ATP, is important as it drives various biological processes. Fermentation does not use oxygen; thus, it is “anaerobic”.
Apart from fermentation, living things produce chemical energy by degrading sugar molecules (e.g. glucose) through aerobic respiration and anaerobic respiration. Aerobic respiration uses oxygen, hence the term "aerobic". It has three major steps. First, it begins with glycolysis, wherein the 6-carbon sugar molecule is split into two 3-carbon pyruvate molecules. Next, each pyruvate is converted into acetyl coenzyme A and broken down to CO2 through the citric acid cycle. Along with this, hydrogen atoms and electrons from the carbon molecules are transferred to the electron-carrier molecules NADH and FADH2. Then, these electron carriers shuttle the high-energy electrons to the electron transport chain to harness the energy and synthesize ATP; the final electron acceptor in the chain is oxygen. Anaerobic respiration does not require oxygen. However, it is similar to aerobic respiration in that electrons are passed along an electron transport chain to a final electron acceptor. In anaerobic respiration, the bottom of the chain is not oxygen but another molecule, for example a sulfate ion or a nitrate ion.
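For comparison with fermentation, the overall reaction of aerobic respiration can be summarized as below. The ATP yield is the commonly quoted textbook range (estimates vary with the accounting used) and is added here for illustration, not taken from the original text.
Aerobic respiration (overall): \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \longrightarrow 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \quad (\approx 30\text{--}38\ \mathrm{ATP\ per\ glucose})
Fermentation, by contrast, nets only the 2 ATP produced in glycolysis.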
1614) Emergency
Gist
1. An unforeseen combination of circumstances or the resulting state that calls for immediate action.
2. An urgent need for assistance or relief.
Details
An emergency is an urgent, unexpected, and usually dangerous situation that poses an immediate risk to health, life, property, or environment and requires immediate action. Most emergencies require urgent intervention to prevent a worsening of the situation, although in some situations, mitigation may not be possible and agencies may only be able to offer palliative care for the aftermath.
While some emergencies are self-evident (such as a natural disaster that threatens many lives), many smaller incidents require that an observer (or affected party) decide whether it qualifies as an emergency. The precise definition of an emergency, the agencies involved and the procedures used, vary by jurisdiction, and this is usually set by the government, whose agencies (emergency services) are responsible for emergency planning and management.
Defining an emergency
An incident is an emergency if it:
* Poses an immediate threat to life, health, property, or environment
* Has already caused loss of life, health detriments, property damage, or environmental damage
* Has a high probability of escalating to cause immediate danger to life, health, property, or environment
In the United States, most states mandate that a notice be printed in each telephone book requiring a person to relinquish use of a shared telephone line (such as a party line) if someone requests it in order to report an emergency. State statutes typically define an emergency as "...a condition where life, health, or property is in jeopardy, and the prompt summoning of aid is essential."
Whilst most emergency services agree on protecting human health, life and property, the environmental impacts are not considered sufficiently important by some agencies. This also extends to areas such as animal welfare, where some emergency organizations cover this element through the "property" definition, where animals owned by a person are threatened (although this does not cover wild animals). This means that some agencies do not mount an "emergency" response where it endangers wild animals or environment, though others respond to such incidents (such as oil spills at sea that threaten marine life). The attitude of the agencies involved is likely to reflect the predominant opinion of the government of the area.
Types of emergency:
Dangers to life
Many emergencies cause an immediate danger to the life of people involved. This can range from emergencies affecting a single person, such as the entire range of medical emergencies including heart attacks, strokes, cardiac arrest and trauma, to incidents that affect large numbers of people such as natural disasters including tornadoes, hurricanes, floods, earthquakes, mudslides and outbreaks of diseases such as coronavirus, cholera, Ebola, and malaria.
Most agencies consider these the highest priority emergency, which follows the general school of thought that nothing is more important than human life.
Dangers to health
Some emergencies are not necessarily immediately threatening to life, but might have serious implications for the continued health and well-being of a person or persons (though a health emergency can subsequently escalate to life-threatening).
The causes of a health emergency are often very similar to the causes of an emergency threatening to life, which includes medical emergencies and natural disasters, although the range of incidents that can be categorized here is far greater than those that cause a danger to life (such as broken limbs, which do not usually cause death, but immediate intervention is required if the person is to recover properly). Many life emergencies, such as cardiac arrest, are also health emergencies.
Dangers to the environment
Some emergencies do not immediately endanger life, health or property, but do affect the natural environment and creatures living within it. Not all agencies consider this a genuine emergency, but it can have far-reaching effects on animals and the long term condition of the land. Examples would include forest fires and marine oil spills.
Systems of classifying emergencies
Agencies across the world have different systems for classifying incidents, but all of them serve to help allocate finite resources by prioritising between different emergencies.
The first stage of any classification is likely to define whether the incident qualifies as an emergency, and consequently whether it warrants an emergency response. Some agencies may still respond to non-emergency calls, depending on their remit and the availability of resources. An example of this would be a fire department responding to help retrieve a cat from a tree, where no life, health or property is immediately at risk.
Following this, many agencies assign a sub-classification to the emergency, prioritising incidents that have the most potential for risk to life, health or property (in that order). For instance, many ambulance services use a system called the Advanced Medical Priority Dispatch System (AMPDS) or a similar solution. The AMPDS categorises all calls to the ambulance service using it as either 'A' category (immediately life-threatening), 'B' category (immediately health-threatening) or 'C' category (non-emergency call that still requires a response). Some services have a fourth category, where they believe that no response is required after clinical questions are asked.
Another system for prioritizing medical calls is known as Emergency Medical Dispatch (EMD). Jurisdictions that use EMD typically assign a code of "alpha" (low priority), "bravo" (medium priority), "charlie" (requiring advanced life support), "delta" (high priority, requiring advanced life support) or "echo" (maximum possible priority, e.g., witnessed cardiac arrests) to each inbound request for service; these codes are then used to determine the appropriate level of response.
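As an illustration of how such priority codes might be handled in software, here is a minimal Python sketch. The alpha-to-echo tiers follow the description above, but the dictionary, response descriptions, and dispatch rule are hypothetical examples for illustration, not any agency's actual protocol.

# Hypothetical sketch of EMD-style priority handling (illustrative only).
EMD_PRIORITIES = {
    "alpha":   {"rank": 1, "response": "basic life support, no lights or sirens"},
    "bravo":   {"rank": 2, "response": "basic life support, lights and sirens"},
    "charlie": {"rank": 3, "response": "advanced life support"},
    "delta":   {"rank": 4, "response": "advanced life support, highest routine priority"},
    "echo":    {"rank": 5, "response": "nearest available resource of any kind"},
}

def compare_calls(code_a: str, code_b: str) -> str:
    """Return whichever code should be dispatched first (higher rank wins)."""
    a = EMD_PRIORITIES[code_a.lower()]
    b = EMD_PRIORITIES[code_b.lower()]
    return code_a if a["rank"] >= b["rank"] else code_b

print(compare_calls("bravo", "delta"))   # -> delta (higher priority)

In a real dispatch centre the determinant code also encodes the complaint type and sub-determinant, so this mapping is a deliberate simplification.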
Other systems (especially as regards major incidents) use objective measures to direct resources. Two such systems are SAD CHALET and ETHANE, which are both mnemonics to help emergency services staff classify incidents and direct resources. Each of these acronyms helps ascertain the number of casualties (usually including the number of dead and number of non-injured people involved), how the incident has occurred, and what emergency services are required.
Agencies involved in dealing with emergencies
Most developed countries have a number of emergency services operating within them, whose purpose is to provide assistance in dealing with any emergency. They are often government operated, paid for from tax revenue as a public service, but in some cases, they may be private companies, responding to emergencies in return for payment, or they may be voluntary organisations, providing the assistance from funds raised from donations.
Most developed countries operate three core emergency services:
Police – handle mainly crime-related emergencies.
Fire – handle fire-related emergencies and usually possess secondary rescue duties.
Medical – handle medical-related emergencies.
There may also be a number of specialized emergency services, which may be a part of one of the core agencies, or may be separate entities who assist the main agencies. This can include services, such as bomb disposal, search and rescue, and hazardous material operations.
The Military and the Amateur Radio Emergency Service (ARES) or Radio Amateur Civil Emergency Service (RACES) help in large emergencies such as a disaster or major civil unrest.
Summoning emergency services
Most countries have an emergency telephone number, also known as the universal emergency number, which can be used to summon the emergency services to any incident. This number varies from country to country (and in some cases by region within a country), but in most cases, they are in a short number format, such as 911 (United States and many parts of Canada), 999 (United Kingdom), 112 (Europe) and 000 (Australia).
The majority of mobile phones also dial the emergency services, even if the phone keyboard is locked, or if the phone has an expired or missing SIM card, although the provision of this service varies by country and network.
Civil emergency services
In addition to those services provided specifically for emergencies, there may be a number of agencies who provide an emergency service as an incidental part of their normal 'day job' provision. This can include public utility workers, such as in provision of electricity or gas, who may be required to respond quickly, as both utilities have a large potential to cause danger to life, health and property if there is an infrastructure failure.
Domestic emergency services
Generally perceived as pay-per-use emergency services, domestic emergency services are small, medium or large businesses that tend to emergencies within the boundaries of their licensing or capabilities. These tend to consist of emergencies where health or property is perceived to be at risk but which may not qualify for an official emergency response. Domestic emergency services are in principle similar to civil emergency services, in that public or private utility workers will perform corrective repairs to essential services and make their service available at all times; however, these come at a cost to the customer. An example would be an emergency plumber.
Emergency action principles (EAP)
Emergency action principles are key 'rules' that guide the actions of rescuers and potential rescuers. Because of the inherent nature of emergencies, no two are likely to be the same, so emergency action principles help to guide rescuers at incidents, by sticking to some basic tenets.
The adherence to (and contents of) the principles by would-be rescuers varies widely based on the training the people involved in emergency have received, the support available from emergency services (and the time it takes to arrive) and the emergency itself.
Key emergency principle
The key principle taught in almost all systems is that the rescuer, whether a lay person or a professional, should assess the situation for danger.
The reason that an assessment for danger is given such high priority is that it is core to emergency management that rescuers do not become secondary victims of any incident, as this creates a further emergency that must be dealt with.
A typical assessment for danger would involve observation of the surroundings, starting with the cause of the accident (e.g. a falling object) and expanding outwards to include any situational hazards (e.g. fast moving traffic) and history or secondary information given by witnesses, bystanders or the emergency services (e.g. an attacker still waiting nearby).
Once a primary danger assessment has been completed, this should not end the system of checking for danger, but should inform all other parts of the process.
If at any time the risk from any hazard poses a significant danger (as a factor of likelihood and seriousness) to the rescuer, they should consider whether they should approach the scene (or leave the scene if appropriate).
Managing an emergency
There are many emergency services protocols that apply in an emergency, which usually start with planning before an emergency occurs. One commonly used model describes the phases as a cycle of preparedness, response, recovery, and mitigation.
The planning phase starts at preparedness, where the agencies decide how to respond to a given incident or set of circumstances. This should ideally include lines of command and control, and division of activities between agencies. This avoids potentially negative situations such as three separate agencies all starting an official emergency shelter for victims of a disaster.
Following an emergency occurring, the agencies then move to a response phase, where they execute their plans, and may end up improvising some areas of their response (due to gaps in the planning phase, which are inevitable due to the individual nature of most incidents).
Agencies may then be involved in recovery following the incident, where they assist in the clear up from the incident, or help the people involved overcome their mental trauma.
The final phase in the circle is mitigation, which involves taking steps to ensure no re-occurrence is possible, or putting additional plans in place to ensure less damage is done. This should feed back into the preparedness stage, with updated plans in place to deal with future emergencies, thus completing the circle.
State of emergency
In the event of a major incident, such as civil unrest or a major disaster, many governments maintain the right to declare a state of emergency, which gives them extensive powers over the daily lives of their citizens and may include temporary curtailment of certain civil rights, including the right to trial. For instance, to discourage looting of an evacuated area, a shoot-on-sight policy, however unlikely to occur, may be publicized.
Additional Information
Emergency medicine is a medical specialty emphasizing the immediacy of treatment of acutely ill or injured individuals.
Among the factors that influenced the growth of emergency medicine was the increasing specialization in other areas of medicine. With the shift away from general practice—especially in urban centres—the emergency room became for many, in effect, a primary source of health care. Another factor was the adoption of a number of standard emergency procedures—such as immediate paramedic attention to severe wounds and the rapid transportation of the ill or injured to a hospital—that had evolved in the military medical corps; as used in the civilian hospital, these techniques resulted in such measures as the training of paramedics and the development of the hospital emergency room as a major trauma centre.
Together these factors led to a greatly increased demand for emergency services and in the early 1960s led to the full-time staffing of hospital emergency rooms. The physicians who led the emergency-room team, once recruited from other specialties, felt an increasing demand for training in the management of both major traumas and a wide range of acute medical problems. Emergency medicine became an officially recognized specialty in 1979. In the following decades, prehospital care benefited from technological advances, particularly in the area of cardiac life-support.
1615) Elixir
Summary
An elixir is a sweet liquid used for medical purposes, to be taken orally and intended to cure one's illness. When used as a pharmaceutical preparation, an elixir contains at least one active ingredient designed to be taken orally.
Etymology
For centuries elixir primarily meant an ingredient used in alchemy, either referring to a liquid which purportedly converts lead to gold, or a substance or liquid which is believed to cure all ills and give eternal life.
Types
Non-medicated elixirs
These are used as solvents or vehicles for the preparation of medicated elixirs. Active ingredients are dissolved in a 15–50% by volume solution of ethyl alcohol:
* aromatic elixirs (USP)
* isoalcoholic elixirs (NF)
* compound benzaldehyde elixirs (NF)
Medicated elixirs
These include:
* antihistaminic elixirs used against allergy, such as chlorpheniramine maleate (USP) or diphenhydramine HCl
* sedative and hypnotic elixirs, the former to induce drowsiness, the latter to induce sleep
* pediatric elixirs such as chloral hydrate
* expectorant elixirs used to facilitate productive cough (i.e. cough with sputum), such as terpin hydrate
East Asian vitamin drinks
Daily non-alcoholic non-caffeinated 'vitamin drinks' have been popular in East Asia since the 1950s, with Oronamin from Otsuka Pharmaceutical perhaps the market leader. Packaged in brown light-proof bottles, these drinks have the reputation of being enjoyed by old men and other health-conscious individuals. Counterparts exist in South Korea and China.
Western energy drinks typically have caffeine and are targeted at a younger demographic, with colorful labels and printed claims of increased athletic/daily performance.
Composition
An elixir is a hydro-alcoholic solution of at least one active ingredient. The alcohol is mainly used to:
* Solubilize the active ingredient(s) and some excipients
* Retard the crystallization of sugar
* Preserve the finished product
* Provide a sharpness to the taste
* Aid in masking the unpleasant taste of the active ingredient(s)
* Enhance the flavor.
The lowest alcohol concentration that will completely dissolve the active ingredient(s) and give a clear solution is generally chosen. High concentrations of alcohol give a burning taste to the final product.
An elixir may also contain the following excipients:
* Sugar and/or sugar substitutes like the sugar polyols glycerol and sorbitol.
* Preservatives like parabens and benzoates and antioxidants like butylated hydroxytoluene (BHT) and sodium metabisulfite.
* Buffering agents
* Chelating agents like sodium ethylenediaminetetraacetic acid (EDTA)
* Flavoring agents and flavor enhancers
* Coloring agents
Storage
Elixirs should be stored in tightly closed, light-resistant containers away from direct heat and sunlight.
Details
Miraculous, magical, and maybe a little mysterious, an elixir is a sweet substance or solution that cures the problem at hand.
Elixir is a word often used with a knowing wink — a sort of overstatement of a product's effectiveness, or a decision maker's policy. With linguistic roots in the long-ago alchemists' search for the philosophers' stone, the word has an element of fantasy to spice up anything, like a remedy for the common cold. The mythic fountain of youth is certainly an elixir, but it can also refer to a real liquid, concept, or plan.
Definitions of elixir:
* a substance believed to cure all ills
* a sweet flavored liquid (usually containing a small amount of alcohol) used in compounding medicines to be taken by mouth in order to mask an unpleasant taste
* hypothetical substance that the alchemists believed to be capable of changing base metals into gold.
1616) Visual Communication
Gist
Visual communication is the use of visual elements to convey ideas and information which include but are not limited to signs, typography, drawing, graphic design, illustration, industrial design, advertising, animation, and electronic resources. Humans have used visual communication since prehistoric times.
Summary
Visual communication is the practice of using visual elements to convey a message, inspire change, or evoke emotion.
It’s one part communication design—crafting a message that educates, motivates, and engages, and one part graphic design—using design principles to communicate that message so that it’s clear and eye-catching.
Effective visual communication should be equally appealing and informative.
Details
Visual communication is the use of visual elements to convey ideas and information which include but are not limited to signs, typography, drawing, graphic design, illustration, industrial design, advertising, animation, and electronic resources. Humans have used visual communication since prehistoric times. Within modern culture, there are several types of visual elements; they consist of objects, models, graphs, diagrams, maps, and photographs. Beyond these elements, there are seven components of visual communication: color, shape, tones, texture, figure-ground, balance, and hierarchy.
Each of these characteristics, elements, and components plays an important role in daily life. Visual communication holds a specific purpose in aspects such as social media, culture, politics, economics, and science. Across these different aspects, visual elements are put to various uses in conveying information. Whether it is advertisements, teaching and learning, or speeches and presentations, they all involve visual aids that communicate a message. The most common visual aids are the chalkboard or whiteboard, poster board, handouts, video excerpts, projection equipment, and computer-assisted presentations.
Overview
The debate about the nature of visual communication dates back thousands of years. Visual communication relies on a collection of activities, communicating ideas, attitudes, and values via visual resources, i.e. text, graphics, or video. The evaluation of a good visual communication design is mainly based on measuring comprehension by the audience, not on personal aesthetic and/or artistic preference, as there are no universally agreed-upon principles of aesthetics. Visual communication by e-mail, a textual medium, is commonly expressed with ASCII art, emoticons, and embedded digital images. Visual communication has become one of the most important approaches through which people communicate and share information.
The term 'visual presentation' is used to refer to the actual presentation of information through a visible medium such as text or images. Recent research in the field has focused on web design and graphically-oriented usability.
Important figures
Aldous Huxley is regarded as one of the most prominent explorers of visual communication and sight-related theories.[10] Becoming near-blind in his teen years as the result of an illness influenced his approach, and his work includes important novels on the dehumanizing aspects of scientific progress, most famously Brave New World, as well as the book The Art of Seeing. He described "seeing" as being the sum of sensing, selecting, and perceiving. One of his most famous quotes is "The more you see, the more you know."
Max Wertheimer is said to be the father of Gestalt psychology. Gestalt means form or shape in German, and the study of Gestalt psychology emphasizes simplicity, with its principles grouping visuals by similarity in shape or color, continuity, and proximity. Additional laws include the closure and figure-ground principles.
Image analysis
Visual communication contains image aspects. The interpretation of images is subjective, and understanding the depth of meaning, or multiple meanings, communicated in an image requires image analysis. Images can be analyzed through many perspectives, for example the six major perspectives presented by Paul Martin Lester: Personal, Historical, Technical, Ethical, Cultural, and Critical.
* Personal perspective: When a viewer has an opinion about an image based on their personal thoughts. A personal response depends on the individual viewer's thoughts and values, and these might sometimes conflict with cultural values. Also, once a viewer has formed a personal view of an image, it is hard to change that view, even though the image can be seen in other ways.
* Historical perspective: An image's interpretation can arise from the history of the use of media. Over time, images have changed because of the use of different (new) media. For example, the result of using the computer to edit images (e.g. in Photoshop) is quite different from images made and edited by hand.
* Technical perspective: When the view of an image is influenced by the use of light, position and the presentation of the image. The right use of light, position and presentation can improve the view of the image, making it look better than it does in reality.
* Ethical perspective: From this perspective, the maker of the image, the viewer and the image itself must be responsible morally and ethically to the image. This perspective is also categorized in six categories: categorical imperative, utilitarianism, hedonism, golden mean, golden rule, and veil of ignorance.
* Cultural perspective: Symbolization is an important concept for this perspective. The cultural perspective involves the identity of symbols. The use of words related to the image, the use of heroes in the image, and so on are the symbolization of the image. The cultural perspective can also be seen as the semiotic perspective.
* Critical perspective: In the critical perspective, viewers criticize the images, but the criticisms are made in the interests of society, even though they are made by an individual. In this way the critical perspective differs from the personal perspective.
Visual aid media: Simple to advanced
Chalkboard or whiteboard: Chalkboards and whiteboards are very useful visual aids, particularly when more advanced types of media are unavailable. They are cheap and also allow for much flexibility. The use of chalkboards or whiteboards is convenient, but they are not a perfect visual aid. Often, using this medium as an aid can create confusion or boredom. In particular, if a student who is not familiar with how to properly use visual aids attempts to draw on a board while speaking, they detract time and attention from their actual speech.
Poster board: A poster is a very simple and easy visual aid. Posters can display charts, graphs, pictures, or illustrations. The biggest drawback of using a poster as a visual aid is that often a poster can appear unprofessional. Since a poster board paper is relatively flimsy, often the paper will bend or fall over. The best way to present a poster is to hang it up or tape it to a wall.
Handouts: Handouts can also display charts, graphs, pictures, or illustrations. An important aspect of the use of a handout is that a person can keep it long after the presentation is over, which can help them better remember what was discussed. Passing out handouts, however, can be extremely distracting. Once a handout is given out, it might be difficult to bring back the audience's attention, as the person who receives the handout might be tempted to read what is on the paper rather than listen to what the speaker is saying. If using a handout, the speaker should distribute it right before referencing it. Distributing handouts is acceptable in a lecture of an hour or two, but in a short lecture of five to ten minutes, a handout should not be used.
Video excerpts: A video can be a great visual aid and attention grabber; however, a video is not a replacement for an actual speech. There are several potential drawbacks to playing a video during a speech or lecture. First, if a video is playing that includes audio, the speaker will not be able to talk. Also, if the video is very exciting and interesting, it can make what the speaker is saying appear boring and uninteresting. The key to showing a video during a presentation is to make sure to transition smoothly into the video and to only show very short clips.
Projection equipment: There are several types of projectors. These include slide projectors, overhead projectors, and computer projectors. Slide projectors are the oldest form of projector and are no longer used. Overhead projectors are still used but are somewhat inconvenient: a transparency must be made of whatever is being projected onto the screen, which takes time and costs money. Computer projectors are the most technologically advanced projectors. When using a computer projector, pictures and slides are easily taken right from a computer, either online or from a saved file, and are blown up and shown on a large screen. Though computer projectors are technologically advanced, they are not always completely reliable, because technological breakdowns are not uncommon with today's computers.
Computer-assisted presentations: Presentations through presentation software can be an extremely useful visual aid, especially for longer presentations. For five- to ten-minute presentations, it is probably not worth the time or effort to put together a deck of slides. For longer presentations, however, they can be a great way to keep the audience engaged and keep the speaker on track. A potential drawback of using them is that it usually takes a lot of time and energy to put together. There is also the possibility of a computer malfunction, which can mess up the flow of a presentation.
Components
Components of visualization make communicating information more intriguing and compelling. The following components are the foundation for communicating visually. Hierarchy is an important principle because it assists the audience in processing the information by allowing them to follow the visuals piece by piece. A focal point on a visual aid (e.g. a website, social media post, or poster) can serve as a starting point to guide the audience. In order to achieve hierarchy, we must take into account the other components: color, shape, tone, texture, figure-ground, and balance.
Color is the first and most important component when communicating through visuals. Color displays a deep connection between emotions and experiences. Additive and subtractive color models help in visually communicating aesthetically pleasing information. The additive color model, also known as the RGB model (red, green, blue), builds from dark to light, while the subtractive color model is the opposite: it uses the primary CMYK colors (cyan, magenta, yellow, black), which go from light to dark. Shape is the next fundamental component; it assists in creating a symbol that builds a connection with the audience. There are two categories that shapes can fall under: organic or biomorphic shapes, and geometric or rectilinear shapes. Organic or biomorphic shapes depict natural forms (including curvy lines), while geometric or rectilinear shapes are created by man (including triangles, rectangles, ovals, and circles).
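To make the additive/subtractive relationship concrete, here is a minimal Python sketch of the naive, profile-free conversion from an RGB colour to CMYK. The function name and example values are illustrative only and are not part of the original text; real print workflows use colour profiles rather than this simple formula.

# Simplified sketch: additive RGB -> subtractive CMYK (naive textbook conversion).
def rgb_to_cmyk(r: int, g: int, b: int):
    """Convert 8-bit RGB values (0-255) to CMYK fractions (0.0-1.0)."""
    r_, g_, b_ = r / 255.0, g / 255.0, b / 255.0
    k = 1.0 - max(r_, g_, b_)        # black: how far the colour is from white
    if k == 1.0:                     # pure black: avoid division by zero
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r_ - k) / (1.0 - k)
    m = (1.0 - g_ - k) / (1.0 - k)
    y = (1.0 - b_ - k) / (1.0 - k)
    return c, m, y, k

print(rgb_to_cmyk(255, 0, 0))        # pure red -> (0.0, 1.0, 1.0, 0.0)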
Tone refers to the difference in color intensity, meaning more light or dark. The purpose of achieving a certain tone is to put a spotlight on a graphical presentation and emphasize the information. Similarly, texture can enhance the viewer's visual experience and create a more personal feel compared to a corporate feel. Texture refers to the surface of an object, whether it is 2-D or 3-D, that can amplify a user's content.
Figure-ground is the relationship between a figure and the background; in other words, the relationship between shapes, objects, type, etc. and the space they are in. We can look at the figure as the positive space and the ground as the negative space. Positive space is occupied by the objects that hold visual dominance, while negative space (as mentioned previously) is the background. In addition to creating a strong contrast in color, texture, and tone, figure-ground can highlight different figures. As for balance, it is important to have symmetrical or asymmetrical balance in visual communication. Symmetrical balance holds a stable composition and is proper for conveying informative visual communication. In asymmetrical balance, the visuals are weighted more to one side; for instance, one color is weighted more heavily than another, whereas in symmetrical balance all colors are equally weighted.
Prominence and motive:
Social media
Social media is one of the most effective ways to communicate. The incorporation of text and images delivers messages more quickly and simply through social media platforms. A potential drawback is limited access, owing to the requirement for internet access and to limits on the number of characters and on image size. Despite this, there has been a shift towards more visual images with the rise of YouTube, Instagram, and Snapchat. With the rise of these platforms, Facebook and Twitter have followed suit and integrated more visual images into their platforms beyond written posts. It can be stated that visual images are used in two ways: as additional clarification for spoken or written text, or to create individual meaning (usually incorporating ambiguous meanings). These meanings can assist in creating casual friendships through interactions and can either show or fabricate reality. These major platforms are becoming focused on visual images, growing into multi-modal platforms in which users can edit or adjust their pictures or videos. When analyzing the relationship between visual communication and social media, four themes arise:
* Emerging genres and practices: The sharing of various visual elements allows for the creation of genres, or new arrangements of socially accepted visual elements (i.e. photographs or GIFs), based on the platforms. These emerging genres are used for self-expression of identity, to feel a sense of belonging to different sub-groups of the online community.
* Identity construction: Similar to genres, users will use visuals through social media to express their identities. Visual elements can change in meaning over a period of time by the person who shared it, which means that visual elements can be dynamic. This makes visuals uncontrollable since the person may not identify as that specific identity, but rather someone who has evolved.
* Everyday public/private vernacular practices: This theme presents the difficulty of deciphering what is considered public or private. Users can post in privacy from their own home; however, their post interacts with users from the online public.
* Transmedia circulation, appropriation, and control: Transmedia circulation refers to visual elements being circulated through different types of media. Visual elements such as images can be taken from one platform, edited, and posted to another platform without acknowledging where they originally came from. The concepts of appropriation and ownership can be brought into question, raising the awareness that if a user can appropriate another person's work, then that user's own work can be appropriated as well.
Culture
Members of different cultures can participate in the exchange of visual imagery based on the idea of universal understandings. The term visual culture allows all cultures to feel equal, making visual communication an inclusive aspect of everyday life. Visual culture in communication is shaped by the values of all cultures, especially regarding the concepts of high and low context. Cultures that are generally more high-context will rely heavily on visual elements that have an implied and implicit meaning. However, cultures that are low-context will rely on visual elements that have a direct meaning and rely more on textual explanations.
Politics
Visual communication in politics has become a primary channel of communication, while dialogue and text have become secondary. This may be due to the increased use of television, as viewers become more dependent on visuals. The sound bite has become a popular and perfected art among political figures. Although it is a favored mode of showcasing a political figure's agenda, it has been shown that 25.1% of news coverage displayed image bites: instead of voices, there are images and short videos. Visuals are deemed an essential function in political communication, and behind these visuals are ten functions explaining why political figures use them. These functions include:
* Argument function: Although images do not indicate any words being said, this function conveys the idea that images can have an association between objects or ideas. Visuals in politics can make arguments about the different aspects of a political figure's character or intentions. When introducing visual imagery with sound, the targeted audience can clarify ambiguous messages that a political figure has said in interviews or news stories.
* Agenda setting function: Under this function, it is important that political figures produce newsworthy pictures that will allow their message to gain coverage. The reason for this is agenda-setting theory, whereby the importance of the public agenda is taken into consideration when the media determine the importance of a certain story or issue. With that said, if politicians do not provide an interesting and attention-grabbing picture, there will likely be no news coverage. One way for a politician to gain news coverage is to provide exclusivity for what the media can capture from a certain event. Despite not having the ability to control whether they receive coverage, they can control whether the media gets an interesting and eye-catching visual.
* Dramatization function: Similar to agenda setting, the dramatization function targets a specific policy that a political figure wants to advocate for. This function can be seen when Michelle Obama promoted nutrition by hosting a media event in which she planted a vegetable garden, or when Martin Luther King Jr. produced visuals from his 1963 campaign against racial injustice. In some cases, these images are used as icons for social movements.
* Emotional function: Visuals can be used as a way to provoke an emotional response. A study found that motion pictures and video have more of an emotional impact than still images. On the other hand, research has suggested that the logic and rationality of a viewer are not barred by emotion. In fact, logic and emotion are interrelated, meaning that images can not only arouse emotion but also influence viewers to think logically.
* Image-building function: Imagery gives viewers a first impression of a candidate running for office. These visuals give voters a sense of who they will be voting for, in terms of background, personality, or demeanor. Candidates can build their image by appearing to be family-oriented or religiously involved, or by showing common ground with disadvantaged communities.
* Identification function: Through the identification function, visuals can create identification between political figures and audiences; that is, the audience may perceive some similarity with the political figure. When voters find a similarity with a candidate, they are more likely to vote for that candidate; conversely, when they perceive no similarities, they are less likely to do so.
* Documentation function: Much as a stamp in a passport indicates that you have been to a certain country, photographs of a political figure document that an event happened and that the figure was there. Documenting an event provides evidence for argumentative claims: if a political figure claims something, there is evidence to either support or disprove it.
* Societal symbol function: Under this function, political figures use iconic symbols to draw on their emotional power. For instance, political figures will stand with American flags, be photographed with military personnel, or attend sporting events, all of which carry a strong sense of patriotism. Similarly, congressional candidates may be pictured with former or current presidents to gain an implied endorsement. Places such as the Statue of Liberty, Mount Rushmore, and the Tomb of the Unknown Soldier serve as iconic societal symbols that hold emotional power.
* Transportation function: The transportation function of visuals is to transport the viewer to a different time or place. Visuals can figuratively bring viewers to the past or to an idealized future. Political figures use this tactic to appeal to the emotional side of their audience and to get viewers to relate visually to the argument at hand.
* Ambiguity function: Visuals can carry different interpretations without any accompanying words, and for that reason they are often used for controversial arguments. Because visual claims are made without words, they are held to a less strict standard than other symbols.
Economic
Economics has been built on a foundation of visual elements such as graphs and charts. As with other uses of visual elements, economists use graphs to clarify complex ideas. Graphs simplify the task of visualizing trends over time and help determine the relationship between two or more variables, for example whether there is a positive or negative correlation between them. A graph that economists rely on heavily is the time-series graph, which tracks a particular variable over a period of time, with time on the X-axis and the changing variable on the Y-axis.
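As a small illustration of the kind of relationship such a graph can reveal, the following Python sketch builds a toy time series and uses a correlation coefficient to label the relationship between two variables as positive or negative. The years and index values are invented for illustration only.

    # Toy example: two economic indicators tracked over time (invented data).
    import numpy as np

    years = np.arange(2015, 2025)  # time, plotted on the X-axis of a time-series graph
    income = np.array([50, 52, 53, 55, 58, 57, 60, 62, 63, 65], dtype=float)    # hypothetical income index
    spending = np.array([40, 41, 43, 44, 47, 46, 49, 50, 52, 53], dtype=float)  # hypothetical spending index

    # The correlation coefficient summarizes the relationship between the two variables:
    # a value above zero indicates a positive correlation, below zero a negative one.
    r = np.corrcoef(income, spending)[0, 1]
    direction = "positive" if r > 0 else "negative"
    print(f"correlation r = {r:.2f} ({direction} correlation over {years[0]}-{years[-1]})")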
Science and medicine
Science and medicine have shown a need for visual communication to help explain concepts to non-scientific readers. From Bohr's atomic model to NASA's photographs of Earth, visual elements have served as tools for furthering the understanding of science and medicine. More specifically, elements such as graphs and slides portray both data and scientific concepts; the patterns those graphs reveal are used together with the data to identify meaningful correlations. Photographs, in turn, can help physicians recognize visible signs of disease and illness.
However, using visual elements can also hinder the understanding of information. Two major obstacles for non-scientific readers are (1) the lack of integration of visual elements into everyday scientific language, and (2) misidentifying the targeted audience and failing to adjust to their level of understanding. One way to tackle these obstacles is for science communicators to place the user at the center of the design, an approach called User-Centered Design, which focuses strictly on the user and on how they can interact with the visual element with minimum stress and maximum efficiency. Another solution can be implemented at the source, in university-based programs: universities can introduce visual literacy to students in science communication, helping to produce graduates who can accurately interpret, analyze, evaluate, and design visual elements that further the understanding of science and medicine.
Additional Information
Visual communication is the practice of using visual elements to convey a message, inspire change, or evoke emotion.
It’s one part communication design—crafting a message that educates, motivates, and engages, and one part graphic design—using design principles to communicate that message so that it’s clear and eye-catching.
Effective visual communication should be equally appealing and informative.
1617) Pheniramine
Gist
Pheniramine is an antihistamine used to treat allergic rhinitis and pruritus. It is sold under brand names including Naphcon A, Opcon-A, Robitussin AC, and Visine-A (generic name pheniramine; DrugBank accession number DB01620). Pheniramine is a first-generation antihistamine in the alkylamine class, similar to brompheniramine and chlorpheniramine.
Summary
Pheniramine (trade name Avil among others) is an antihistamine with anticholinergic properties used to treat allergic conditions such as hay fever or urticaria. It has relatively strong sedative effects, and may sometimes be used off-label as an over-the-counter sleeping pill in a similar manner to other sedating antihistamines such as diphenhydramine. Pheniramine is also commonly found in eyedrops used for the treatment of allergic conjunctivitis.
It was patented in 1948. Pheniramine is generally sold in combination with other medications, rather than as a stand-alone drug, although some formulations are available containing pheniramine by itself.
Side effects
Pheniramine may cause drowsiness or tachycardia, and overdose may lead to sleep disorders.
Overdose may lead to seizures, especially in combination with alcohol.
People taking cortisol long term should avoid pheniramine, as the combination may decrease levels of adrenaline (epinephrine), which may lead to loss of consciousness.
Pheniramine is a deliriant (hallucinogen) in toxic doses. Recreational use of Coricidin for the dissociative (hallucinogenic) effect of its dextromethorphan is hazardous because it also contains chlorpheniramine.
Chemical relatives
Halogenation of pheniramine increases its potency 20-fold. Halogenated derivatives of pheniramine include chlorphenamine, brompheniramine, dexchlorpheniramine, dexbrompheniramine, and zimelidine. Two other halogenated derivatives, fluorpheniramine and iodopheniramine, are currently in use for research on combination therapies for malaria and some cancers.
Other analogs include diphenhydramine, and doxylamine.
Stereoisomerism
Pheniramine contains a stereocenter and can exist as either of two enantiomers. The pharmaceutical drug is a racemate, an equal mixture of the (R)- and (S)-forms.
Details
Pheniramine Uses
Pheniramine is used in the treatment of allergic conditions, respiratory disease with excessive mucus, skin conditions with inflammation and itching, and Meniere's disease.
How Pheniramine works
Pheniramine is an antiallergic medication. It works by blocking the action of a chemical messenger (histamine). This relieves the symptoms of allergic reactions caused by insect bites or stings, certain medicines, and hives (rashes, swelling, etc.).
Additional Information
Pheniramine is used for treating allergic conditions like urticaria or hay fever. It is also used as an over-the-counter sleeping pill similar to other sedating antihistamines. The active constituent of this medication is an alkylamine derivative, a histamine H1-receptor antagonist with anticholinergic and moderate sedative effects. Besides all of its uses, it is also found in eyedrops used to treat allergic conjunctivitis.
Pheniramine is usually marketed in combination with other medications, instead of being sold as a stand-alone drug. For example, Neo Citran contains Pheniramine in its composition.
Dosage includes half to one tablet 3 times daily for adults and half a tablet 3 times daily for children (5 to 10 years of age). Pheniramine is not recommended for children under the age of 5 years. To avoid travel sickness, an individual is advised to take the first dose at least 30 minutes prior to travelling.
Side effects of pheniramine include drowsiness or bradycardia. Overdose may cause sleep disorders and even seizures if taken in combination with alcohol. Taken in combination with cortisol over the long term, it may also lower levels of adrenaline (epinephrine), which may in turn cause loss of consciousness, and the combination should therefore be avoided.
The information given here is based on the salt content of the medicine. Uses and effects of the medicine may vary from person to person. It is advisable to consult an internal medicine specialist before using this medicine.
This medicine is prescribed for the following conditions:
* Allergic Rhinitis
* Common Cold Symptoms
* Motion Sickness
* Urticaria.
1618) Angiography
Gist
Angiography is a type of X-ray used to check blood vessels. Blood vessels do not show clearly on a normal X-ray, so a special dye called a contrast agent needs to be injected into your blood first. This highlights your blood vessels, allowing your doctor to see any problems.
Summary
Angiography, also called arteriography, is a diagnostic imaging procedure in which arteries and veins are examined by using a contrast agent and X-ray technology. Blood vessels cannot be differentiated from the surrounding organs in conventional radiography. It is therefore necessary to inject into the lumen of the vessels a substance that will distinguish them from the surrounding tissues. The contrast medium used is a water-soluble substance containing iodine. On the radiograph, iodine-containing structures cast a denser shadow than do other body tissues. The technique in use today was developed in the early 1950s by Swedish radiologist Sven-Ivar Seldinger.
In a typical angiography procedure, a needle is used to puncture the main artery in the groin, armpit, or crook of the arm and to place a coiled wire in the artery. The needle is withdrawn, and a small flexible hollow tube (catheter) is passed over the wire and into the artery. The wire is removed, and contrast medium is injected through the catheter. Both the arteries and the structures they supply with blood can then be visualized.
A technique called digital subtraction angiography (DSA) is particularly useful in diagnosing arterial occlusion (blockage). For example, it can be used to identify constriction (stenosis) of the carotid artery or clot formation (thrombosis) in a pulmonary artery. It also can be used to detect renal vascular disease. After contrast material is injected into an artery or vein, a physician produces fluoroscopic images. Using these digitized images, a computer subtracts the preinjection image, made without contrast material, from the postinjection image made with it, producing an image that allows the dye in the arteries to be seen more clearly. In this manner, the images arising from soft tissues, bones, and gas are the same in both the initial and the subsequent image and are thereby eliminated by the subtraction process. The remaining image of the blood vessels containing the contrast material is thus more prominent.
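The subtraction step can be sketched in a few lines of Python. The arrays below are synthetic stand-ins for a pre-injection mask image and a post-injection image, not real fluoroscopic data; the point is only to show how structures common to both images cancel out.

    # Minimal sketch of digital subtraction angiography (DSA) with synthetic images.
    import numpy as np

    # Pre-injection "mask" image: bone and soft tissue only (invented pixel values).
    mask = np.array([[10, 80, 10],
                     [10, 80, 10],
                     [10, 80, 10]], dtype=float)

    # Post-injection image: same anatomy plus a contrast-filled vessel in the first column.
    post = np.array([[60, 80, 10],
                     [65, 80, 10],
                     [60, 80, 10]], dtype=float)

    # Subtracting the mask removes everything present in both images (bone, soft tissue, gas),
    # leaving only the contrast-filled vessel.
    vessel_only = post - mask
    print(vessel_only)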
All organs of the body can be examined by using various angiography techniques. Radiographic evaluations of diseased arteries supplying the legs, the brain, and the heart are necessary before corrective surgical procedures are undertaken. See also angiocardiography.
Details
Angiography or arteriography is a medical imaging technique used to visualize the inside, or lumen, of blood vessels and organs of the body, with particular interest in the arteries, veins, and the heart chambers. Modern angiography is performed by injecting a radio-opaque contrast agent into the blood vessel and imaging using X-ray based techniques such as fluoroscopy.
The word itself comes from the Greek words ἀγγεῖον angeion 'vessel' and γράφειν graphein 'to write, record'. The film or image of the blood vessels is called an angiograph, or more commonly an angiogram. Though the word can describe both an arteriogram and a venogram, in everyday usage the terms angiogram and arteriogram are often used synonymously, whereas the term venogram is used more precisely.
The term angiography has been applied to radionuclide angiography and newer vascular imaging techniques such as CO2 angiography, CT angiography and MR angiography. The term isotope angiography has also been used, although this more correctly is referred to as isotope perfusion scanning.
History
The technique was first developed in 1927 by the Portuguese physician and neurologist Egas Moniz at the University of Lisbon to provide contrasted X-ray cerebral angiography for the diagnosis of several kinds of nervous diseases, such as tumors, artery disease, and arteriovenous malformations. Moniz is recognized as the pioneer of this field. He performed the first cerebral angiogram in Lisbon in 1927, and Reynaldo dos Santos performed the first aortogram in the same city in 1929. In fact, many current angiography techniques were developed by the Portuguese at the University of Lisbon; for example, in 1932 Lopo de Carvalho performed the first pulmonary angiogram via venous puncture of an upper limb, and in 1948 the first cavogram was performed by Sousa Pereira. Radial access for angiography can be traced back to 1953, when Eduardo Pereira first cannulated the radial artery to perform a coronary angiogram. With the introduction of the Seldinger technique in 1953, the procedure became markedly safer, as no sharp introductory devices needed to remain inside the vascular lumen.
Technique
Depending on the type of angiogram, access to the blood vessels is gained most commonly through the femoral artery, to look at the left side of the heart and at the arterial system; or the jugular or femoral vein, to look at the right side of the heart and at the venous system. Using a system of guide wires and catheters, a type of contrast agent (which shows up by absorbing the X-rays), is added to the blood to make it visible on the X-ray images.
The X-ray images taken may either be still, displayed on an image intensifier or film, or motion images. For all structures except the heart, the images are usually taken using a technique called digital subtraction angiography or DSA. Images in this case are usually taken at 2–3 frames per second, which allows the interventional radiologist to evaluate the flow of the blood through a vessel or vessels. This technique "subtracts" the bones and other organs so only the vessels filled with contrast agent can be seen. The heart images are taken at 15–30 frames per second, not using a subtraction technique. Because DSA requires the patient to remain motionless, it cannot be used on the heart. Both these techniques enable the interventional radiologist or cardiologist to see stenosis (blockages or narrowings) inside the vessel which may be inhibiting the flow of blood and causing pain.
After the procedure has been completed, if the femoral technique is applied, the site of arterial entry is either manually compressed, stapled shut, or sutured in order to prevent access-site complications.
Uses
Coronary angiography
One of the most common angiograms performed is to visualize the coronary arteries. A long, thin, flexible tube called a catheter is used to administer the X-ray contrast agent at the desired area to be visualized. The catheter is threaded into an artery in the forearm, and the tip is advanced through the arterial system into the major coronary artery. X-ray images of the transient radiocontrast distribution within the blood flowing inside the coronary arteries allow visualization of the size of the artery openings. The presence or absence of atherosclerosis or atheroma within the walls of the arteries cannot be clearly determined.
Coronary angiography can visualize coronary artery stenosis, or narrowing of the blood vessel. The degree of stenosis can be determined by comparing the width of the lumen of narrowed segments of blood vessel with wider segments of adjacent vessel.
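One common way to express that comparison is as a percent diameter stenosis. The sketch below uses that standard convention (it is not spelled out in the text above), with invented diameters purely for illustration.

    # Sketch of grading stenosis by comparing a narrowed segment with an adjacent
    # reference segment; the diameters are invented example values.
    def percent_stenosis(narrowed_diameter_mm: float, reference_diameter_mm: float) -> float:
        """Percent reduction of lumen diameter relative to a nearby normal segment."""
        return (1.0 - narrowed_diameter_mm / reference_diameter_mm) * 100.0

    print(percent_stenosis(1.5, 3.0))  # 50.0, i.e. a 50% diameter stenosis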
Cerebral angiography
Cerebral angiography provides images of blood vessels in and around the brain to detect abnormalities, including arteriovenous malformations and aneurysms. One common cerebral angiographic procedure is neuro-vascular digital subtraction angiography.
Pulmonary angiography
Pulmonary angiography is used to visualise the anatomy of pulmonary vessels.
Peripheral angiography
Angiography is also commonly performed to identify vessel narrowing in patients with leg claudication or cramps caused by reduced blood flow down the legs and to the feet, and in patients with renal artery stenosis (which commonly causes high blood pressure); it can also be used in the head to find and repair the cause of a stroke. These examinations are routinely done through the femoral artery, but can also be performed through the brachial or axillary (arm) artery. Any stenoses found may be treated by balloon angioplasty, stenting, or atherectomy.
Fluorescein angiography
Fluorescein angiography is a medical procedure in which a fluorescent dye is injected into the bloodstream. The dye highlights the blood vessels in the back of the eye so they can be photographed. This test is often used to manage eye disorders.
OCT angiography
Optical coherence tomography (OCT) is a technology that uses near-infrared light to image the eye, in particular to penetrate the retina and view the micro-structure behind the retinal surface. OCT angiography (OCTA) is a method that leverages OCT technology to assess the vascular health of the retina.
Microangiography
Microangiography is commonly used to visualize tiny blood vessels.
Post mortem CT angiography
Post mortem CT angiography for medicolegal cases is a method initially developed by the Virtopsy group. Originating from that project, both watery and oily solutions have been evaluated.
While oily solutions require special deposition equipment to collect waste water, watery solutions seem to be regarded as less problematic. Watery solutions also were documented to enhance post mortem CT tissue differentiation whereas oily solutions were not. Conversely, oily solutions seem to only minimally disturb ensuing toxicological analysis, while watery solutions may significantly impede toxicological analysis, thus requiring blood sample preservation before post mortem CT angiography.
Complications
Angiography is a relatively safe procedure, with mostly minor complications and very few major ones. After an angiogram, there may be some pain at the puncture site, but heart attacks and strokes usually do not occur, as they may in bypass surgery.
Cerebral angiography
Major complications in cerebral angiography, such as digital subtraction angiography or contrast MRI, are also rare but include stroke; an allergic reaction to the anaesthetic, other medication, or the contrast medium; blockage or damage to one of the access veins in the leg; pseudoaneurysm at the puncture site; or thrombosis and embolism formation. Bleeding or bruising at the site where the contrast is injected is a minor complication; delayed bleeding can also occur but is rare.
Additional risks
The contrast medium usually produces a sensation of warmth lasting only a few seconds, which may be felt more strongly in the area of injection. If the patient is allergic to the contrast medium, much more serious side effects are possible; however, with newer contrast agents the risk of a severe reaction is less than one in 80,000 examinations. Additionally, damage to blood vessels can occur at the site of puncture or injection, and anywhere along the vessel during passage of the catheter. If digital subtraction angiography is used instead, the risks are considerably reduced because the catheter does not need to be passed as far into the blood vessels, lessening the chances of damage or blockage.
Infection
Antibiotic prophylaxis may be given for procedures that are not clean, or for clean procedures that result in the generation of infarcted or necrotic tissue, such as embolisation. Routine diagnostic angiography is generally considered a clean procedure. Prophylaxis may also be given to prevent the spread of infection from an infected space into the bloodstream.
Thrombosis
Six risk factors are associated with thrombosis after arterial puncture: low blood pressure, small arterial diameter, multiple puncture attempts, long duration of cannulation, administration of vasopressor/inotropic agents, and the use of catheters with side holes.
1619) Amyotrophic lateral sclerosis (ALS)
Gist
Amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig's Disease, is a rare neurological disease that affects motor neurons—those nerve cells in the brain and spinal cord that control voluntary muscle movement. Voluntary muscles are those we choose to move to produce movements like chewing, walking, and talking.
Summary
Amyotrophic lateral sclerosis (ALS), also called Lou Gehrig disease or motor neuron disease, is a degenerative neurological disorder that causes muscle atrophy and paralysis. The disease usually occurs after age 40; it affects men more often than women. ALS is frequently called Lou Gehrig disease in memory of the famous baseball player Lou Gehrig, who died from the disease in 1941.
Course of the disease
ALS affects the motor neurons—i.e., those neurons that control muscular movements. The disease is progressive, and muscles innervated by degenerating neurons become weak and eventually atrophy. Early symptoms of ALS typically include weakness in the muscles of the legs or arms and cramping or twitching in the muscles of the feet and hands. Speech may be slurred as well. As the disease advances, speech and swallowing become difficult. Later symptoms include severe muscle weakness, frequent falls, breathing difficulty, persistent fatigue, spasticity, and intense twitching. The affected muscles are eventually paralyzed. Death generally results from atrophy or paralysis of the respiratory muscles. Most patients with ALS survive between three and five years after disease onset.
Two rare subtypes of ALS are progressive muscular atrophy and progressive bulbar palsy. Progressive muscular atrophy is a variety of ALS in which the neuron degeneration is most pronounced in the spinal cord. Symptoms are similar to the common form of ALS, though spasticity is absent and muscle weakness is less severe. In addition, individuals with progressive muscular atrophy generally survive longer than those with typical ALS. Progressive bulbar palsy is caused by degeneration of the cranial nerves and brainstem. Chewing, talking, and swallowing are difficult, and involuntary emotional outbursts of laughing, as well as tongue twitching and atrophy, are common. The prognosis is especially grave in this form of ALS.
Causes of ALS
The majority of ALS cases are sporadic (not inherited) and of unknown cause. Approximately 5–10 percent of cases are hereditary; roughly 30 percent of these cases are associated with mutations occurring in genes known as FUS/TLS, TDP43, and SOD1.
Although the mechanisms by which genetic variations give rise to ALS are unclear, it is known that the protein encoded by FUS/TLS plays a role in regulating the translation of RNA to protein in motor neurons. This function is similar to that of the protein encoded by TDP43. Variations in both genes cause an accumulation of proteins in the cytoplasm of neurons, which is suspected to contribute to neuronal dysfunction. Defects in SOD1, which produces an enzyme known as SOD, or superoxide dismutase, appear to facilitate the destruction of motor neurons by harmful molecules known as free radicals (molecular by-products of normal cell metabolism that can accumulate in and destroy cells). ALS-associated mutations in SOD1 result in the inability of the SOD enzyme to neutralize free radicals in neurons.
In 2011 scientists reported the discovery of ALS-associated mutations in a gene known as UBQLN2, which shed light on the pathological process underlying neuronal degeneration in ALS patients. UBQLN2 encodes a protein called ubiquilin 2, which plays an important role in recycling damaged proteins from neurons in the spinal cord and neurons in the cortex and hippocampus of the brain. Similar to mutations in FUS/TLS and TDP43, mutations in UBQLN2 result in the accumulation of proteins in neurons. However, unlike other known molecular pathologies associated with ALS, abnormalities in ubiquilin 2 have been identified in all forms of the disease—sporadic, familial, and ALS/dementia (which targets the brain)—and also have been implicated in other neurodegenerative diseases. The universal nature of ubiquilin 2 abnormalities in ALS suggests that all forms of the disease share a common pathological mechanism.
Diagnosis and treatment
Genetic screening can identify carriers of gene mutations in families with a history of ALS. However, in most cases, diagnosis is based primarily on tests that rule out other neurological disorders, particularly in individuals who do not have a family history of the disease. Urine tests and blood analysis are commonly used when attempting to diagnose ALS. Patients also may undergo electromyography, which records the electrical activity of muscle fibres, and nerve conduction studies, which measure the speed of neuronal conduction and the strength of neuronal signaling. In addition, some patients are examined by means of magnetic resonance imaging (MRI), which can provide information about brain structure and activity.
There is no cure for ALS. However, the progression of the disease can be slowed by treatment with a drug called riluzole. Riluzole is the only drug treatment available specifically for ALS and has been shown to increase survival by about two to three months. A surgical treatment available to patients with advanced disease is tracheostomy, in which an opening is created in the trachea in order to enable connection to a ventilator (breathing machine). Patients also may choose to undergo physical therapy involving exercises to maintain muscle strength. In addition, speech therapy and the use of special computers and speech synthesizers can help maintain or improve communication.
Some persons affected by ALS carry a variation in a gene called KIFAP3 that appears to slow the rate of progression of the disease. In fact, in those persons with ALS who carry this genetic variant, survival may be extended by as much as 40–50 percent.
Details
Amyotrophic lateral sclerosis (ALS), also known as motor neuron disease (MND) or Lou Gehrig's disease, is a neurodegenerative disease that results in the progressive loss of the motor neurons that control voluntary muscles. ALS is the most common form of the motor neuron diseases. Early symptoms of ALS include stiff muscles, muscle twitches, and gradually increasing weakness and muscle wasting. Limb-onset ALS begins with weakness in the arms or legs, while bulbar-onset ALS begins with difficulty speaking or swallowing. Around half of people with ALS develop at least mild difficulties with thinking and behavior, and about 15% develop frontotemporal dementia.[8] Motor neuron loss continues until the ability to eat, speak, move, and finally breathe is lost; the cause of early death is usually respiratory failure.
Most cases of ALS (about 90% to 95%) have no known cause, and are known as sporadic ALS. However, both genetic and environmental factors are believed to be involved. The remaining 5% to 10% of cases have a genetic cause linked to a history of the disease in the family, and these are known as familial ALS. About half of these genetic cases are due to one of two specific genes. The diagnosis is based on a person's signs and symptoms, with testing done to rule out other potential causes.
There is no known cure for ALS. The goal of treatment is to improve symptoms. A medication called riluzole may extend life by about two to three months. Non-invasive ventilation may result in both improved quality and length of life. Mechanical ventilation can prolong survival but does not stop disease progression, with death usually caused by respiratory failure. A feeding tube may help. The disease can affect people of any age, but usually starts around the age of 60. The average survival from onset to death is two to four years, though this can vary, and about 10% survive longer than 10 years.
Descriptions of the disease date back to at least 1824 by Charles Bell. In 1869, the connection between the symptoms and the underlying neurological problems was first described by French neurologist Jean-Martin Charcot, who in 1874 began using the term amyotrophic lateral sclerosis.
Classification
ALS is a motor neuron disease, which is a group of neurological disorders that selectively affect motor neurons, the cells that control voluntary muscles of the body. Other motor neuron diseases include primary lateral sclerosis (PLS), progressive muscular atrophy (PMA), progressive bulbar palsy, pseudobulbar palsy, and monomelic amyotrophy (MMA).
ALS itself can be classified in a few different ways: by how fast the disease progresses; by whether it is familial or sporadic, and by the region first affected. In about 25% of cases, muscles in the face, mouth, and throat are affected first because motor neurons in the part of the brainstem called the medulla oblongata (formerly called the "bulb") start to die first along with lower motor neurons. This form is called "bulbar-onset ALS". In about 5% of cases, muscles in the trunk of the body are affected first. In most cases the disease spreads and affects other spinal cord regions. A few people with ALS have symptoms that are limited to one spinal cord region for at least 12 to 24 months before spreading to a second region; these regional variants of ALS are associated with a better prognosis.
Classical ALS, PLS, and PMA
Typical or "classical" ALS involves neurons in the brain (upper motor neurons) and in the spinal cord (lower motor neurons).
ALS can be classified by the types of motor neurons that are affected. Typical or "classical" ALS involves upper motor neurons in the brain and lower motor neurons in the spinal cord. Primary lateral sclerosis (PLS) involves only upper motor neurons, and progressive muscular atrophy (PMA) involves only lower motor neurons. There is debate over whether PLS and PMA are separate diseases or simply variants of ALS.
Classic ALS accounts for about 70% of all cases of ALS and can be subdivided into limb-onset ALS (also known as spinal-onset) and bulbar-onset ALS. Limb-onset ALS begins with weakness in the arms and legs and accounts for about two-thirds of all classic ALS cases. Bulbar-onset ALS begins with weakness in the muscles of speech, chewing, and swallowing and accounts for the other one-third of cases. Bulbar onset is associated with a worse prognosis than limb-onset ALS; a population-based study found that bulbar-onset ALS has a median survival of 2.0 years and a 10-year survival rate of 3%, while limb-onset ALS has a median survival of 2.6 years and a 10-year survival rate of 13%. A rare variant is respiratory-onset ALS that accounts for about 3% of all cases of ALS, in which the initial symptoms are difficulty breathing (dyspnea) with exertion, at rest, or while lying down (orthopnea). Spinal and bulbar symptoms tend to be mild or absent at the beginning. It is more common in males. Respiratory-onset ALS has the worst prognosis of any ALS variant; in a population-based study, those with respiratory-onset had a median survival of 1.4 years and 0% survival at 10 years.
Primary lateral sclerosis (PLS) accounts for about 5% of all cases of ALS and affects upper motor neurons in the arms and legs. However, more than 75% of people with apparent PLS develop lower motor neuron signs within four years of symptom onset, meaning that a definite diagnosis of PLS cannot be made until then. PLS has a better prognosis than classic ALS, as it progresses slower, results in less functional decline, does not affect the ability to breathe, and causes less severe weight loss.
Progressive muscular atrophy (PMA) accounts for about 5% of all cases of ALS and affects lower motor neurons in the arms and legs. While PMA is associated with longer survival on average than classic ALS, it still progresses to other spinal cord regions over time, eventually leading to respiratory failure and death. Upper motor neuron signs can develop late in the course of PMA, in which case the diagnosis might be changed to classic ALS.
Regional variants
Regional variants of ALS have symptoms that are limited to a single spinal cord region for at least a year; they progress more slowly than classic ALS and are associated with longer survival. Examples include flail arm syndrome, flail leg syndrome, and isolated bulbar ALS. Flail arm syndrome and flail leg syndrome are often considered to be regional variants of PMA because they only involve lower motor neurons. Isolated bulbar ALS can involve upper or lower motor neurons. These regional variants of ALS cannot be diagnosed at the onset of symptoms; a failure of the disease to spread to other spinal cord regions for an extended period of time (at least 12 months) must be observed.
Flail arm syndrome, also called brachial amyotrophic diplegia, is characterized by lower motor neuron damage in the cervical spinal cord only, leading to gradual onset of weakness in the proximal arm muscles and decreased or absent reflexes. Flail leg syndrome, also called leg amyotrophic diplegia, is characterized by lower motor neuron damage in the lumbosacral spinal cord only, leading to gradual onset of weakness in the legs and decreased or absent reflexes. Isolated bulbar ALS is characterized by upper or lower motor neuron damage in the bulbar region only, leading to gradual onset of difficulty with speech (dysarthria) and swallowing (dysphagia); breathing (respiration) is generally preserved, at least initially. Two small studies have shown that people with isolated bulbar ALS may live longer than people with bulbar-onset ALS.
Age of onset
ALS can also be classified based on the age of onset. While the peak age of onset is 58 to 63 for sporadic ALS and 47 to 52 for familial ALS, about 10% of all cases of ALS begin before age 45 ("young-onset" ALS), and about 1% of all cases begin before age 25 (juvenile ALS). People who develop young-onset ALS are more likely to be male, less likely to have bulbar onset of symptoms, and more likely to have a slower progression of disease. Juvenile ALS is more likely to be familial than adult-onset ALS; genes known to be associated with juvenile ALS include ALS2, SETX, SPG11, FUS, and SIGMAR1. Although most people with juvenile ALS live longer than those with adult-onset ALS, some of them have specific mutations in FUS and SOD1 that are associated with a poor prognosis. Late onset (after age 65) is associated with a more rapid functional decline and shorter survival.
Signs and symptoms
The disorder causes muscle weakness, atrophy, and muscle spasms throughout the body due to the degeneration of the upper motor and lower motor neurons. Individuals affected by the disorder may ultimately lose the ability to initiate and control all voluntary movement, although bladder and bowel function and the extraocular muscles (the muscles responsible for eye movement) are usually spared until the final stages of the disease. Sensory nerves and the autonomic nervous system are generally unaffected, meaning the majority of people with ALS maintain hearing, sight, touch, smell, and taste.
Initial symptoms
The start of ALS may be so subtle that the symptoms are overlooked. The earliest symptoms of ALS are muscle weakness or muscle atrophy. Other presenting symptoms include trouble swallowing or breathing, cramping, or stiffness of affected muscles; muscle weakness affecting an arm or a leg; or slurred and nasal speech. The parts of the body affected by early symptoms of ALS depend on which motor neurons in the body are damaged first.
In limb-onset ALS, the first symptoms are in the arms or the legs. If the legs are affected first, people may experience awkwardness, tripping, or stumbling when walking or running; this is often marked by walking with a "dropped foot" that drags gently on the ground. If the arms are affected first, they may experience difficulty with tasks requiring manual dexterity, such as buttoning a shirt, writing, or turning a key in a lock.
In bulbar-onset ALS, the first symptoms are difficulty speaking or swallowing. Speech may become slurred, nasal in character, or quieter. There may be difficulty with swallowing and loss of tongue mobility. A smaller proportion of people experience "respiratory-onset" ALS, where the intercostal muscles that support breathing are affected first.
Over time, people experience increasing difficulty moving, swallowing (dysphagia), and speaking or forming words (dysarthria). Symptoms of upper motor neuron involvement include tight and stiff muscles (spasticity) and exaggerated reflexes (hyperreflexia), including an overactive gag reflex. An abnormal reflex commonly called Babinski's sign also indicates upper motor neuron damage. Symptoms of lower motor neuron degeneration include muscle weakness and atrophy, muscle cramps, and fleeting twitches of muscles that can be seen under the skin (fasciculations). However, twitching is more of a side effect than a diagnostic symptom; it either occurs after or accompanies weakness and atrophy.
Progression
Although the initial symptoms and rate of progression vary from person to person, the disease eventually spreads to unaffected regions and the affected regions become more affected. Most people eventually are not able to walk or use their hands and arms, lose the ability to speak and swallow food and their own saliva, and begin to lose the ability to cough and to breathe on their own. While the disease does not cause pain directly, pain is a symptom experienced by most people with ALS and can take the form of neuropathic pain (pain caused by nerve damage), spasticity, muscle cramps, and nociceptive pain caused by reduced mobility and muscle weakness; examples of nociceptive pain in ALS include contractures (permanent shortening of a muscle or joint), neck pain, back pain, shoulder pain, and pressure ulcers.
The rate of progression can be measured using the ALS Functional Rating Scale - Revised (ALSFRS-R), a 12-item survey instrument administered as a clinical interview or self-reported questionnaire that produces a score between 48 (normal function) and 0 (severe disability); it is the most commonly used outcome measure in clinical trials and is used by doctors to track disease progression. Though the degree of variability is high and a small percentage of people have a much slower course, on average people with ALS lose about 0.9 FRS points per month. A survey-based study among clinicians showed that they rated a 20% change in the slope of the ALSFRS-R as being clinically meaningful.
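As a worked example of these figures (the scores and time span below are invented; only the 0.9 points-per-month average and the 20% slope change come from the text above):

    # Worked example of an ALSFRS-R progression slope.
    def slope_points_per_month(score_start: float, score_end: float, months: float) -> float:
        """Average change in ALSFRS-R points per month (negative values indicate decline)."""
        return (score_end - score_start) / months

    baseline = slope_points_per_month(44.0, 38.6, 6.0)  # -0.9 points/month, the average decline cited above
    slowed = baseline * (1 - 0.20)                       # a 20% shallower slope, the clinically meaningful change
    print(f"baseline slope: {baseline:.2f} points/month")
    print(f"20% slower decline: {slowed:.2f} points/month")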
Disease progression tends to be slower in people who are younger than 40 at onset, are mildly obese, have symptoms restricted primarily to one limb, and those with primarily upper motor neuron symptoms. Conversely, progression is faster and prognosis poorer in people with bulbar-onset ALS, respiratory-onset ALS and frontotemporal dementia.
Late stages
Difficulties with chewing and swallowing make eating very difficult and increase the risk of choking or of aspirating food into the lungs. In later stages of the disorder, aspiration pneumonia can develop, and maintaining a healthy weight can become a significant problem that may require the insertion of a feeding tube. As the diaphragm and the intercostal muscles of the rib cage that support breathing weaken, measures of lung function such as vital capacity and inspiratory pressure diminish. In respiratory-onset ALS, this may occur before significant limb weakness is apparent. The most common causes of death among people with ALS are respiratory failure and pneumonia, and most people with ALS die in their own homes from the former cause, their breathing stopping while they sleep.
Although respiratory support can ease problems with breathing and prolong survival, it does not affect the progression of ALS. Most people with ALS die between two and four years after the diagnosis. Around half of people with ALS die within 30 months of their symptoms beginning, and about 20% of people with ALS live between five and ten years after symptoms begin.[3] Guitarist Jason Becker has lived since 1989 with the disorder, while astrophysicist Stephen Hawking lived for 55 more years following his diagnosis, but they are considered unusual cases.
Cognitive and behavioral symptoms
Cognitive or behavioral dysfunction is present in 30–50% of individuals with ALS. Around half of people with ALS will experience mild changes in cognition and behavior, and 10–15% will show signs of frontotemporal dementia (FTD). Most people with ALS who have normal cognition at the time of diagnosis have preserved cognition throughout the course of their disease; the development of cognitive impairment in those with normal cognition at baseline is associated with a worse prognosis. Repeating phrases or gestures, apathy, and loss of inhibition are frequently reported behavioral features of ALS. Language dysfunction, executive dysfunction, and troubles with social cognition and verbal memory are the most commonly reported cognitive symptoms in ALS; a meta-analysis found no relationship between dysfunction and disease severity. However, cognitive and behavioral dysfunctions have been found to correlate with reduced survival in people with ALS and increased caregiver burden; this may be due in part to deficits in social cognition. About half the people who have ALS experience emotional lability, in which they cry or laugh for no reason; it is more common in those with bulbar-onset ALS.
Cause
Though the exact cause of ALS is unknown, genetic and environmental factors are thought to be of roughly equal importance. The genetic factors are better understood than the environmental factors; no specific environmental factor has been definitively shown to cause ALS. A liability threshold model for ALS proposes that cellular damage accumulates over time due to genetic factors present at birth and exposure to environmental risks throughout life.
Genetics
ALS can be classified as familial or sporadic, depending on whether or not there is a family history of the disease. There is no consensus among neurologists on the exact definition of familial ALS. The strictest definition is that a person with ALS must have two or more first-degree relatives (children, siblings, or parents) who also have ALS. A less strict definition is that a person with ALS must have at least one first-degree or second-degree relative (grandparents, grandchildren, aunts, uncles, nephews, nieces or half-siblings) who also has ALS. Familial ALS is usually said to account for 10% of all cases of ALS, though estimates range from 5%[46] to 20%. Higher estimates use a broader definition of familial ALS and examine the family history of people with ALS more thoroughly.
In sporadic ALS, there is no family history of the disease. Sporadic ALS and familial ALS appear identical clinically and pathologically; about 10% of people with sporadic ALS have mutations in genes that are known to cause familial ALS. In light of these parallels, the term "sporadic ALS" has been criticized as misleading because it implies that cases of sporadic ALS are only caused by environmental factors; the term "isolated ALS" has been suggested as a more accurate alternative.
More than 20 genes have been associated with familial ALS, of which four account for the majority of familial cases: C9orf72 (40%), SOD1 (20%), FUS (1–5%), and TARDBP (1–5%). The genetics of familial ALS are better understood than the genetics of sporadic ALS; as of 2016, the known ALS genes explained about 70% of familial ALS and about 15% of sporadic ALS. Overall, first-degree relatives of an individual with ALS have a 1% risk of developing ALS. ALS has an oligogenic mode of inheritance, meaning that mutations in two or more genes are required to cause disease.
ALS and frontotemporal dementia (FTD) are now considered to be part of a common disease spectrum (FTD–ALS) because of genetic, clinical, and pathological similarities. Genetically, C9orf72 repeat expansions account for about 40% of familial ALS and 25% of familial FTD. Clinically, 50% of people with ALS have some cognitive or behavioral impairments and 5–15% have FTD, while 40% of people with FTD have some motor neuron symptoms and 12.5% have ALS. Pathologically, abnormal aggregations of TDP-43 protein are seen in up to 97% of ALS patients and up to 50% of FTD patients. In December 2021, a paper reported that this TDP-43 proteinopathy is in turn caused by defective cyclophilin A, which regulates TARDBP gene expression. Other genes known to cause FTD-ALS include CHCHD10, SQSTM1, and TBK1.
Environmental factors
Where no family history of the disease is present – around 90% of cases – no cause is known. Possible associations for which evidence is inconclusive include military service and smoking. Although studies on military history and ALS frequency are inconsistent, there is weak evidence for a positive correlation. Various proposed factors include exposure to environmental toxins (inferred from geographical deployment studies), as well as alcohol and tobacco use during military service.
A 2016 review of 16 meta-analyses concluded that there was convincing evidence for an association with chronic occupational exposure to lead; suggestive evidence for farming, exposure to heavy metals other than lead, beta-carotene intake, and head injury; and weak evidence for omega-3 fatty acid intake, exposure to extremely low frequency electromagnetic fields, pesticides, and serum uric acid.
In a 2017 study by the United States Centers for Disease Control and Prevention analyzing U.S. deaths from 1985 to 2011, occupations correlated with ALS deaths were white collar, such as in management, financial, architectural, computing, legal, and education jobs. Other potential risk factors remain unconfirmed, including chemical exposure, electromagnetic field exposure, occupation, physical trauma, and electric shock. There is a tentative association with exposure to various pesticides, including the organochlorine insecticides aldrin, dieldrin, DDT, and toxaphene.
Head injury
A 2015 review found that moderate to severe traumatic brain injury is a risk factor for ALS, but whether mild traumatic brain injury increases rates was unclear. A 2017 meta-analysis found an association between head injuries and ALS; however, this association disappeared when the authors considered the possibility of reverse causation, which is the idea that head injuries are an early symptom of undiagnosed ALS, rather than the cause of ALS.
Physical activity
A number of reviews prior to 2021 found no relationship between the amount of physical activity and the risk of developing ALS. A 2009 review found that the evidence for physical activity as a risk factor for ALS was limited, conflicting, and of insufficient quality to come to a firm conclusion.[69] A 2014 review concluded that physical activity in general is not a risk factor for ALS, that soccer and American football are possibly associated with ALS, and that there was not enough evidence to say whether or not physically demanding occupations are associated with ALS. A 2016 review found the evidence inconclusive and noted that differences in study design make it difficult to compare studies, as they do not use the same measures of physical activity or the same diagnostic criteria for ALS. However, research published in 2021 suggested that there was a positive causal relationship between ALS and intense physical exercise in those with a risk genotype.
Sports
Both football and American football have been identified as risk factors for ALS in several studies, although this association is based on small numbers of ALS cases. A 2012 retrospective cohort study of 3,439 former NFL players found that their risk of dying from neurodegenerative causes was three times higher than the general US population, and their risk of dying from ALS or Alzheimer's disease was four times higher. However, this increased risk was calculated on the basis of two deaths from Alzheimer's disease and six deaths from ALS out of 334 deaths total in this cohort, meaning that this study does not definitively prove that playing American football is a risk factor for ALS. Some NFL players thought to have died from ALS may have actually had chronic traumatic encephalopathy (CTE), a neurodegenerative disorder associated with multiple head injuries that can present with symptoms that are very similar to ALS.
Football was identified as a possible risk factor for ALS in a retrospective cohort study of 24,000 Italian footballers who played between 1960 and 1996. There were 375 deaths in this group, including eight from ALS. Based on this information and the incidence of ALS, it was calculated that the soccer players were 11 times more likely to die from ALS than the general Italian population. However, this calculation has been criticized for relying on an inappropriately low number of expected cases of ALS in the cohort. When the lifetime risk of developing ALS was used to predict the number of expected cases, soccer players were no more likely to die of ALS than the general population.
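The "11 times more likely" figure is essentially a standardized mortality ratio: observed ALS deaths divided by the number expected in a comparable slice of the general population. The sketch below back-calculates the expected count from the ratio reported above, purely for illustration; it is not the study's own figure.

    # Sketch of a standardized mortality ratio (SMR) calculation.
    observed_als_deaths = 8       # ALS deaths among the footballers, from the study described above
    expected_als_deaths = 0.73    # assumed for illustration (back-calculated so that 8 / 0.73 is roughly 11)

    smr = observed_als_deaths / expected_als_deaths
    print(f"SMR = {smr:.1f}")     # about 11: roughly 11 times the expected ALS mortality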
Smoking
Smoking is possibly associated with ALS. A 2009 review concluded that smoking was an established risk factor for ALS.[78] A 2010 systematic review and meta-analysis concluded that there was not a strong association between smoking and ALS, but that smoking might be associated with a higher risk of ALS in women. A 2011 meta-analysis concluded that smoking increases the risk of ALS versus never smoking. Among smokers, the younger they started smoking, the more likely they were to get ALS; however, neither the number of years smoked nor the number of cigarettes smoked per day affected their risk of developing ALS.
1620) Mass media
Gist
Mass media means technology that is intended to reach a mass audience. It is the primary means of communication used to reach the vast majority of the general public. The most common platforms for mass media are newspapers, magazines, radio, television, and the Internet.
Summary
Mass media are modes (or, less commonly, a single mode) of mass communication whereby information, opinion, advocacy, propaganda, advertising, artwork, entertainment, and other forms of expression are conveyed to a very large audience. In this, the most general, sense of the term, mass media have included print, radio, television, film, video, audio recording, and the Internet—in particular, the World Wide Web and Internet-based social media. The term mass media is also used to refer collectively to types of public or private organizations that produce or disseminate particular forms of expression through such modes, including newspapers and wire services, periodicals, book publishers, libraries, radio and television networks, movie studios, and record companies. Notably, since the late 20th century the Internet as a mode of mass communication has come to provide alternative platforms for mass media organizations that were once restricted to earlier-established technologies. It is now common, for example, for newspapers, periodicals, and books to be published on the Web or through Web-based applications (indeed, some publishing companies have abandoned the print medium altogether) and for musical recordings, television programs, and films to be accessible on individual websites or through dedicated streaming services. Finally, in the United States another common referent of mass media is the group of mostly private corporations that publish or broadcast news and news commentary for a nationwide audience. Mass media in that sense have often been criticized, collectively and individually, for alleged liberal or conservative bias in their reporting on important political, economic, and social issues.
Details
Mass media refers to a diverse array of media that reach a large audience via mass communication.
Broadcast media transmit information electronically via media such as films, radio, recorded music, or television. Digital media comprise both Internet and mobile mass communication. Internet media comprise such services as email, social media sites, websites, and Internet-based radio and television. Many other mass media outlets have an additional presence on the web, by such means as linking to or running TV ads online, or distributing QR codes in outdoor or print media to direct mobile users to a website. In this way, they can use the easy accessibility and outreach capabilities the Internet affords, and thereby easily broadcast information throughout many different regions of the world simultaneously and cost-efficiently. Outdoor media transmit information via such media as AR advertising; billboards; blimps; flying billboards (signs in tow of airplanes); placards or kiosks placed inside and outside buses, commercial buildings, shops, sports stadiums, subway cars, or trains; signs; or skywriting. Print media transmit information via physical objects, such as books, comics, magazines, newspapers, or pamphlets. Event organising and public speaking can also be considered forms of mass media.
The organisations that control these technologies, such as movie studios, publishing companies, and radio and television stations, are also known as the mass media.
Issues with definition
In the late 20th century, mass media could be classified into eight mass media industries: books, the Internet, magazines, movies, newspapers, radio, recordings and television. The explosion of digital communication technology in the late 20th and early 21st centuries made prominent the question: what forms of media should be classified as "mass media"? For example, it is controversial whether to include mobile phones, computer games (such as MMORPGs) and video games in the definition. In the early 2000s, a classification called the "seven mass media" came into use. In order of introduction, they are:
* Print (books, pamphlets, newspapers, magazines, posters, etc.) from the late 15th century
* Recordings (gramophone records, magnetic tapes, cassettes, cartridges, CDs and DVDs) from the late 19th century
* Cinema from about 1900
* Radio from about 1910
* Television from about 1950
* Internet from about 1990
* Mobile phones from about 2000
Each mass medium has its own content types, creative artists, technicians and business models. For example, the Internet includes blogs, podcasts, web sites and various other technologies built atop the general distribution network. The sixth and seventh media, Internet and mobile phones, are often referred to collectively as digital media; and the fourth and fifth, radio and TV, as broadcast media. Some argue that video games have developed into a distinct mass form of media.
While a telephone is a two-way communication device, mass media communicates to a large group. In addition, the telephone has transformed into a cell phone which is equipped with Internet access. A question arises whether this makes cell phones a mass medium or simply a device used to access a mass medium (the Internet). There is currently a system by which marketers and advertisers are able to tap into satellites, and broadcast commercials and advertisements directly to cell phones, unsolicited by the phone's user. This transmission of mass advertising to millions of people is another form of mass communication.
Video games may also be evolving into a mass medium. Video games (for example, massively multiplayer online role-playing games (MMORPGs), such as RuneScape) provide a common gaming experience to millions of users across the globe and convey the same messages and ideologies to all their users. Users sometimes share the experience with one another by playing online. Excluding the Internet, however, it is questionable whether players of video games are sharing a common experience when they play the game individually. It is possible to discuss in great detail the events of a video game with a friend one has never played with, because the experience is identical for each player. The question, then, is whether this is a form of mass communication.
Characteristics
Five characteristics of mass communication have been identified by sociologist John Thompson of Cambridge University:
"Comprises both technical and institutional methods of production and distribution" – This is evident throughout the history of mass media, from print to the Internet, each suitable for commercial utility
Involves the "commodification of symbolic forms" – as the production of materials relies on its ability to manufacture and sell large quantities of the work; as radio stations rely on their time sold to advertisements, so too newspapers rely on their space for the same reasons
"Separate contexts between the production and reception of information"
Its "reach to those 'far removed' in time and space, in comparison to the producers"
"Information distribution" – a "one to many" form of communication, whereby products are mass-produced and disseminated to a great quantity of audiences
Mass vs. mainstream and alternative
The term "mass media" is sometimes erroneously used as a synonym for "mainstream media". Mainstream media are distinguished from alternative media by their content and point of view. Alternative media are also "mass media" outlets in the sense that they use technology capable of reaching many people, even if the audience is often smaller than the mainstream.
In common usage, the term "mass" denotes not that a given number of individuals receives the products, but rather that the products are available in principle to a plurality of recipients.
Forms of mass media:
Broadcast
The sequencing of content in a broadcast is called a schedule. With all technological endeavours a number of technical terms and slang have developed.
Radio and television programs are distributed over frequency bands which are highly regulated in the United States. Such regulation includes determination of the width of the bands, range, licensing, types of receivers and transmitters used, and acceptable content.
Cable television programs are often broadcast simultaneously with radio and television programs, but have a more limited audience. By coding signals and requiring a cable converter box at individual recipients' locations, cable also enables subscription-based channels and pay-per-view services.
A broadcasting organisation may broadcast several programs simultaneously, through several channels (frequencies), for example BBC One and Two. On the other hand, two or more organisations may share a channel and each use it during a fixed part of the day, such as the Cartoon Network/Adult Swim. Digital radio and digital television may also transmit multiplexed programming, with several channels compressed into one ensemble.
When broadcasting is done via the Internet the term webcasting is often used. In 2004, a new phenomenon occurred when a number of technologies combined to produce podcasting. Podcasting is an asynchronous broadcast/narrowcast medium. Adam Curry and his associates, the Podshow, are principal proponents of podcasting.
Film
The term 'film' encompasses motion pictures as individual projects, as well as the field in general. The name comes from the photographic film (also called film stock), historically the primary medium for recording and displaying motion pictures. Many other terms for film exist, such as motion pictures (or just pictures and "picture"), the silver screen, photoplays, the cinema, picture shows, flicks and, most commonly, movies.
Films are produced by recording people and objects with cameras, or by creating them using animation techniques or special effects. Films comprise a series of individual frames, but when these images are shown in rapid succession, an illusion of motion is created. Flickering between frames is not seen because of an effect known as persistence of vision, whereby the eye retains a visual image for a fraction of a second after the source has been removed. Also of relevance is what causes the perception of motion: a psychological effect identified as beta movement.
Film has emerged as an important art form. Films entertain, educate, enlighten and inspire audiences. Any film can become a worldwide attraction, especially with the addition of dubbing or subtitles that translate the original language.
Video games
A video game is a computer-controlled game in which a video display, such as a monitor or television set, is the primary feedback device. The term "computer game" also includes games which display only text or which use other methods, such as sound or vibration, as their primary feedback device. There always must also be some sort of input device, usually in the form of button/joystick combinations (on arcade games), a keyboard and mouse/trackball combination (computer games), a controller (console games), or a combination of any of the above. Also, more esoteric devices have been used for input, e.g., the player's motion. Usually there are rules and goals, but in more open-ended games the player may be free to do whatever they like within the confines of the virtual universe.
In common usage, an "arcade game" refers to a game designed to be played in an establishment in which patrons pay to play on a per-use basis. A "computer game" or "PC game" refers to a game that is played on a personal computer. A "console game" refers to one that is played on a device specifically designed for playing such games while interfacing with a standard television set. A "video game" (or "videogame") has evolved into a catchall phrase that encompasses the aforementioned along with any game made for any other device, including, but not limited to, advanced calculators, mobile phones, PDAs, etc.
Audio recording and reproduction
Sound recording and reproduction is the electrical or mechanical re-creation or amplification of sound, often as music. This involves the use of audio equipment such as microphones, recording devices and loudspeakers. From early beginnings with the invention of the phonograph using purely mechanical techniques, the field has advanced with the invention of electrical recording, the mass production of the 78 record, the magnetic wire recorder followed by the tape recorder, and the vinyl LP record. The invention of the compact cassette in the 1960s, followed by Sony's Walkman, gave a major boost to the mass distribution of music recordings, and the invention of digital recording and the compact disc in 1983 brought massive improvements in ruggedness and quality. The most recent developments have been in digital audio players.
An album is a collection of related audio recordings, released together to the public, usually commercially.
The term record album originated from the fact that 78 RPM phonograph disc records were kept together in a book resembling a photo album. The first collection of records to be called an "album" was Tchaikovsky's Nutcracker Suite, released in April 1909 as a four-disc set by Odeon Records. It retailed for 16 shillings (about £15 in modern currency).
A music video (also promo) is a short film or video that accompanies a complete piece of music, most commonly a song. Modern music videos were primarily made and used as a marketing device intended to promote the sale of music recordings. Although the origins of music videos go back much further, they came into their own in the 1980s, when Music Television's format was based on them. In the 1980s, the term "rock video" was often used to describe this form of entertainment, although the term has fallen into disuse.
Music videos can accommodate all styles of filmmaking, including animation, live-action films, documentaries, and non-narrative, abstract film.
Internet
The Internet (also known simply as "the Net" or less precisely as "the Web") is a more interactive medium of mass media, and can be briefly described as "a network of networks". Specifically, it is the worldwide, publicly accessible network of interconnected computer networks that transmit data by packet switching using the standard Internet Protocol (IP). It consists of millions of smaller domestic, academic, business and governmental networks, which together carry various information and services, such as email, online chat, file transfer, and the interlinked web pages and other documents of the World Wide Web.
Contrary to some common usage, the Internet and the World Wide Web are not synonymous: the Internet is the system of interconnected computer networks, linked by copper wires, fibre-optic cables, wireless connections etc.; the Web is the contents, or the interconnected documents, linked by hyperlinks and URLs. The World Wide Web is accessible through the Internet, along with many other services including e-mail, file sharing and others described below.
Toward the end of the 20th century, the advent of the World Wide Web marked the first era in which most individuals could have a means of exposure on a scale comparable to that of mass media. Anyone with a web site has the potential to address a global audience, although serving high levels of web traffic is still relatively expensive. It is possible that the rise of peer-to-peer technologies may have begun the process of making the cost of bandwidth manageable. Although a vast amount of information, imagery, and commentary (i.e. "content") has been made available, it is often difficult to determine the authenticity and reliability of information contained in web pages (in many cases, self-published). The invention of the Internet has also allowed breaking news stories to reach around the globe within minutes. This rapid growth of instantaneous, decentralised communication is often deemed likely to change mass media and its relationship to society.
"Cross-media" means the idea of distributing the same message through different media channels. A similar idea is expressed in the news industry as "convergence". Many authors understand cross-media publishing to be the ability to publish in both print and on the web without manual conversion effort. An increasing number of wireless devices with mutually incompatible data and screen formats make it even more difficult to achieve the objective "create once, publish many".
The Internet is quickly becoming the center of mass media. Everything is becoming accessible via the internet. Rather than picking up a newspaper, or watching the 10 o'clock news, people can log onto the internet to get the news they want, when they want it. For example, many workers listen to the radio through the Internet while sitting at their desk.
Even the education system relies on the Internet. Teachers can contact the entire class by sending one e-mail. They may have web pages on which students can get another copy of the class outline or assignments. Some classes have class blogs in which students are required to post weekly, with students graded on their contributions.
Blogs (web logs)
Blogging, too, has become a pervasive form of media. A blog is a website, usually maintained by an individual, with regular entries of commentary, descriptions of events, or interactive media such as images or video. Entries are commonly displayed in reverse chronological order, with most recent posts shown on top. Many blogs provide commentary or news on a particular subject; others function as more personal online diaries. A typical blog combines text, images and other graphics, and links to other blogs, web pages, and related media. The ability for readers to leave comments in an interactive format is an important part of many blogs. Most blogs are primarily textual, although some focus on art (artlogs), photographs (photoblogs), sketches (sketchblogs), videos (vlogs), music (MP3 blogs) or audio (podcasts); such blogs form part of a wider network of social media. Microblogging is another type of blogging which consists of blogs with very short posts.
RSS feeds
RSS is a format for syndicating news and the content of news-like sites, including major news sites like Wired, news-oriented community sites like Slashdot, and personal blogs. It is a family of Web feed formats used to publish frequently updated content such as blog entries, news headlines, and podcasts. An RSS document (which is called a "feed" or "web feed" or "channel") contains either a summary of content from an associated web site or the full text. RSS makes it possible for people to keep up with web sites in an automated manner that can be piped into special programs or filtered displays.
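To make the channel/item structure concrete, here is a minimal sketch in Python of what an RSS 2.0 feed looks like and how a program might read it with the standard library's xml.etree.ElementTree; the feed contents and URLs are made up purely for illustration.

import xml.etree.ElementTree as ET

FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example News</title>
    <link>https://example.org/</link>
    <description>Recent headlines</description>
    <item>
      <title>First headline</title>
      <link>https://example.org/item1</link>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(FEED)                 # parse the feed document
channel = root.find("channel")
print(channel.findtext("title"))           # the feed's own title
for item in channel.iter("item"):          # each syndicated entry
    print(item.findtext("title"), item.findtext("link"))

A feed reader simply fetches such a document on a schedule and compares the items it finds with those it has already shown, which is what makes the "automated" keeping-up described above possible.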
Podcast
A podcast is a series of digital-media files which are distributed over the Internet using syndication feeds for playback on portable media players and computers. The term podcast, like broadcast, can refer either to the series of content itself or to the method by which it is syndicated; the latter is also called podcasting. The host or author of a podcast is often called a podcaster.
Mobile
Mobile phones were introduced in Japan in 1979 but became a mass media only in 1998 when the first downloadable ringing tones were introduced in Finland. Soon most forms of media content were introduced on mobile phones, tablets and other portable devices, and the total value of media consumed on mobile came to vastly exceed that of internet content, worth over $31 billion in 2007 (source: Informa). The mobile media content includes over $8 billion worth of mobile music (ringing tones, ringback tones, truetones, MP3 files, karaoke, music videos, music streaming services etc.); over $5 billion worth of mobile gaming; and various news, entertainment and advertising services. In Japan mobile phone books are so popular that five of the ten best-selling printed books were originally released as mobile phone books.
Similar to the internet, mobile is also an interactive medium, but has far wider reach, with 3.3 billion mobile phone users at the end of 2007 compared with 1.3 billion internet users (source: ITU). Like email on the internet, the top application on mobile is also a personal messaging service, but SMS text messaging is used by over 2.4 billion people. Practically all internet services and applications exist or have similar cousins on mobile, from search to multiplayer games to virtual worlds to blogs. Mobile has several unique benefits which many mobile media pundits claim make mobile a more powerful media than either TV or the internet, starting with mobile being permanently carried and always connected. Mobile has the best audience accuracy and is the only mass media with a built-in payment channel available to every user without any credit cards or PayPal accounts or even an age limit. Mobile is often called the 7th Mass Medium and either the fourth screen (if counting cinema, TV and PC screens) or the third screen (counting only TV and PC).
Print media
A magazine is a periodical publication containing a variety of articles, generally financed by advertising or purchase by readers.
Magazines are typically published weekly, biweekly, monthly, bimonthly or quarterly, with a date on the cover that is in advance of the date it is actually published. They are often printed in color on coated paper, and are bound with a soft cover.
Magazines fall into two broad categories: consumer magazines and business magazines. In practice, magazines are a subset of periodicals, distinct from those periodicals produced by scientific, artistic, academic or special interest publishers which are subscription-only, more expensive, narrowly limited in circulation, and often have little or no advertising.
Magazines can be classified as:
* General interest magazines (e.g. Frontline, India Today, The Week, The Sunday Times etc.)
* Special interest magazines (women's, sports, business, scuba diving, etc.)
Newspaper
A newspaper is a publication containing news and information and advertising, usually printed on low-cost paper called newsprint. It may be general or special interest, most often published daily or weekly. The most important function of newspapers is to inform the public of significant events. Local newspapers inform local communities and include advertisements from local businesses and services, while national newspapers tend to focus on a theme, as exemplified by The Wall Street Journal, which offers news on finance and business-related topics. The first printed newspaper was published in 1605, and the form has thrived even in the face of competition from technologies such as radio and television. Recent developments on the Internet are posing major threats to its business model, however. Paid circulation is declining in most countries, and advertising revenue, which makes up the bulk of a newspaper's income, is shifting from print to online; some commentators, nevertheless, point out that historically new media such as radio and television did not entirely supplant existing media.
The internet has challenged the press as an alternative source of information and opinion but has also provided a new platform for newspaper organisations to reach new audiences. According to the World Trends Report, between 2012 and 2016, print newspaper circulation continued to fall in almost all regions, with the exception of Asia and the Pacific, where the dramatic increase in sales in a few select countries has offset falls in historically strong Asian markets such as Japan and the Republic of Korea. Most notably, between 2012 and 2016, India's print circulation grew by 89 per cent.
Outdoor media
Outdoor media is a form of mass media which comprises billboards, signs, placards placed inside and outside commercial buildings/objects like shops/buses, flying billboards (signs in tow of airplanes), blimps, skywriting, and AR advertising. Many commercial advertisers use this form of mass media when advertising in sports stadiums. Tobacco and alcohol manufacturers used billboards and other outdoor media extensively. However, in 1998, the Master Settlement Agreement between US states and the major tobacco companies prohibited the billboard advertising of cigarettes. In a 1994 Chicago-based study, Diana Hackbarth and her colleagues revealed how tobacco- and alcohol-based billboards were concentrated in poor neighbourhoods. In other urban centers, alcohol and tobacco billboards were much more concentrated in African-American neighbourhoods than in white neighbourhoods.
1621) Print media
Gist
Print media generally refers to newspapers. Newspapers collect, edit and print news reports and articles. Some newspapers are also published in the evening; these are called eveningers.
Summary:
Magazine
A magazine is a periodical publication containing a variety of articles, generally financed by advertising or purchase by readers.
Magazines are typically published weekly, biweekly, monthly, bimonthly or quarterly, with a date on the cover that is in advance of the date it is actually published. They are often printed in color on coated paper, and are bound with a soft cover.
Magazines fall into two broad categories: consumer magazines and business magazines. In practice, magazines are a subset of periodicals, distinct from those periodicals produced by scientific, artistic, academic or special interest publishers which are subscription-only, more expensive, narrowly limited in circulation, and often have little or no advertising.
Magazines can be classified as:
* General interest magazines (e.g. Frontline, India Today, The Week, The Sunday Times etc.)
* Special interest magazines (women's, sports, business, scuba diving, etc.)
Newspaper
A newspaper is a publication containing news and information and advertising, usually printed on low-cost paper called newsprint. It may be general or special interest, most often published daily or weekly. The most important function of newspapers is to inform the public of significant events. Local newspapers inform local communities and include advertisements from local businesses and services, while national newspapers tend to focus on a theme, as exemplified by The Wall Street Journal, which offers news on finance and business-related topics. The first printed newspaper was published in 1605, and the form has thrived even in the face of competition from technologies such as radio and television. Recent developments on the Internet are posing major threats to its business model, however. Paid circulation is declining in most countries, and advertising revenue, which makes up the bulk of a newspaper's income, is shifting from print to online; some commentators, nevertheless, point out that historically new media such as radio and television did not entirely supplant existing media.
The internet has challenged the press as an alternative source of information and opinion but has also provided a new platform for newspaper organisations to reach new audiences. According to the World Trends Report, between 2012 and 2016, print newspaper circulation continued to fall in almost all regions, with the exception of Asia and the Pacific, where the dramatic increase in sales in a few select countries has offset falls in historically strong Asian markets such as Japan and the Republic of Korea. Most notably, between 2012 and 2016, India's print circulation grew by 89 per cent.
Details
A printing press is a machine by which text and images are transferred from movable type to paper or other media by means of ink. Movable type and paper were invented in China, and the oldest known extant book printed from movable type was created in Korea in the 14th century. Printing first became mechanized in Europe during the 15th century.
The earliest mention of a mechanized printing press in Europe appears in a lawsuit in Strasbourg in 1439; it reveals construction of a press for Johannes Gutenberg and his associates. Gutenberg’s press and others of its era in Europe owed much to the medieval paper press, which was in turn modeled after the ancient wine-and-olive press of the Mediterranean area. A long handle was used to turn a heavy wooden screw, exerting downward pressure against the paper, which was laid over the type mounted on a wooden platen. Gutenberg used his press to print an edition of the Bible in 1455; this Bible is the first complete extant book in the West, and it is one of the earliest books printed from movable type. (Jikji, a book of the teachings of Buddhist priests, was printed by hand from movable type in Korea in 1377.) In its essentials, the wooden press used by Gutenberg reigned supreme for more than 300 years, with a hardly varying rate of 250 sheets per hour printed on one side.
Metal presses began to appear late in the 18th century, at about which time the advantages of the cylinder were first perceived and the application of steam power was considered. By the mid-19th century Richard M. Hoe of New York had perfected a power-driven cylinder press in which a large central cylinder carrying the type successively printed on the paper of four impression cylinders, producing 8,000 sheets an hour in 2,000 revolutions. The rotary press came to dominate the high-speed newspaper field, but the flatbed press, having a flat bed to hold the type and either a reciprocating platen or a cylinder to hold the paper, continued to be used for job printing.
A significant innovation of the late 19th century was the offset press, in which the printing (blanket) cylinder runs continuously in one direction while paper is impressed against it by an impression cylinder. Offset printing is especially valuable for colour printing, because an offset press can print multiple colours in one run. Offset lithography—used for books, newspapers, magazines, business forms, and direct mail—continued to be the most widely used printing method at the start of the 21st century, though it was challenged by ink-jet, laser, and other printing methods.
Apart from the introduction of electric power, advances in press design between 1900 and the 1950s consisted of a great number of relatively minor mechanical modifications designed to improve the speed of the operation. Among these changes were better paper feed, improvements in plates and paper, automatic paper reels, and photoelectric control of colour register. The introduction of computers in the 1950s revolutionized printing composition, with more and more steps in the print process being replaced by digital data. At the end of the 20th century a new electronic printing method, print-on-demand, began to compete with offset printing, though it—and printing generally—came under increasing pressure in developed countries as publishers, newspapers, and others turned to online means of distributing what they had previously printed on paper.
Additional Information
Difference between Print Media and Electronic Media:
1. Print Media:
Print media is a form of mass media in which, as the name suggests, news or information is shared through printed publications. Printed media is the oldest means of sharing information/news. In printed media, the news or information is published in hard copy and then released, which is more reader-friendly. The main types of print media include newspapers, magazines, and books. In print media, live shows, live discussions, and live reporting are not possible; it is based on an interval update method.
Advantages:
Tangibility: Print media offers a physical copy of the content, which readers can hold and read at their convenience.
Credibility: Print media, such as newspapers and magazines, are considered to be more credible than electronic media due to the rigorous fact-checking process they undergo.
Targeted audience: Print media can be targeted towards specific demographics, making it easier for businesses to reach their intended audience.
Longer shelf-life: Print media has a longer shelf life than electronic media, as it can be stored for a long time and can be re-read multiple times.
Disadvantages:
Limited reach: Print media has a limited reach, as it is distributed only to specific locations and to those who purchase or subscribe to the publication.
Cost: Producing print media can be expensive, as it involves the cost of printing, distribution, and storage.
Time constraints: Print media has a longer production cycle, as it takes time to write, edit, print, and distribute the content.
2. Electronic Media:
Electronic media is a form of mass media in which, as the name suggests, news or information is shared through an electronic medium. Electronic media is the more advanced means of sharing information/news. In electronic media, the news or information is uploaded or broadcast and can then be viewed through electronic devices, which is more viewer-friendly. The main types of electronic media include television news, news through mobile apps, etc. In electronic media, live shows, live discussions, and live reporting are possible, as it is based on an immediate update method.
Advantages:
Wider reach: Electronic media has a wider reach than print media, as it can be accessed anywhere in the world with an internet connection.
Interactivity: Electronic media allows for greater interactivity with the audience, such as through comments, social media shares, and live streams.
Cost-effective: Electronic media is often cheaper to produce and distribute than print media.
Real-time updates: Electronic media can be updated in real-time, making it ideal for breaking news and live events.
Disadvantages:
Short shelf-life: Electronic media has a shorter shelf life than print media, as content can quickly become outdated or buried in a sea of other digital content.
Credibility concerns: Due to the ease of producing and distributing electronic media, there are concerns about the credibility of the information being presented.
Audience fragmentation: With so many electronic media outlets available, it can be difficult for businesses to target their intended audience effectively.
Distraction: Electronic media can be a distraction, as users may be tempted to switch between different websites, apps, and social media platforms instead of focusing on one piece of content.
Similarities:
Both provide a means of communicating information to a large audience.
Both can be used for marketing and advertising purposes.
Both offer various formats for presenting information, such as text, images, and videos.
Both require the creation of content by writers, editors, and other content creators.
Both can be accessed by individuals at their convenience.
Both have the potential to impact public opinion and shape social discourse.
Both can be used for entertainment and educational purposes.
Both require the use of technology, whether it’s printing presses or digital devices.
Both can be used to create and disseminate news and current events.
Both can be monetized through subscriptions, advertising, or other revenue streams.
1622) Firewall
Gist
A Firewall is a network security device that monitors and filters incoming and outgoing network traffic based on an organization's previously established security policies. At its most basic, a firewall is essentially the barrier that sits between a private internal network and the public Internet.
Details:
Firewall (computing)
In computing, a firewall is a network security system that monitors and controls incoming and outgoing network traffic based on predetermined security rules. A firewall typically establishes a barrier between a trusted network and an untrusted network, such as the Internet.
History
The term firewall originally referred to a wall intended to confine a fire within a line of adjacent buildings. Later uses refer to similar structures, such as the metal sheet separating the engine compartment of a vehicle or aircraft from the passenger compartment. The term was applied in the late 1980s to network technology that emerged when the Internet was fairly new in terms of its global use and connectivity. The predecessors to firewalls for network security were routers used in the late 1980s. Because they already segregated networks, routers could apply filtering to packets crossing them.
Before it was used in real-life computing, the term appeared in the 1983 computer-hacking movie WarGames, and possibly inspired its later use.
Types
Firewalls are categorized as network-based or host-based systems. Network-based firewalls are positioned between two or more networks, typically between the local area network (LAN) and wide area network (WAN). They are either a software appliance running on general-purpose hardware, a hardware appliance running on special-purpose hardware, or a virtual appliance running on a virtual host controlled by a hypervisor. Firewall appliances may also offer non-firewall functionality, such as DHCP or VPN services. Host-based firewalls are deployed directly on the host itself to control network traffic or other computing resources. This can be a daemon or service as a part of the operating system or an agent application for protection.
Packet filter
The first reported type of network firewall is called a packet filter, which inspects packets transferred between computers. The firewall maintains an access-control list which dictates what packets will be looked at and what action should be applied, if any, with the default action set to silent discard. Three basic actions regarding the packet consist of a silent discard, discard with Internet Control Message Protocol or TCP reset response to the sender, and forward to the next hop. Packets may be filtered by source and destination IP addresses, protocol, and source and destination ports. The bulk of Internet communication in the 20th and early 21st centuries used either Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) in conjunction with well-known ports, enabling firewalls of that era to distinguish between specific types of traffic such as web browsing, remote printing, email transmission, and file transfers.
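As a rough illustration of that idea only, the following Python sketch evaluates packets against a small access-control list, with a silent discard as the default action; the rule table, field names and addresses are hypothetical and do not reflect any particular firewall's syntax.

# Illustrative access-control list: (src_ip, dst_ip, protocol, dst_port, action).
RULES = [
    ("any", "192.0.2.10", "tcp", 80, "forward"),  # allow web traffic to one host
    ("any", "any",        "tcp", 23, "reject"),   # refuse telnet with a reset
]

def filter_packet(packet):
    """Return the action for a packet; unmatched packets are silently dropped."""
    for src, dst, proto, port, action in RULES:
        if ((src == "any" or src == packet["src_ip"]) and
                (dst == "any" or dst == packet["dst_ip"]) and
                proto == packet["protocol"] and
                port == packet["dst_port"]):
            return action
    return "drop"  # the default action: silent discard

print(filter_packet({"src_ip": "203.0.113.5", "dst_ip": "192.0.2.10",
                     "protocol": "tcp", "dst_port": 80}))  # -> forward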
The first paper published on firewall technology was in 1987, when engineers from Digital Equipment Corporation (DEC) developed filter systems known as packet filter firewalls. At AT&T Bell Labs, Bill Cheswick and Steve Bellovin continued their research in packet filtering and developed a working model for their own company based on their original first-generation architecture. In 1992, Steven McCanne and Van Jacobson released a paper on the BSD Packet Filter (BPF) while at Lawrence Berkeley Laboratory.
Connection tracking
From 1989 to 1990, three colleagues from AT&T Bell Laboratories, Dave Presotto, Janardan Sharma, and Kshitij Nigam, developed the second generation of firewalls, calling them circuit-level gateways.
Second-generation firewalls perform the work of their first-generation predecessors but also maintain knowledge of specific conversations between endpoints by remembering which port number the two IP addresses are using at layer 4 (transport layer) of the OSI model for their conversation, allowing examination of the overall exchange between the nodes.
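To make the idea concrete, here is a minimal Python sketch of such connection tracking, assuming each conversation is identified by its layer-4 addresses and ports; the data structures and field names are illustrative only, not any vendor's implementation.

# Known conversations, keyed by their layer-4 5-tuple (purely illustrative).
connections = set()

def handle(packet, outbound):
    """Forward outbound packets and remember them; only forward inbound
    packets that belong to a conversation already seen going out."""
    key = (packet["src_ip"], packet["src_port"],
           packet["dst_ip"], packet["dst_port"], packet["protocol"])
    reply_key = (packet["dst_ip"], packet["dst_port"],
                 packet["src_ip"], packet["src_port"], packet["protocol"])
    if outbound:
        connections.add(key)
        return "forward"
    return "forward" if reply_key in connections else "drop"

The point of the state table is that an unsolicited inbound packet matches no remembered conversation and is dropped, even if a plain packet filter would have let it through.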
Application layer
Marcus Ranum, Wei Xu, and Peter Churchyard released an application firewall known as Firewall Toolkit (FWTK) in October 1993. This became the basis for Gauntlet firewall at Trusted Information Systems.
The key benefit of application layer filtering is that it can understand certain applications and protocols such as File Transfer Protocol (FTP), Domain Name System (DNS), or Hypertext Transfer Protocol (HTTP). This allows it to identify unwanted applications or services using a non-standard port, or detect if an allowed protocol is being abused. It can also provide unified security management including enforced encrypted DNS and virtual private networking.
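The following Python sketch hints at how application-layer inspection might recognise a protocol from its payload and flag an allowed protocol running on a non-standard port; the byte patterns and port numbers are simplified assumptions, not how any specific product works.

HTTP_METHODS = (b"GET ", b"POST ", b"HEAD ", b"PUT ", b"DELETE ")

def classify(payload: bytes) -> str:
    """Guess the application protocol from the first bytes of the payload."""
    if payload.startswith(HTTP_METHODS):
        return "http"
    return "unknown"

def inspect(dst_port, payload):
    # Flag an otherwise allowed protocol that appears on a non-standard port.
    if classify(payload) == "http" and dst_port not in (80, 8080):
        return "alert: HTTP on a non-standard port"
    return "ok"

print(inspect(4444, b"GET /index.html HTTP/1.1\r\n"))  # -> alert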
As of 2012, the next-generation firewall provides a wider range of inspection at the application layer, extending deep packet inspection functionality to include, but not limited to:
* Web filtering
* Intrusion prevention systems
* User identity management
* Web application firewall
Additional Information
A firewall is a network security device that monitors incoming and outgoing network traffic and decides whether to allow or block specific traffic based on a defined set of security rules.
Firewalls have been a first line of defense in network security for over 25 years. They establish a barrier between secured and controlled internal networks that can be trusted and untrusted outside networks, such as the Internet.
A firewall can be hardware, software, or both.
Endpoint specific
Endpoint-based application firewalls function by determining whether a process should accept any given connection. Application firewalls filter connections by examining the process ID of data packets against a rule set for the local process involved in the data transmission. Application firewalls accomplish their function by hooking into socket calls to filter the connections between the application layer and the lower layers. Application firewalls that hook into socket calls are also referred to as socket filters.
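A toy sketch of that per-process decision, assuming the firewall already knows which local process owns a connection attempt; the process names and rule table are hypothetical.

# Hypothetical per-process rule table for a host-based application firewall.
ALLOWED = {
    "web_browser": {80, 443},   # process name -> destination ports it may use
    "mail_client": {993, 587},
}

def allow_connection(process_name, dst_port):
    """Accept the connection only if the owning process is listed for that port."""
    return dst_port in ALLOWED.get(process_name, set())

print(allow_connection("web_browser", 443))    # True
print(allow_connection("unknown.exe", 4444))   # False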
Configuration
Setting up a firewall is a complex and error-prone task. A network may face security issues due to configuration errors.
Firewall policy configuration is based on specific network type (e.g., public or private), and can be set up using firewall rules that either block or allow access to prevent potential attacks from hackers or malware.
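As an illustration of such profile-based rules, the following Python sketch models a stricter default on public networks than on private ones; the profile names, port numbers, and actions are invented for the example.

# Illustrative per-profile policies: stricter defaults on public networks.
POLICIES = {
    "public":  {"default": "drop",    "allow_ports": {80, 443}},
    "private": {"default": "forward", "block_ports": {23}},
}

def decide(profile, dst_port):
    policy = POLICIES[profile]
    if dst_port in policy.get("allow_ports", set()):
        return "forward"
    if dst_port in policy.get("block_ports", set()):
        return "drop"
    return policy["default"]

print(decide("public", 22))    # drop: not explicitly allowed on a public network
print(decide("private", 22))   # forward: the private profile allows by default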
1623) Seven Wonders of the World
Summary
Seven Wonders of the World, preeminent architectural and sculptural achievements of the ancient Mediterranean and Middle East, as listed by various observers. The best known are those of the 2nd-century-BCE writer Antipater of Sidon and of a later but unknown observer of the 2nd century BCE who claimed to be the mathematician Philon of Byzantium. Included on the list in its eventual form were the following:
* Pyramids of Giza, the oldest of the wonders and the only one of the seven substantially in existence today.
* Hanging Gardens of Babylon, thought to be a series of landscaped terraces, exact location unknown, generally ascribed to Queen Sammu-ramat, King Nebuchadrezzar II, or the Assyrian king Sennacherib.
* Statue of Zeus at Olympia, a large ornate figure of the god on his throne, made about 430 BCE by Phidias of Athens.
* Temple of Artemis at Ephesus, a structure famous for its imposing size and for the works of art that adorned it.
* Mausoleum of Halicarnassus, monumental tomb of the Anatolian king Mausolus built by his widow Artemisia.
The seven wonders of Greco-Roman antiquity inspired the compilation of many other lists of attractions, both natural and human-made, by successive generations. Among such lists, all of which are limited to seven “wonders,” are the (architectural) wonders of the Middle Ages, the natural wonders of the world, the natural wonders of the United States, the (architectural) wonders of the modern world, and the wonders of American engineering.
Seven Wonders of the Ancient World
More than 2,000 years ago, travelers would write about incredible sights they had seen on their journeys. Over time, seven of those places made history as the "wonders of the ancient world." Check them out here.
THE PYRAMIDS OF GIZA
Built: About 2600 B.C. Egypt
Massive tombs of Egyptian pharaohs, the pyramids are the only ancient wonders still standing today. The tallest of the three is called the Great Pyramid.
HANGING GARDENS OF BABYLON
Built: Unknown, in Iraq
Legend has it that this garden paradise was planted on an artificial mountain and constructed to please the wife of King Nebuchadnezzar II, but many experts say it never really existed.
TEMPLE OF ARTEMIS
Built in the sixth century B.C. in Ephesus, Turkey
Built to honor Artemis, the Greek goddess of the hunt, this temple was said to have housed many works of art.
STATUE OF ZEUS
Built in the fifth century B.C. in Greece
This 40-foot (12-meter) statue depicted the king of the Greek gods.
MAUSOLEUM AT HALICARNASSUS
Built in the fourth century B.C. in Turkey
This elaborate tomb was built for King Mausolus and admired for its architectural beauty and splendor.
COLOSSUS OF RHODES
Built in the fourth century B.C. on the island of Rhodes in the Mediterranean Sea.
A 110-foot (33.5-meter) statue honored the Greek sun god Helios.
LIGHTHOUSE OF ALEXANDRIA
Built in the third century B.C. in Egypt.
Towering over the Mediterranean coast for more than 1,500 years, the world's first lighthouse used mirrors to reflect sunlight for miles out to sea.
Additional Information
The Seven Wonders of the Ancient World, also known as the Seven Wonders of the World or simply the Seven Wonders, is a list of seven notable structures present during classical antiquity. The first known list of seven wonders dates back to the 2nd–1st century BC.
While the entries have varied over the centuries, the seven traditional wonders are the Great Pyramid of Giza, the Colossus of Rhodes, the Lighthouse of Alexandria, the Mausoleum at Halicarnassus, the Temple of Artemis, the Statue of Zeus at Olympia, and the Hanging Gardens of Babylon. Using modern-day countries, two of the wonders were located in Greece, two in Turkey, two in Egypt, and one in Iraq. Of the seven wonders, only the Pyramid of Giza, which is also by far the oldest of the wonders, still remains standing, with the others being destroyed over the centuries. There is scholarly debate over the exact nature of the Hanging Gardens, and there is doubt as to whether they existed at all.
Background
Alexander the Great's conquest of much of the western world in the 4th century BC gave Hellenistic travellers access to the civilizations of the Egyptians, Persians, and Babylonians. Impressed and captivated by the landmarks and marvels of the various lands, these travellers began to list what they saw to remember them.
Instead of "wonders", the ancient Greeks spoke of "theamata", which means "sights", in other words "things to be seen". Later, the word for "wonder" was used. Hence, the list was meant to be the Ancient World's counterpart of a travel guidebook.
The first reference to a list of seven such monuments was given by Diodorus Siculus. The epigrammist Antipater of Sidon, who lived around or before 100 BC, gave a list of seven "wonders", including six of the present list (substituting the walls of Babylon for the Lighthouse of Alexandria):
I have gazed on the walls of impregnable Babylon along which chariots may race, and on the Zeus by the banks of the Alpheus, I have seen the hanging gardens, and the Colossus of the Helios, the great man-made mountains of the lofty pyramids, and the gigantic tomb of Mausolus; but when I saw the sacred house of Artemis that towers to the clouds, the others were placed in the shade, for the sun himself has never looked upon its equal outside Olympus.
— Greek Anthology IX.58
Another ancient writer, who, perhaps dubiously, identified himself as Philo of Byzantium, wrote a short account entitled The Seven Sights of the World. The surviving manuscript is incomplete, missing its last pages. Still, from the preamble text, we can conclude that the list of seven sights exactly matches Antipater's (the preamble mentions the location of Halicarnassus, but the pages describing the seventh wonder, presumably the Mausoleum, are missing).
Earlier and later lists by the historian Herodotus (c. 484 BC–c. 425 BC) and the poet Callimachus of Cyrene (c. 305–240 BC), housed at the Museum of Alexandria, survive only as references.
The Colossus of Rhodes was the last of the seven to be completed after 280 BC and the first to be destroyed by an earthquake in 226/225 BC. As such, it was already in ruins by the time the list was compiled, and all seven wonders existed simultaneously for less than 60 years.
Scope
The list covered only the Mediterranean and Middle Eastern regions, which then comprised the known world for the Greeks. The primary accounts from Hellenistic writers also heavily influenced the places included in the wonders list. Five of the seven entries are a celebration of Greek accomplishments in construction, with the exceptions being the Pyramids of Giza and the Hanging Gardens of Babylon.
1624) Nature
Nature, in the broadest sense, is the physical world or universe. "Nature" can refer to the phenomena of the physical world, and also to life in general. The study of nature is a large, if not the only, part of science. Although humans are part of nature, human activity is often understood as a separate category from other natural phenomena.
The word nature is borrowed from the Old French nature and is derived from the Latin word natura, or "essential qualities, innate disposition", and in ancient times, literally meant "birth". In ancient philosophy, natura is mostly used as the Latin translation of the Greek word physis, which originally related to the intrinsic characteristics that plants, animals, and other features of the world develop of their own accord. The concept of nature as a whole, the physical universe, is one of several expansions of the original notion; it began with certain core applications of the word φύσις by pre-Socratic philosophers (though this word had a dynamic dimension then, especially for Heraclitus), and has steadily gained currency ever since.
During the advent of the modern scientific method in the last several centuries, nature became the passive reality, organized and moved by divine laws. With the Industrial Revolution, nature increasingly became seen as the part of reality deprived of intentional intervention: it was hence considered as sacred by some traditions (Rousseau, American transcendentalism) or a mere decorum for divine providence or human history (Hegel, Marx). However, a vitalist vision of nature, closer to the pre-Socratic one, was reborn at the same time, especially after Charles Darwin.
Within the various uses of the word today, "nature" often refers to geology and wildlife. Nature can refer to the general realm of living plants and animals, and in some cases to the processes associated with inanimate objects—the way that particular types of things exist and change of their own accord, such as the weather and geology of the Earth. It is often taken to mean the "natural environment" or wilderness—wild animals, rocks, forest, and in general those things that have not been substantially altered by human intervention, or which persist despite human intervention. For example, manufactured objects and human interaction generally are not considered part of nature, unless qualified as, for example, "human nature" or "the whole of nature". This more traditional concept of natural things that can still be found today implies a distinction between the natural and the artificial, with the artificial being understood as that which has been brought into being by a human consciousness or a human mind. Depending on the particular context, the term "natural" might also be distinguished from the unnatural or the supernatural.
Earth
Earth is the only planet known to support life, and its natural features are the subject of many fields of scientific research. Within the Solar System, it is third closest to the Sun; it is the largest terrestrial planet and the fifth largest overall. Its most prominent climatic features are its two large polar regions, two relatively narrow temperate zones, and a wide equatorial tropical to subtropical region. Precipitation varies widely with location, from several metres of water per year to less than a millimetre. 71 percent of the Earth's surface is covered by salt-water oceans. The remainder consists of continents and islands, with most of the inhabited land in the Northern Hemisphere.
Earth has evolved through geological and biological processes that have left traces of the original conditions. The outer surface is divided into several gradually migrating tectonic plates. The interior remains active, with a thick layer of plastic mantle and an iron-filled core that generates a magnetic field. This iron core is composed of a solid inner phase, and a fluid outer phase. Convective motion in the core generates electric currents through dynamo action, and these, in turn, generate the geomagnetic field.
The atmospheric conditions have been significantly altered from the original conditions by the presence of life-forms, which create an ecological balance that stabilizes the surface conditions. Despite the wide regional variations in climate by latitude and other geographic factors, the long-term average global climate is quite stable during interglacial periods, and variations of a degree or two of average global temperature have historically had major effects on the ecological balance, and on the actual geography of the Earth.
Geology
Geology is the science and study of the solid and liquid matter that constitutes the Earth. The field of geology encompasses the study of the composition, structure, physical properties, dynamics, and history of Earth materials, and the processes by which they are formed, moved, and changed. The field is a major academic discipline, and is also important for mineral and hydrocarbon extraction, knowledge about and mitigation of natural hazards, some Geotechnical engineering fields, and understanding past climates and environments.
Geological evolution
The geology of an area evolves through time as rock units are deposited and inserted and deformational processes change their shapes and locations.
Rock units are first emplaced either by deposition onto the surface or by intrusion into the overlying rock. Deposition can occur when sediments settle onto the surface of the Earth and later lithify into sedimentary rock, or when volcanic material, such as volcanic ash or lava flows, blankets the surface. Igneous intrusions such as batholiths, laccoliths, dikes, and sills push upwards into the overlying rock, and crystallize as they intrude.
After the initial sequence of rocks has been deposited, the rock units can be deformed and/or metamorphosed. Deformation typically occurs as a result of horizontal shortening, horizontal extension, or side-to-side (strike-slip) motion. These structural regimes broadly relate to convergent boundaries, divergent boundaries, and transform boundaries, respectively, between tectonic plates.
Historical perspective
Earth is estimated to have formed 4.54 billion years ago from the solar nebula, along with the Sun and other planets. The Moon formed roughly 20 million years later. Initially molten, the outer layer of the Earth cooled, resulting in the solid crust. Outgassing and volcanic activity produced the primordial atmosphere. Condensing water vapor, most or all of which came from ice delivered by comets, produced the oceans and other water sources. The highly energetic chemistry is believed to have produced a self-replicating molecule around 4 billion years ago.
Continents formed, then broke up and reformed as the surface of Earth reshaped over hundreds of millions of years, occasionally combining to make a supercontinent. Roughly 750 million years ago, the earliest known supercontinent Rodinia, began to break apart. The continents later recombined to form Pannotia which broke apart about 540 million years ago, then finally Pangaea, which broke apart about 180 million years ago.
During the Neoproterozoic era, freezing temperatures covered much of the Earth in glaciers and ice sheets. This hypothesis has been termed the "Snowball Earth", and it is of particular interest as it precedes the Cambrian explosion in which multicellular life forms began to proliferate about 530–540 million years ago.
Since the Cambrian explosion there have been five distinctly identifiable mass extinctions. The last mass extinction occurred some 66 million years ago, when a meteorite collision probably triggered the extinction of the non-avian dinosaurs and other large reptiles, but spared small animals such as mammals. Over the past 66 million years, mammalian life diversified.
Several million years ago, a species of small African ape gained the ability to stand upright. The subsequent advent of human life, and the development of agriculture and further civilization allowed humans to affect the Earth more rapidly than any previous life form, affecting both the nature and quantity of other organisms as well as global climate. By comparison, the Great Oxygenation Event, produced by the proliferation of algae during the Siderian period, required about 300 million years to culminate.
The present era is classified as part of a mass extinction event, the Holocene extinction event, the fastest ever to have occurred. Some, such as E. O. Wilson of Harvard University, predict that human destruction of the biosphere could cause the extinction of one-half of all species in the next 100 years. The extent of the current extinction event is still being researched, debated and calculated by biologists.
Atmosphere, climate, and weather
The Earth's atmosphere is a key factor in sustaining the ecosystem. The thin layer of gases that envelops the Earth is held in place by gravity. Air is mostly nitrogen, oxygen, water vapor, with much smaller amounts of carbon dioxide, argon, etc. The atmospheric pressure declines steadily with altitude. The ozone layer plays an important role in depleting the amount of ultraviolet (UV) radiation that reaches the surface. As DNA is readily damaged by UV light, this serves to protect life at the surface. The atmosphere also retains heat during the night, thereby reducing the daily temperature extremes.
Terrestrial weather occurs almost exclusively in the lower part of the atmosphere, and serves as a convective system for redistributing heat. Ocean currents are another important factor in determining climate, particularly the major underwater thermohaline circulation which distributes heat energy from the equatorial oceans to the polar regions. These currents help to moderate the differences in temperature between winter and summer in the temperate zones. Also, without the redistributions of heat energy by the ocean currents and atmosphere, the tropics would be much hotter, and the polar regions much colder.
Weather can have both beneficial and harmful effects. Extremes in weather, such as tornadoes or hurricanes and cyclones, can expend large amounts of energy along their paths, and produce devastation. Surface vegetation has evolved a dependence on the seasonal variation of the weather, and sudden changes lasting only a few years can have a dramatic effect, both on the vegetation and on the animals which depend on its growth for their food.
Climate is a measure of the long-term trends in the weather. Various factors are known to influence the climate, including ocean currents, surface albedo, greenhouse gases, variations in the solar luminosity, and changes to the Earth's orbit. Based on historical records, the Earth is known to have undergone drastic climate changes in the past, including ice ages.
The climate of a region depends on a number of factors, especially latitude. A latitudinal band of the surface with similar climatic attributes forms a climate region. There are a number of such regions, ranging from the tropical climate at the equator to the polar climate in the northern and southern extremes. Weather is also influenced by the seasons, which result from the Earth's axis being tilted relative to its orbital plane. Thus, at any given time during the summer or winter, one part of the Earth is more directly exposed to the rays of the sun. This exposure alternates as the Earth revolves in its orbit. At any given time, regardless of season, the Northern and Southern Hemispheres experience opposite seasons.
Weather is a chaotic system that is readily modified by small changes to the environment, so accurate weather forecasting is limited to only a few days. Overall, two things are happening worldwide: (1) temperature is increasing on the average; and (2) regional climates have been undergoing noticeable changes.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
1625) Urology
Gist
Urology is a surgical specialty that deals with the treatment of conditions involving the male and female urinary tract and the male reproductive organs.
Summary
Urology (from Greek "urine" and "study of"), also known as genitourinary surgery, is the branch of medicine that focuses on surgical and medical diseases of the urinary-tract system and the reproductive organs. Organs under the domain of urology include the kidneys, adrenal glands, ureters, urinary bladder, urethra, and the male reproductive organs (testes, epididymis, vas deferens, seminal vesicles, prostate, and penis).
The urinary and reproductive tracts are closely linked, and disorders of one often affect the other. Thus a major spectrum of the conditions managed in urology exists under the domain of genitourinary disorders. Urology combines the management of medical (i.e., non-surgical) conditions, such as urinary-tract infections and benign prostatic hyperplasia, with the management of surgical conditions such as bladder or prostate cancer, kidney stones, congenital abnormalities, traumatic injury, and stress incontinence.
Urological techniques include minimally invasive robotic and laparoscopic surgery, laser-assisted surgeries, and other scope-guided procedures. Urologists receive training in open and minimally invasive surgical techniques, employing real-time ultrasound guidance, fiber-optic endoscopic equipment, and various lasers in the treatment of multiple benign and malignant conditions. Urology is closely related to (and urologists often collaborate with the practitioners of) oncology, nephrology, gynaecology, andrology, pediatric surgery, colorectal surgery, gastroenterology, and endocrinology.
Urology is one of the most competitive and highly sought surgical specialties for physicians, with new urologists comprising less than 1.5% of United States medical-school graduates each year.
Urologists are physicians who have specialized in the field after completing their general degree in medicine. Upon successful completion of a residency program, many urologists choose to undergo further advanced training in a subspecialty area of expertise through a fellowship lasting an additional 12 to 36 months. Subspecialties may include: urologic surgery, urologic oncology and urologic oncological surgery, endourology and endourologic surgery, urogynecology and urogynecologic surgery, reconstructive urologic surgery (a form of reconstructive surgery), minimally invasive urologic surgery, pediatric urology and pediatric urologic surgery (including adolescent urology, the treatment of premature or delayed puberty, and the treatment of congenital urological syndromes, malformations, and deformations), transplant urology (the field of transplant medicine and surgery concerned with transplantation of organs such as the kidneys, bladder tissue, ureters, and, recently, male reproductive organs), voiding dysfunction, paruresis, neurourology, and androurology and sexual medicine. Additionally, some urologists supplement their fellowships with a master's degree (2–3 years) or with a Ph.D. (4–6 years) in related topics to prepare them for academic as well as focused clinical employment.
Details
Urology is a medical specialty involving the diagnosis and treatment of diseases and disorders of the urinary tract and of the male reproductive organs. (The urinary tract consists of the kidneys, the bladder, the ureters, and the urethra.)
The modern specialty derives directly from the medieval lithologists, who were itinerant healers specializing in the surgical removal of bladder stones. In 1588 the Spanish surgeon Francisco Diaz wrote the first treatises on diseases of the bladder, kidneys, and urethra; he is generally regarded as the founder of modern urology. Most modern urologic procedures developed during the 19th century. At that time flexible catheters were developed for examining and draining the bladder, and in 1877 the German urologist Max Nitze developed the cystoscope. The cystoscope is a tubelike viewing instrument equipped with an electric light on its end. By introducing the instrument through the urethra, the urologist is able to view the interior of the bladder. The early decades of the 20th century witnessed the introduction of various X-ray techniques that have proved extremely useful in diagnosing disorders of the urinary tract. Urologic surgery was largely confined to the removal of bladder stones until the German surgeon Gustav Simon in 1869 demonstrated that human patients could survive the removal of one kidney, provided the remaining kidney was healthy.
Most of the modern urologist’s patients are male, for two reasons: (1) the urinary tract in females may be treated by gynecologists, and (2) much of the urologist’s work has to do with the prostate gland, which encircles the male urethra close to the juncture between the urethra and the bladder. The prostate gland is often the site of cancer; even more frequently, it enlarges in middle or old age and encroaches on the urethra, causing partial or complete obstruction of the flow of urine. The urologist treats prostate enlargement either by totally excising the prostate or by reaming a wider passageway through it. Urologists may also operate to remove stones that have formed in the urinary tract, and they may perform operations to remove cancers of the kidneys, bladder, and testicles.
What is Urology?
Urology is a part of health care that deals with diseases of the male and female urinary tract (kidneys, ureters, bladder and urethra). It also deals with the male reproductive organs (penis, testes, scrotum, prostate, etc.). Since health problems in these body parts can happen to everyone, urologic health is important.
Urology is known as a surgical specialty. Besides surgery, a urologist is a doctor with knowledge of internal medicine, pediatrics, gynecology and other parts of health care. This is because a urologist encounters a wide range of clinical problems. The scope of urology is broad, and the American Urological Association has named seven subspecialty areas:
* Pediatric Urology (children's urology)
* Urologic Oncology (urologic cancers)
* Renal (kidney) Transplant
* Male Infertility
* Calculi (urinary tract stones)
* Female Urology
* Neurourology (nervous system control of genitourinary organs)
Who takes care of urology patients?
If you have a problem with urologic health you might see a urologist.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
1626) Percentile
Gist
In statistics, a percentile describes how a score compares to the other scores in the same data set. While there is no universal definition of percentile, it is commonly expressed as the percentage of values in a data set that fall below a given value.
Summary
In statistics, a k-th percentile, also known as percentile score or centile, is a score below which a given percentage k of scores in its frequency distribution falls ("exclusive" definition) or a score at or below which a given percentage falls ("inclusive" definition). Percentiles are expressed in the same unit of measurement as the input scores, not in percent; for example, if the scores refer to human weight, the corresponding percentiles will be expressed in kilograms or pounds.
The 25th percentile is also known as the first quartile (Q1), the 50th percentile as the median or second quartile (Q2), and the 75th percentile as the third quartile (Q3). For example, the 50th percentile (median) is the score below (or at or below, depending on the definition) which 50% of the scores in the distribution are found.
The percentile score and the percentile rank are related terms. The percentile rank of a score is the percentage of scores in its distribution that are less than it, an exclusive definition, and one that can be expressed with a single, simple formula. Percentile scores and percentile ranks are often used in the reporting of test scores from norm-referenced tests, but, as just noted, they are not the same. For percentile rank, a score is given and a percentage is computed. Percentile ranks are exclusive. If the percentile rank for a specified score is 90%, then 90% of the scores were lower. In contrast, for percentiles a percentage is given and a corresponding score is determined, which can be either exclusive or inclusive. The score for a specified percentage (e.g., 90th) indicates a score below which (exclusive definition) or at or below which (inclusive definition) other scores in the distribution fall.
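As a concrete illustration (not part of the source text), the following Python sketch implements the exclusive percentile rank and a simple nearest-rank (inclusive) percentile score; the function names and the sample scores are invented for the example.

import math

def percentile_rank(scores, x):
    # Exclusive definition: percentage of scores strictly less than x.
    return 100.0 * sum(s < x for s in scores) / len(scores)

def percentile_score(scores, k):
    # Nearest-rank (inclusive) k-th percentile: the smallest score such that
    # at least k% of the scores are less than or equal to it.
    ordered = sorted(scores)
    rank = math.ceil(k / 100.0 * len(ordered))
    return ordered[max(rank, 1) - 1]

scores = [3, 5, 7, 8, 9, 11, 13, 15, 16, 20]
print(percentile_rank(scores, 16))   # 80.0 -> 80% of the scores are below 16
print(percentile_score(scores, 90))  # 16   -> the 90th percentile of the set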
Definitions
There is no standard definition of percentile; however, all definitions yield similar results when the number of observations is very large and the probability distribution is continuous.[4] In the limit, as the sample size approaches infinity, the 100p-th percentile (0 < p < 1) approaches the inverse of the cumulative distribution function (CDF) evaluated at p, a consequence of the Glivenko–Cantelli theorem. Some methods for calculating the percentiles are given below.
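A brief sketch of this convergence, added here for illustration (the exponential distribution and sample size are arbitrary choices, not from the text): the empirical 90th percentile of a large sample approaches the true quantile F⁻¹(0.90).

import math
import random

random.seed(0)

def nearest_rank_percentile(data, k):
    ordered = sorted(data)
    rank = math.ceil(k / 100.0 * len(ordered))
    return ordered[rank - 1]

# Exp(1) has CDF F(x) = 1 - exp(-x), so its inverse is F^-1(p) = -ln(1 - p).
sample = [random.expovariate(1.0) for _ in range(200_000)]
print(nearest_rank_percentile(sample, 90))   # empirical estimate, close to the value below
print(-math.log(1.0 - 0.90))                 # exact quantile, about 2.3026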
The normal distribution and percentiles
Under the three-sigma rule, observations within one standard deviation (σ) of the mean (μ) account for about 68.3% of the population; those within two standard deviations account for about 95.4%, and those within three standard deviations for about 99.7%.
The methods given in the definitions section (below) are approximations for use in small-sample statistics. In general terms, for very large populations following a normal distribution, percentiles may often be represented by reference to a normal curve plot. The normal distribution is plotted along an axis scaled to standard deviations, or sigma units. Mathematically, the normal distribution extends to negative infinity on the left and positive infinity on the right. Note, however, that only a very small proportion of individuals in a population will fall outside the −3σ to +3σ range. For example, with human heights very few people are above the +3σ height level.
Percentiles represent the area under the normal curve, increasing from left to right. Each standard deviation represents a fixed percentile. Thus, rounding to two decimal places, −3σ is the 0.13th percentile, −2σ the 2.28th percentile, −1σ the 15.87th percentile, 0σ the 50th percentile (both the mean and median of the distribution), +1σ the 84.13th percentile, +2σ the 97.72nd percentile, and +3σ the 99.87th percentile. This is related to the 68–95–99.7 rule or the three-sigma rule. Note that in theory the 0th percentile falls at negative infinity and the 100th percentile at positive infinity, although in many practical applications, such as test results, natural lower and/or upper limits are enforced.
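These figures can be reproduced directly from the standard normal CDF, Φ(z) = (1 + erf(z/√2))/2; the short Python sketch below is added only to illustrate the mapping quoted above.

import math

def normal_cdf(z):
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

for z in range(-3, 4):
    print(f"{z:+d} sigma -> {100.0 * normal_cdf(z):.2f}th percentile")
# Output: -3 -> 0.13, -2 -> 2.28, -1 -> 15.87, 0 -> 50.00,
#         +1 -> 84.13, +2 -> 97.72, +3 -> 99.87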
Applications
When ISPs bill "burstable" internet bandwidth, the 95th or 98th percentile usually cuts off the top 5% or 2% of bandwidth peaks in each month, and the customer is then billed at the nearest rate. In this way, infrequent peaks are ignored and the customer is charged in a fairer way. The reason this statistic is so useful in measuring data throughput is that it gives a very accurate picture of the cost of the bandwidth. The 95th percentile says that 95% of the time the usage is below this amount, so for the remaining 5% of the time the usage is above it.
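The sketch below is a hypothetical example of this billing calculation (the 5-minute usage samples and their values are invented), using the usual nearest-rank 95th percentile.

import math

def ninety_fifth_percentile(samples_mbps):
    # Nearest-rank 95th percentile: sort the samples and discard the top 5%.
    ordered = sorted(samples_mbps)
    rank = math.ceil(0.95 * len(ordered))
    return ordered[rank - 1]

# One month of 5-minute usage samples (Mbit/s): mostly modest, with brief spikes.
usage = [20] * 8500 + [35] * 100 + [900] * 40
print(ninety_fifth_percentile(usage))   # 20 -- the short spikes are not billed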
Physicians will often use infant and children's weight and height to assess their growth in comparison to national averages and percentiles which are found in growth charts.
The 85th percentile speed of traffic on a road is often used as a guideline in setting speed limits and assessing whether such a limit is too high or low.
In finance, value at risk is a standard measure used to assess (in a model-dependent way) the level below which the value of a portfolio is not expected to fall within a given period of time, at a given confidence level.
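Since value at risk is essentially a percentile of the profit-and-loss distribution, a rough historical-simulation sketch looks like the following; the daily return series and portfolio value are made up for illustration, and with so few observations the estimate is simply the worst observed day.

import math

def historical_var(returns, confidence=0.95, portfolio_value=1_000_000):
    # Historical simulation: VaR is the loss at the (1 - confidence) percentile
    # of the return distribution, reported as a positive amount.
    ordered = sorted(returns)                      # worst returns first
    rank = math.ceil((1.0 - confidence) * len(ordered))
    return -ordered[max(rank, 1) - 1] * portfolio_value

daily_returns = [0.004, -0.012, 0.007, -0.031, 0.010, -0.002,
                 0.015, -0.022, 0.001, -0.009, 0.006, -0.017]
print(historical_var(daily_returns))   # 31000.0 -> 95% one-day VaR on a $1,000,000 portfolio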
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
1627) Insurance
Summary
Insurance is a means of protection from financial loss in which, in exchange for a fee, a party agrees to compensate another party in the event of a certain loss, damage, or injury. It is a form of risk management, primarily used to hedge against the risk of a contingent or uncertain loss.
An entity which provides insurance is known as an insurer, insurance company, insurance carrier, or underwriter. A person or entity who buys insurance is known as a policyholder, while a person or entity covered under the policy is called an insured. The insurance transaction involves the policyholder assuming a guaranteed, known, and relatively small loss in the form of a payment to the insurer (a premium) in exchange for the insurer's promise to compensate the insured in the event of a covered loss. The loss may or may not be financial, but it must be reducible to financial terms. Furthermore, it usually involves something in which the insured has an insurable interest established by ownership, possession, or pre-existing relationship.
The insured receives a contract, called the insurance policy, which details the conditions and circumstances under which the insurer will compensate the insured, or their designated beneficiary or assignee. The amount of money charged by the insurer to the policyholder for the coverage set forth in the insurance policy is called the premium. If the insured experiences a loss which is potentially covered by the insurance policy, the insured submits a claim to the insurer for processing by a claims adjuster. A mandatory out-of-pocket expense required by an insurance policy before an insurer will pay a claim is called a deductible (or if required by a health insurance policy, a copayment). The insurer may hedge its own risk by taking out reinsurance, whereby another insurance company agrees to carry some of the risks, especially if the primary insurer deems the risk too large for it to carry.
Details
What Is Insurance?
Most people have some kind of insurance: for their car, their house, or even their life. Yet most of us don’t stop to think too much about what insurance is or how it works.
Put simply, insurance is a contract, represented by a policy, in which a policyholder receives financial protection or reimbursement against losses from an insurance company. The company pools clients’ risks to make payments more affordable for the insured.
Insurance policies are used to hedge against the risk of financial losses, both big and small, that may result from damage to the insured or their property, or from liability for damage or injury caused to a third party.
KEY TAKEAWAYS
Insurance is a contract (policy) in which an insurer indemnifies another against losses from specific contingencies or perils.
There are many types of insurance policies. Life, health, homeowners, and auto are the most common forms of insurance.
The core components that make up most insurance policies are the deductible, policy limit, and premium.
How Insurance Works
A multitude of different types of insurance policies is available, and virtually any individual or business can find an insurance company willing to insure them—for a price. The most common types of personal insurance policies are auto, health, homeowners, and life. Most individuals in the United States have at least one of these types of insurance, and car insurance is required by law.
Businesses require special types of insurance policies that insure against specific types of risks faced by a particular business. For example, a fast-food restaurant needs a policy that covers damage or injury that occurs as a result of cooking with a deep fryer. An auto dealer is not subject to this type of risk but does require coverage for damage or injury that could occur during test drives.
To select the best policy for you or your family, it is important to pay attention to the three critical components of most insurance policies: deductible, premium, and policy limit.
There are also insurance policies available for very specific needs, such as kidnap and ransom (K&R), medical malpractice, and professional liability insurance, also known as errors and omissions insurance.
Insurance Policy Components
When choosing a policy, it is important to understand how insurance works.
A firm understanding of these concepts goes a long way in helping you choose the policy that best suits your needs. For instance, whole life insurance may or may not be the right type of life insurance for you. Three components of any type of insurance are crucial: premium, policy limit, and deductible.
Premium
A policy’s premium is its price, typically expressed as a monthly cost. The premium is determined by the insurer based on your or your business’s risk profile, which may include creditworthiness.
For example, if you own several expensive automobiles and have a history of reckless driving, you will likely pay more for an auto policy than someone with a single midrange sedan and a perfect driving record. However, different insurers may charge different premiums for similar policies. So finding the price that is right for you requires some legwork.
Policy Limit
The policy limit is the maximum amount that an insurer will pay under a policy for a covered loss. Maximums may be set per period (e.g., annual or policy term), per loss or injury, or over the life of the policy, also known as the lifetime maximum.
Typically, higher limits carry higher premiums. For a general life insurance policy, the maximum amount that the insurer will pay is referred to as the face value, which is the amount paid to a beneficiary upon the death of the insured.
Deductible
The deductible is a specific amount that the policyholder must pay out of pocket before the insurer pays a claim. Deductibles serve as deterrents to large volumes of small and insignificant claims.
Deductibles can apply per policy or per claim, depending on the insurer and the type of policy. Policies with very high deductibles are typically less expensive because the high out-of-pocket expense generally results in fewer small claims.
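As a simplified, hypothetical illustration of how these three components interact when a claim is paid (real policies add many conditions not modelled here):

def claim_payout(loss, deductible, policy_limit):
    # The policyholder pays the deductible first; the insurer pays the rest,
    # capped at the per-claim policy limit.
    insurer_pays = min(max(loss - deductible, 0.0), policy_limit)
    return insurer_pays, loss - insurer_pays

for loss in (500, 5_000, 80_000):
    paid, out_of_pocket = claim_payout(loss, deductible=1_000, policy_limit=50_000)
    print(f"loss {loss:>6}: insurer pays {paid:>7.0f}, policyholder pays {out_of_pocket:>7.0f}")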
Types of Insurance
There are many different types of insurance. Let’s look at the most important.
Health Insurance
With regard to health insurance, people who have chronic health issues or need regular medical attention should look for policies with lower deductibles. Though the annual premium is higher than a comparable policy with a higher deductible, less expensive access to medical care throughout the year may be worth the tradeoff.
Home Insurance
Homeowners insurance (also known as home insurance) protects your home and possessions against damage or theft. Virtually all mortgage companies require borrowers to have insurance coverage for the full or fair value of a property (usually the purchase price) and won’t make a loan or finance a residential real estate transaction without proof of it.
Auto Insurance
When you buy or lease a car, it’s important to protect that investment. Getting auto insurance can offer reassurance in case you’re involved in an accident or the vehicle is stolen, vandalized, or damaged by a natural disaster. Instead of paying out of pocket for auto accidents, people pay annual premiums to an auto insurance company; the company then pays all or most of the costs associated with an auto accident or other vehicle damage.
Life Insurance
Life insurance is a contract between an insurer and a policy owner. A life insurance policy guarantees that the insurer pays a sum of money to named beneficiaries when the insured dies in exchange for the premiums paid by the policyholder during their lifetime.
Travel Insurance
Travel insurance is a type of insurance that covers the costs and losses associated with traveling. It is useful protection for those traveling domestically or abroad. According to a 2021 survey by insurance company Battleface, almost half of Americans have faced fees or had to absorb the cost of losses when traveling without travel insurance.
What is insurance?
Insurance is a way to manage your risk. When you buy insurance, you purchase protection against unexpected financial losses. The insurance company pays you or someone you choose if something bad happens to you. If you have no insurance and an accident happens, you may be responsible for all related costs.
What are the four major types of insurance?
There are four types of insurance that most financial experts recommend everybody have: life, health, auto, and long-term disability.
Is insurance an asset?
Depending on the type of life insurance policy and how it is used, permanent life insurance can be considered a financial asset because of its ability to build cash value or be converted into cash. Simply put, most permanent life insurance policies have the ability to build cash value over time.
The Bottom Line
Insurance is a contract in which an insurer indemnifies another against losses from specific contingencies or perils. It helps to protect the insured person or their family against financial loss. There are many types of insurance policies. Life, health, homeowners, and auto are the most common forms of insurance.
Additional Information
Insurance is a system under which the insurer, for a consideration usually agreed upon in advance, promises to reimburse the insured or to render services to the insured in the event that certain accidental occurrences result in losses during a given period. It thus is a method of coping with risk. Its primary function is to substitute certainty for uncertainty as regards the economic cost of loss-producing events.
Insurance relies heavily on the “law of large numbers.” In large homogeneous populations it is possible to estimate the normal frequency of common events such as deaths and accidents. Losses can be predicted with reasonable accuracy, and this accuracy increases as the size of the group expands. From a theoretical standpoint, it is possible to eliminate all pure risk if an infinitely large group is selected.
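A rough simulation of this point (my own illustration; the 5% claim probability is an assumption, not a figure from the text): as the insured pool grows, the observed claim frequency settles ever closer to the underlying probability, so aggregate losses become more predictable.

import random

random.seed(1)
TRUE_CLAIM_PROB = 0.05   # assumed chance that any one policyholder files a claim

for pool_size in (100, 10_000, 1_000_000):
    claims = sum(random.random() < TRUE_CLAIM_PROB for _ in range(pool_size))
    print(f"pool of {pool_size:>9,}: observed claim rate {claims / pool_size:.4f}")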
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
1628) Horn of Africa
Summary
The Horn of Africa is a region of eastern Africa. It is the easternmost extension of African land and for the purposes of this article is defined as the region that is home to the countries of Djibouti, Eritrea, Ethiopia, and Somalia, whose cultures have been linked throughout their long history. Other definitions of the Horn of Africa are more restrictive and exclude some or all of the countries of Djibouti, Eritrea, and Ethiopia. There are also broader definitions, the most common of which include all the countries mentioned above, as well as parts or all of Kenya, Sudan, South Sudan, and Uganda. Part of the Horn of Africa region is also known as the Somali peninsula; this term is typically used when referring to lands of Somalia and eastern Ethiopia.
The Horn contains such diverse areas as the highlands of the Ethiopian Plateau, the Ogaden desert, and the Eritrean and Somalian coasts and is home to the Amhara, Tigray, Oromo, and Somali peoples, among others. Its coasts are washed by the Red Sea, the Gulf of Aden, and the Indian Ocean, and it has long been in contact with the Arabian Peninsula and southwestern Asia. Islam and Christianity are of ancient standing here, and the people speak Afro-Asiatic languages related to those of North Africa and the Middle East.
Details
The Horn of Africa (HoA), also known as the Somali Peninsula, is a large peninsula and geopolitical region in East Africa. Located on the easternmost part of the African mainland, it is the fourth largest peninsula in the world. It is composed of Ethiopia, Eritrea, Somalia and Djibouti; broader definitions also include parts or all of Kenya, Sudan, South Sudan, and Uganda. The term Greater Horn Region (GHR) can additionally include Burundi, Rwanda, and Tanzania. It lies along the southern boundary of the Red Sea and extends hundreds of kilometres into the Guardafui Channel, Gulf of Aden, and Indian Ocean and shares a maritime border with the Arabian Peninsula of Western Asia.
Description
The Horn of Africa Region consists of the internationally recognized countries of Djibouti, Eritrea, Ethiopia, and Somalia.
Geographically, the protruding shape that resembles a “horn” consists of the “Somali peninsula” and the eastern part of Ethiopia, but the region also encompasses the rest of Ethiopia, Eritrea, and Djibouti.
Sometimes the term Greater Horn of Africa is used, either to be inclusive of neighbouring northeast African countries such as Kenya, Uganda, South Sudan or to distinguish the broader geopolitical definition of the Horn of Africa from narrower peninsular definitions.
The Greater Horn of Africa consists of more than the typical four countries, also including Kenya, Uganda, Sudan, and South Sudan.
The name Horn of Africa is sometimes shortened to HoA. Quite commonly it is referred to simply as "the Horn", while inhabitants are sometimes colloquially termed Horn Africans. Regional studies on the Horn of Africa are carried out in fields such as Ethiopian studies and Somali studies. This peninsula has been known by various names. Ancient Greeks and Romans referred to it as Regio Aromatica or Regio Cinnamonifora due to the aromatic plants or as Regio Incognita owing to its uncharted territory.
Geography:
Geology and climate
The Horn of Africa is almost equidistant from the equator and the Tropic of Cancer. It consists chiefly of mountains uplifted through the formation of the Great Rift Valley, a fissure in the Earth's crust extending from Turkey to Mozambique and marking the separation of the African and Arabian tectonic plates. Mostly mountainous, the region arose through faults resulting from the Rift Valley.
Geologically, the Horn and Yemen once formed a single landmass around 18 million years ago, before the Gulf of Aden rifted and separated the Horn region from the Arabian Peninsula. The Somali Plate is bounded on the west by the East African Rift, which stretches south from the triple junction in the Afar Depression, and an undersea continuation of the rift extending southward offshore. The northern boundary is the Aden Ridge along the coast of Saudi Arabia. The eastern boundary is the Central Indian Ridge, the northern portion of which is also known as the Carlsberg Ridge. The southern boundary is the Southwest Indian Ridge.
Extensive glaciers once covered the Simien and Bale Mountains but melted at the beginning of the Holocene. The mountains descend in a huge escarpment to the Red Sea and more steadily to the Indian Ocean. Socotra is a small island in the Indian Ocean off the coast of Somalia. Its size is 3,600 sq km (1,400 sq mi) and it is a territory of Yemen.
The lowlands of the Horn are generally arid in spite of their proximity to the equator. This is because the winds of the tropical monsoons that give seasonal rains to the Sahel and the Sudan blow from the west. Consequently, they lose their moisture before reaching Djibouti and the northern part of Somalia, with the result that most of the Horn receives little rainfall during the monsoon season.
In the mountains of Ethiopia, many areas receive over 2,000 mm (79 in) per year, and even Asmara receives an average of 570 mm (22 in). This rainfall is the sole source of water for many areas outside Ethiopia, including Egypt. In the winter, the northeasterly trade winds do not provide any moisture except in mountainous areas of northern Somalia, where rainfall in late autumn can produce annual totals as high as 500 mm (20 in). On the eastern coast, a strong upwelling and the fact that the winds blow parallel to the coast means annual rainfall can be as low as 50 mm (2.0 in).
The climate in Ethiopia varies considerably between regions. It is generally hotter in the lowlands and temperate on the plateau. At Addis Ababa, which ranges in elevation from 2,200 to 2,600 m (7,218 to 8,530 ft), the maximum temperature is 26 °C (78.8 °F) and the minimum is 4 °C (39.2 °F). The weather is usually sunny and dry, but the short (belg) rains occur from February to April and the big (meher) rains from mid-June to mid-September. The Danakil Desert stretches across 100,000 km2 of arid terrain in northeast Ethiopia, southern Eritrea, and northwestern Djibouti. The area is known for its volcanoes and extreme heat, with daily temperatures over 45 °C and often surpassing 50 °C. It has a number of lakes formed by lava flows that dammed up several valleys. Among these are Lake Asale (116 m below sea level) and Lake Giuletti/Afrera (80 m below sea level), both of which lie in cryptodepressions within the Danakil Depression. The area contains many active volcanoes, including Maraho, Dabbahu, Afdera and Erta Ale.
In Somalia, there is not much seasonal variation in climate. Hot conditions prevail year-round along with periodic monsoon winds and irregular rainfall. Mean daily maximum temperatures range from 28 to 43 °C (82 to 109 °F), except at higher elevations along the eastern seaboard, where the effects of a cold offshore current can be felt. Somalia has only two permanent rivers, the Jubba and the Shabele, both of which begin in the Ethiopian Highlands.
Additional Information
The Horn of Africa is a large extension of land that protrudes from the eastern edge of the continent of Africa, lying between the Indian Ocean to the east and the Gulf of Aden to the north, jutting for hundreds of kilometers into the Arabian Sea. Overall, the Horn of Africa is estimated to consist of over 772,200 square miles, most of which boasts a semi-arid to arid climate. Despite difficult living conditions in many parts of the region, recent estimates put the population of the region at about 90.2 million.
In a more general way, the term "Horn of Africa" is also used to define a political region that consists of Djibouti, Ethiopia, Eritrea, and Somalia. Some definitions also include the states of Kenya, Sudan, and Tanzania. The Horn of Africa is considered a subregion of the larger region known as East Africa, and is sometimes referred to as the Somali Peninsula.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.